doi | transcript | abstract
---|---|---
10.5446/57261 (DOI)
|
I see you in the channel. Hello everybody. Many apologies, I wasn't aware that I wasn't supposed to stop the stream. Anyway, we have Marc Böhlen speaking next on curating machine learning datasets in international collaborations, a case study on the island of Bali. So Marc, I will let you take it away. Thank you, Rob, and thanks for this opportunity. This project is a collaboration between the University at Buffalo and the Indonesian Institute of Sciences, specifically PhD student Jianqiao Liu, Rajif Iryadi, and myself. My talk has a few parts. I'm going to give selective overviews of the problem, then discuss the particular approaches we take to solve it, and then try to contextualize why the contribution is significant for this community and for other communities, for example social studies of science. We have some prior, fairly recent related work that deals with applying machine learning techniques to fields of significance in Southeast Asia that have resisted machine learning to some degree in the past, specifically ethnobotany as a field of applied botany, as well as the specific human uses of plants in context. So why Bali? There are a number of reasons why we are interested in it as a study site. One is the lush forests of Bali that are well known from films and travelogues. Bali is of course a vacation destination, but it is an interesting place where two worlds meet, a first world, if we want, and an emerging economy, and that condition has its own unique impact on attempts to manage the forests and care for the forests. Bali has been subject in the past to more summary attempts to classify and work with its forest assets; I'm showing you here a slide from Global Forest Watch. Bali is very much on the radar and has invested in GIS surveys to understand its land use, specifically in the city of Denpasar, where recently, in 2017, a large survey over more than 500 square kilometers was carried out based on satellite data from Sentinel-2 and Landsat 7. That survey focused on the urban fabric. Our interest now is to move a level up in complexity in the sensor suite, the AI tools, and the topics we want to look at. We want to look at multiple tropical forest categories and try to see if we can understand instances of contested land use; I'll get into that in a moment. We were lucky to get a grant from Planet Labs and have now mapped a complete section of the island from north to south in great detail and identified a study site, the dark square in the middle around the Bali Botanical Garden. That is what the study site looks like with the Planet Labs data, and the green is not some Photoshop effect; these are just RGB images. It is simply lush. The lushness of this area is surprising, and at the same time a dense urban or town-like fabric is interwoven with the lushness of the tropical forest, which has continuous growing seasons. It is a complex environment. So we start basically by coming to terms with the differences between land cover categories and land use categories in the US, say the Western context, and in the Indonesian context. It was an opportunity to think about the differences between the concepts of land use and land cover: cover is the materials on the surface, and use is what humans do with it.
But these are not necessarily so separate, and it depends a lot on what you want to do. If your goal is mining, you will come up with different categories than if you want to, for example, think about new beaches to establish. So there is a cultural or intentional background to this categorization that underlies a lot of the basis of building these frameworks. That is one thing to think about. But at the same time it is also a challenge, because it becomes really acute when we go and look at the categories primary forest and secondary forest that are used in Indonesia and try to understand how they can actually be translated into, say, a GIS workflow at the level of precision that we now have available through Planet Labs. The next slides show a basic series of diagrams of how we proceed; they are more conceptual. We start with a cross section of the entire island, as I mentioned. These are 28 tiles collected in the summer of 2020, and they create a reference dataset for us. As I mentioned before, we are primarily interested in the natural environment, the different categories of natural forest and the human interactions in those forests, and how we can identify and map them. On a second pass, and this is maybe of particular interest, we add local expertise. The team is US-based and in Indonesia, and we cannot travel. So we work with two types of experts: GIS experts from the Indonesian Institute of Sciences, that is Rajif, and local informants on the ground. We send people out to survey certain sites that we cannot make sense of in our satellite imagery. And then, we have a Planet Labs grant, but it has a timer on it; we do not have eternal access to this data. So what we are doing now is setting up a system, a software solution, by which we can train classifiers on our high-resolution data and afterwards make them amenable to test data from the Sentinel side. That is a kind of tip-and-cue approach, combining complementary sensor systems: we use the higher-resolution imagery to create a base map and then compare results selectively with data from lower-resolution but freely available sources. That is what this diagram should show. And how do you do that? It gets quite involved and we do not have time for it here, but basically the trick is a segmented pre-training procedure where you fine-tune first on the high-resolution data and then fine-tune in a second step on downsampled high-resolution data that mimics the low-resolution data. That is tricky to set up exactly, but then you can pipe in your lower-resolution data. It has been done in the past, we are not the first ones to experiment with this, but to our knowledge it has not been applied to satellite imagery in this way. I mentioned we have informants on the ground who go and check things out for us. Why do we do that? We are working remotely and, like everyone else, we make ample use of Google Maps, which has its great advantages but also serious disadvantages here. There is a rental car site in the middle of an agricultural field; that is an easy one, but other sites are harder to figure out. So we rely on a local informant, whom I will introduce in a moment.
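Before introducing him, a brief technical aside on the cross-sensor training step described above. The second fine-tuning stage needs high-resolution chips degraded to roughly Sentinel-2 resolution. The following is a minimal sketch of that resampling step, not the project's actual code, assuming Planet imagery at about 3 m and Sentinel-2 at about 10 m; the file names are hypothetical.

```python
import rasterio
from rasterio.enums import Resampling

PLANET_RES = 3.0      # metres per pixel (approximate, assumed)
SENTINEL_RES = 10.0   # metres per pixel for Sentinel-2 visible bands
factor = SENTINEL_RES / PLANET_RES

with rasterio.open("planet_scene.tif") as src:  # hypothetical input
    # Average-resample to mimic the coarser Sentinel-2 ground sampling distance.
    out_h = int(src.height / factor)
    out_w = int(src.width / factor)
    data = src.read(
        out_shape=(src.count, out_h, out_w),
        resampling=Resampling.average,
    )
    # Update the transform so the georeferencing still matches the new pixel grid.
    transform = src.transform * src.transform.scale(
        src.width / out_w, src.height / out_h
    )
    profile = src.profile.copy()
    profile.update(height=out_h, width=out_w, transform=transform)

with rasterio.open("planet_downsampled_to_s2.tif", "w", **profile) as dst:
    dst.write(data)
```

In the two-stage scheme described in the talk, chips like these would feed the second fine-tuning pass before real Sentinel-2 tiles are piped in.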
This is Gussi with his mom and aunt. He lives in Bukian village in central Bali and has been active in the tourist industry. The tourist industry has collapsed, so we are basically giving him a new job in this project. He is remunerated for his activities, he is not volunteering; this is an actual small job. We worked with him for the ethnobotany project, and now he is helping us here. So we send Gussi to different places. For example, here is a site where we could not figure out what the satellite was telling us and what the maps were suggesting. Gussi goes out and finds that this particular site is actually in the early stages of becoming tomato fields, and of course it looks nothing like that in the imagery. We do not know when Google Maps assembles its data, so there is this issue of going back, comparing with the satellite, and thinking about when the image was actually taken. And then you have real-time data on the ground from a human being who is knowledgeable in agricultural practices. Here is another site we sent Gussi to; this is how we do it to make sure he goes to the right spot, and this is what he sends us. This is a strawberry field, which is interesting for a number of reasons, because Gussi does not eat strawberries, and actually Rajif does not eat strawberries either. Strawberries are a product made for export or for rich tourists who come by and spend their money lavishly in Bali. So in the process of verifying what we are actually seeing, we are getting background information about the social politics of how land use and practices occur in Bali that is very difficult to establish just by looking at satellite imagery. Another case of social conflict, or potential for conflict, that we encountered is this site, to the east of the little encampment between the lakes. At first we could not quite figure it out; it turns out this is a golf course. So we sent Gussi to have a look at the golf course, and he was not allowed to enter; he could not make his verification videos for us because he was not a hotel guest. So there are all these, let's just say, hierarchies that the landscape, the lived landscape, contains, which we, as I say, wander over in our very efficient GIS practices. When we go down onto the land, we bring them up and give them agency, and to some degree they also matter for the very practical task of land use classification; I will get to that in a moment. And these little white spots are not canopies; these are the sand pits, of course. The next part of our integration of external local knowledge is with the GIS expert in Indonesia, with Rajif. We work together in QGIS to look at our data and try to figure out the categories, and this is really tricky. I am going to show you a few seconds of how Rajif, who has years of experience with this and consults with local forest managers to really understand the different types of forest he has, goes about it. OK, so this goes on for a long time. These are night sessions, seven in the morning in Bali and eight o'clock at night for us, when we work together. It is interesting because he sees things in these datasets that we cannot see, so we have to learn to watch and look the way he does. Then there is a back and forth, and of these three steps here, the most important one is the repeat.
So this is a learning process where we are all teaching each other how to do this in order to get the best possible results. Now I want to get to some of the differences between these two datasets that we have produced and motivate why we want to combine them the way we do. First, maybe obviously, the Planet Labs data is almost a factor of three higher in resolution than the Sentinel data, as you can see in this example. If we then apply a simple classifier, a maximum likelihood classifier, to our first study site on the Planet Labs set and on the Sentinel set, we see, using just five categories, the refinement we can get with Planet Labs. So we want to keep that, and yet we will not have that access in the future, so making the link between those two worlds is really important. Also because we are working in a resource-constrained environment: the Indonesian Institute does not have the kind of money to buy the Planet Labs data, so it is important for us to be able to establish this link. Our next experiment was to expand from five classes to fifteen. At first we were very excited because it seemed to work quite easily, already with the Balinese, Indonesian categories, after working very diligently one night, until we started to look at the results. It turns out that the forest areas in particular, the primary and the secondary forest, are the problem. The primary forest is the untouched area that has not been disturbed; the secondary forest is a much more complex area with multiple forms of disturbance, regrowth and use. It is a kind of container category and much harder to be specific about. And here is one simple problem that we still have not solved: the shadows from the hills are misclassified as an alternate category. Now let me see how much time I have. Rob, how much time do I have left? Okay, I can't hear him, I'll just keep going. I mentioned that we want to apply our studies to a sociocultural context in addition to land use, to land conflicts. We have an example here in an area that has competing claims. This particular area, just to the west of our study site, outlined on the left side with the blue lines, is claimed by the indigenous people of Tamblingan, and there is an overlap with an area, shown in pink, that is a nature reserve managed by Indonesian authorities. What is interesting here is not only that there is an overlapping conflict over whose land this is, but that the two parties actually share, to some degree, what they want to do with it. These are both entities that want to protect the land, but they have very different understandings of what that means: who has access rights, what ancestral rights would be valid, what the land use specifics would be, what kinds of plants you could grow to replace secondary forest in the third iteration, and so on. So it is a complex condition that is currently being hashed out in the courts. Our notion is that the services we are trying to produce here should be amenable to these environments, to people who would not otherwise have access to this kind of information, not necessarily GIS experts, but people who can profit from the insights that you can deliver. So what kind of insights are we talking about? Here is an example, and this is very preliminary, so it is not final in any way.
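A brief aside on the "simple classifier" mentioned above. The following is a minimal sketch, under stated assumptions rather than the project's actual workflow, of a Gaussian maximum likelihood classifier of the kind commonly used for land cover: per-class means and covariances are estimated from labelled training pixels, and each pixel is assigned to the class with the highest log-likelihood. The arrays and class counts here are synthetic and purely illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_ml_classifier(X_train, y_train):
    """Estimate a mean vector and covariance matrix per class.
    X_train: (n_pixels, n_bands) training spectra; y_train: integer class ids."""
    params = {}
    for cls in np.unique(y_train):
        samples = X_train[y_train == cls]
        params[cls] = (samples.mean(axis=0), np.cov(samples, rowvar=False))
    return params

def classify(X, params):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    classes = sorted(params)
    scores = np.column_stack([
        multivariate_normal.logpdf(X, mean=m, cov=c, allow_singular=True)
        for m, c in (params[cls] for cls in classes)
    ])
    return np.array(classes)[np.argmax(scores, axis=1)]

# Synthetic example: 5 classes, 4 spectral bands, 100 training pixels per class.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4)) + np.repeat(np.arange(5), 100)[:, None]
y_train = np.repeat(np.arange(5), 100)
params = fit_ml_classifier(X_train, y_train)
print(classify(X_train[:10], params))
```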
In June 2020 we started to look at this area here, in the contested area up here claimed by the people of Tamblingan. If you look here, you see that there is actually quite a reduced amount of secondary forest, shown in light yellow, based on our current classification abilities with these five classes. If we go one step further and compare that with data from four or five years ago, we really do see that something was going on in this top left corner, and it turns out that the differences there are in fact much smaller than in the other areas. So this particular area experienced some change in activities that we do not fully understand. Rajif attributes it to multiple factors: there is a change in human activity, so deforestation to some degree, but there are also natural events, water runoff and erosion effects, that combine to create a signature that gets lumped together. We cannot distinguish that with our current methods; we would need additional information to parse it out. But the point is that we now have a tool, an opportunity, to go back in history a bit and possibly offer some insights to these two parties that might in the end help them resolve their differences, and, in the end, lead to more robust care practices for the forest. I want to thank our sponsors for this project, Microsoft Research and Planet Labs. If you have any questions or want to join the group, we are actively looking for people to help, or if you have a different context where you think this approach might be interesting, please let us know. Contact me anytime and we will chat and take it from there. Thank you for your time. Ready for questions. Awesome. Thanks, Marc.
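A short technical note on the multi-year comparison described in this talk: it boils down to differencing two classified rasters. The following is a minimal sketch, assuming two co-registered label arrays from, say, 2016 and 2020; the class list and the random data are hypothetical placeholders.

```python
import numpy as np

CLASSES = ["primary forest", "secondary forest", "agriculture", "built-up", "water"]

def transition_matrix(labels_old, labels_new, n_classes):
    """Count pixels moving from class i (rows) to class j (columns)."""
    assert labels_old.shape == labels_new.shape
    idx = labels_old.ravel() * n_classes + labels_new.ravel()
    counts = np.bincount(idx, minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes)

# Hypothetical 2016 and 2020 classifications of the same area.
rng = np.random.default_rng(1)
y2016 = rng.integers(0, 5, size=(200, 200))
y2020 = rng.integers(0, 5, size=(200, 200))

m = transition_matrix(y2016, y2020, len(CLASSES))
lost_secondary = m[1, :].sum() - m[1, 1]
print("pixels leaving 'secondary forest':", lost_secondary)
```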
|
Curating machine learning datasets in international collaborations – case study on the Island of Bali. State-of-the-art environmental datasets often combine satellite-based remote sensing information with data collected by humans in the field. This poses unique challenges to data collection and curation, especially if these materials are to be made amenable to machine learning processes. And the task becomes more challenging in international collaborations across language differences, cultural barriers and economic gradients. This talk will present an overview of ongoing work situated on the Island of Bali that seeks to build a machine learning compatible dataset on ethnobotany collected on the ground in combination with land use data collected via satellites. This project is a collaborative effort between scholars from the US and Indonesia, as well as data collectors on the Island of Bali. The goal of the project is to make use of the synergies between remote sensing data and field data to better understand how local communities are in fact using their lands, and how tourism is impacting already limited resources on the island. Authors and Affiliations – Marc Böhlen, University at Buffalo; Jianqiao Liu, University at Buffalo; Wawan Sujarwo, Indonesian Institute of Sciences; Rajif Iryadi, Indonesian Institute of Sciences Track – Use cases & applications Topic – Data collection, data sharing, data science, open data, big data, data exploitation platforms Level – 1 - Beginners. No specific prior knowledge is needed. Language of the Presentation – English
|
10.5446/57262 (DOI)
|
Good morning, good afternoon, good evening to all of the people who have logged on to this session. Today we are going to have a couple of great presentations in this room. First I'm going to introduce Mr. Taro Ubukawa and Mr. Hidenori Fujimura. Taro has been a senior geospatial expert at the United Nations Geospatial Information Section since 2019 and has been working on vector tile deployment in the UN, and this is his first FOSS4G meeting, welcome. Hidenori is from the Geospatial Information Authority of Japan, the national mapping agency of Japan, and he has been working in the field of web mapping and he loves vector tiles, so welcome to both of you. Nice to see you here. I'm just removing myself from the session and the floor is yours, thank you. Thank you for the introduction, let me try to share my screen. Okay, can you see my slide? Yes, we can see it I think. Then let's start. Thank you for your kind introduction. Good morning, good afternoon and good evening to you all. I'm Taro Ubukawa from the UN Geospatial Information Section and I'm very happy to talk about our vector tile implementation effort. The title of my talk is Deployment of Open Source Vector Tile Technology with the UN Vector Tile Toolkit. You can see the list of authors; they are from the United Nations and GSI Japan, they are my great colleagues and I'm really happy to work with them. Today I will present the first part, then Hidenori will present the latter half. Okay, then let's get started. First, as an introduction, let me introduce my key idea: geospatial information for a better world with an open source GIS bundle. I started working for the United Nations in 2019. We now have the geospatial strategy for the United Nations, which emphasizes effective, efficient and universal use of geospatial information in support of UN activities for a better world. In addition, I participate in the UN Open GIS Initiative, which aims to identify and develop an open source GIS bundle for UN operations. The UN Vector Tile Toolkit is a part of this initiative, and these two are important background for our effort, so let me introduce them first. Okay, then about UNVT, the UN Vector Tile Toolkit. As we can see on various occasions now, vector tiles are a powerful method to efficiently deliver our map products, so we are working on them. UNVT is a collection of open source software to produce, host, style and optimize vector tiles for web mapping. Mr. Hidenori Fujimura started it in 2018 and we have a lot of UNVT partners, not only from the UN. Its main goal is to facilitate the production of vector tiles by public organizations for their base map dissemination. The toolkit is not a single piece of software; it is more an assembly of scripts, methods, know-how and other material. Okay, then here comes the main topic of my talk, the UNVT deployment effort in the United Nations. Our colleagues at the UN Global Service Centre in Brindisi, Italy, provide geospatial information and services to UN colleagues, including many colleagues in field missions. They use a base map made from both OpenStreetMap and UN internal data. We believe we can support their work and contribute a lot, so our project aims to deploy an open source vector tile method in their geospatial operation. As I said, the target of this project is to deliver our base map using vector tiles. Both the OSM data and the UN source data are already stored in a PostGIS database at the UN Global Service Centre, so our target is to efficiently develop vector tiles and keep them always updated.
There are a number of UN Mappers who kindly edit OpenStreetMap in UN mission areas every day, so for us it is important to achieve frequent updates of the vector tiles. We found that the combination of Node.js and existing open source tools was crucial. For easy understanding, first I want to share what we have made so far within our project: number one, vector tiles of the whole globe; number two, web map styles; and number three, vector tile servers for internal use. The server is still in the development environment and we need to work more on the authentication process. I will share our experience in the following slides. Okay, first the vector tile production. We are using open source tools; the main tool is Tippecanoe, a great vector tile conversion tool by Mapbox, and we developed some Node.js scripts to realize vector tile production and updates. From the source data, a GeoJSON sequence is forwarded directly to Tippecanoe; we do not use any intermediate files, and we get vector tiles in MBTiles format. Our conversion is done area by area, and we use SQL queries to let PostGIS return the data within the targeted area. We use Node.js to edit the attributes, zoom levels and layer names of each record before they go to Tippecanoe. We use more than 800 spatial modules for data production, and the conversion tasks are run regularly as scheduled tasks on our conversion server. Our method is open and you can see it in our GitHub repository; the address is written here, so please feel free to have a look. Regarding the product, we have 841 MBTiles covering the whole globe, about 150 gigabytes in total. If we converted the full globe at once, it would take 35 straight hours, but considering priority areas, we update some parts of the world daily and other regions weekly. This strategy also helps us reduce the burden on the source server, so I think this is a good approach. Next, about styling. Many map libraries use a style file based on the Mapbox style specification, so we have prepared style files based on that specification. At first we used Maputnik, which is a great tool for easy styling, but as the number of layers increased, we now use HOCON, Human-Optimized Config Object Notation, to edit the style efficiently. A style file can contain tens of thousands of lines, and sometimes it is hard even to confirm the list of style layers without HOCON. So we prepare a number of configuration files, and then we can just merge them to create a single style JSON. With HOCON, all configuration files can be edited with a text editor and understood by humans; very easy. Some tips from our experience of styling: because our users may use our vector tiles in various applications, we have applied our style in several map libraries and tools, both open source and proprietary. I am happy to see that various tools now support vector tile consumption. However, we found it necessary to customize the style for each tool or library, because each of them understands the style information in slightly different ways; we saw this with advanced expressions, color descriptions, font references, etc. We do not have a lot of time today, but please contact us if you are interested in the details. Okay, then the hosting phase. We also developed a simple vector tile hosting service. If you have the vector tiles in pbf format, I think static hosting is good enough, and we indeed sometimes use GitHub Pages for other projects.
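To make the production flow described above concrete, streaming a GeoJSON sequence from PostGIS straight into Tippecanoe with no intermediate files, here is a minimal illustration. It is not the UNVT scripts themselves (those are Node.js and published in the project's GitHub repositories); the connection string, table, layer name and bounding box are hypothetical, and it assumes Tippecanoe is installed locally.

```python
import json
import subprocess
import psycopg2

# Hypothetical connection and area of interest (one "module" of the globe).
DSN = "dbname=un_basemap user=gis"
BBOX = (115.0, -8.6, 115.4, -8.1)  # xmin, ymin, xmax, ymax

SQL = """
    SELECT ST_AsGeoJSON(geom) AS geometry, name, class
    FROM osm_roads
    WHERE geom && ST_MakeEnvelope(%s, %s, %s, %s, 4326)
"""

# Tippecanoe reads newline-delimited GeoJSON features from stdin.
tippecanoe = subprocess.Popen(
    ["tippecanoe", "-o", "roads.mbtiles", "-l", "roads", "-zg", "--force"],
    stdin=subprocess.PIPE,
)

with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
    cur.execute(SQL, BBOX)
    for geometry, name, road_class in cur:
        feature = {
            "type": "Feature",
            "geometry": json.loads(geometry),
            # Attributes can be renamed or filtered here before tiling.
            "properties": {"name": name, "class": road_class},
        }
        tippecanoe.stdin.write((json.dumps(feature) + "\n").encode())

tippecanoe.stdin.close()
tippecanoe.wait()
```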
For this deployment, though, we are using the MBTiles format to efficiently manage vector tiles of huge sizes, and we established a simple hosting server using Node.js Express. The hosting server delivers pbf files derived from the MBTiles. We also store the style, map symbols, text fonts, and the other resources needed to draw our web maps. We still have the server in the development environment, but for the future we need to add authentication, and we are also working to add Azure AD authentication to our simple server. These efforts are also uploaded to our GitHub repository. Next, a slightly different topic: let me talk about knowledge sharing. In order to share our vector tile techniques with UN colleagues and colleagues outside the UN, we held a series of workshops and exercises. Many of our colleagues use GIS, but some of them are not so familiar with vector tiles, so some of our exercise materials were for beginners, with a lot of explanation. The slides and materials are released on the internet as much as possible so that our colleagues can learn about our vector tile activities and techniques. And a small announcement: on October 22nd we will have a workshop on UNVT storytelling. Anyone can be invited, so if you are interested in joining, please contact me. Okay, that is all from me, but before I hand over to Hidenori, let me summarize my part. First, I have shared our experience of deploying the UN Vector Tile Toolkit for a vector tile development project. Second, I feel that starting to host vector tiles is not that difficult thanks to our open source tools; they are great. And third, if you are interested in our project, join us in our activities. Thank you for your attention. Now I'm very happy to invite Hidenori to show the latest UN Vector Tile Toolkit examples in other projects. Hidenori, over to you. Thank you. Thank you so much. Hello, and thank you for the session. Switching from New York to Tokyo, I'd like to introduce some expansion of the UNVT project in Japan. We are working on several sub-projects based on the UN Vector Tile Toolkit, and I would like to introduce three examples: the adopt-geodata project, Earth observation data, and tile services. First of all, the adopt-geodata project. This project is about adopting existing open geospatial data and converting it into vector tiles, because there is a lot of very useful geospatial data that is not yet in vector tile form. I actually have 130 repositories; most of them are really experimental, but I am converting a lot of open geospatial data into vector tiles. Taro, could you move on to the next slide? I would like to introduce two examples of this adopt-geodata project. The first one is the adoption of geospatial data from my own organization, the Geospatial Information Authority of Japan. We provide topographic map data and also landform classification data, but it is still provided as experimental vector tiles or GeoJSON tiles, so I am using the UN Vector Tile Toolkit to make it technically better. I created this example, which covers Shibuya in Tokyo and shows the landform classification of the area together with 3D building data. Next slide, please. Another example is the adoption of point cloud data. Shizuoka Prefecture in Japan kindly provides all of its point cloud data to the public as open data.
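For readers who want a feel for what the simple hosting server described earlier in this talk does, here is a minimal sketch of serving pbf tiles out of an MBTiles file. It is illustrative only: the real UNVT server is written with Node.js Express, whereas this sketch uses Python and Flask, and the file name and port are hypothetical. It relies on the standard MBTiles layout, a SQLite `tiles` table keyed by zoom, column and TMS row, holding gzip-compressed protobuf blobs.

```python
import sqlite3
from flask import Flask, Response, abort

MBTILES = "planet.mbtiles"  # hypothetical file produced by Tippecanoe
app = Flask(__name__)

@app.route("/tiles/<int:z>/<int:x>/<int:y>.pbf")
def tile(z, x, y):
    # MBTiles stores rows in TMS order, so flip the XYZ y coordinate.
    tms_y = (2 ** z - 1) - y
    con = sqlite3.connect(MBTILES)
    try:
        row = con.execute(
            "SELECT tile_data FROM tiles "
            "WHERE zoom_level=? AND tile_column=? AND tile_row=?",
            (z, x, tms_y),
        ).fetchone()
    finally:
        con.close()
    if row is None:
        abort(404)
    return Response(
        row[0],
        mimetype="application/x-protobuf",
        # Tippecanoe gzips the tile blobs, so pass that encoding through.
        headers={"Content-Encoding": "gzip"},
    )

if __name__ == "__main__":
    app.run(port=8000)
```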
And I am trying to convert this point cloud data into vector tiles in the form of voxel data, which is actually square polygon data but with height information, and I visualize it in a kind of voxel form. This is a really new project, but I am really pleased that we can see the power lines in this voxel data. I think this can be useful if someone is trying to fly a UAV or drone in this area, and I hope we have more opportunities to use point cloud data for more real-time purposes. Next slide, please. Then I would like to introduce another kind of project; I call it EO. This is a kind of subclass of adopt-geodata, but here we are trying to convert Earth observation data into vector tiles and use it together with other vector tile data. I created this project because I think Earth observation data is a really hot topic and offers a lot of opportunity when used together with vector tile data, especially map data. Next slide, please. Okay, this is an example with data from JAXA, the Japanese space agency. They provide high-resolution land use and land cover data, and naturally I combine it with topography data. I think this can provide a new perspective on the land cover of an area, because the combination of topographic data and remote sensing data is more exciting to see. Okay, next, please. And the last one: I would like to introduce the tile service, because our experience is that you naturally need a good computing environment if you want to produce, host or optimize vector tiles. So we are running a small project to work on the tile service aspect, like in other commercial products. Next slide, please. Because our project is an open source project led by public organizations, not private companies, we would like to cover the service perspective in a more free form, free as in freedom, and we would like to cover two aspects, not only tiling but also hosting, and make this available to everyone who would like to have that kind of tile service. The next slide shows how we can implement that kind of service. I use a Raspberry Pi to host vector tiles; the one on the left is something I use in my work-from-home environment, and we use it in our organization too. The trick is that I can connect a small hard disk, and we can host our vector tiles on a small computer. Next slide, please. This is how it works, but I can skip to the next one. Can you go back? The important thing is that I am working with new partners: one is the wintering party of the 63rd Japanese Antarctic Research Expedition, because internet service in Antarctica is not so good, and the second partner is the Furuhashi Laboratory at Aoyama Gakuin University. I'm really excited to work with our new partners. Next slide, please. And we are working on GitHub, so please come to the GitHub page if you are interested in our project. Thank you so much. Okay, thank you. That's all from our presentation, so we can go to the question and answer if there is anything. Thank you. Yes, thank you. Arigato gozaimasu, thank you for the presentation. So, there's a question in the chat. I'm going to ask it, and please, audience, if you have any additional questions, just write them so I can read them to our presenters here.
So, is there a single map style, or can the map style be edited by third parties? That was the question in the chat. You touched on it a little bit, but maybe more details: can we have our own custom map style if we use the toolkit as well? Yes, then let me go first. From my experience working in the UN, there is no single style; you can freely design your map style. And as I said, we already deliver the vector tiles, so you can freely add your own style. Using HOCON, you can easily edit your style with text files, then compile a new style, and if you refer to that style, you will see a new map. That's my experience. Hidenori, do you have anything to add? Thank you so much. Yes, because we are working on base maps, I think we are using more complex styles than thematic data. We have a lot of colleagues and friends working on the style issues. We are using HOCON, but some of our colleagues are trying to use YAML so that we can describe our style in a simpler way. So I think we can introduce some examples of how we manage the style, and of course we welcome new ways to do better styling. Thank you so much. Okay, thank you. Lorenzo Stupke says thank you and that it's a very interesting use of a Raspberry Pi. I'm also quite interested in the idea, and if I have a chance, I want to try it out. But I would kindly ask you to share the links, if you can, through Venueless so the audience is able to reach them. Yes, I would like to share the URL. Thank you so much. Thank you. I think we don't have any more questions from the audience, so if you don't have any additional comments, we can finish up. Let me just add something. We had some questions about styling, and if you'd like to learn more about styling, or if you'd like to make a style file by yourself, please visit our materials. Don't be afraid if you are a beginner, it's okay, because our materials are for beginners; you can learn styling together with us. Just try the exercise materials, and if you have questions, contact me or contact Hidenori. That's all. Thank you. Okay. Thank you, Taro. Thank you, Hidenori, for the presentation. For the audience, feel free to get in touch with them, find them in Venueless and have a chat. That's it. If you have any further questions, reach out to them. So thanks a lot for the presentation. We will continue with the next presentation in five minutes. Thank you. Thank you. Bye. Bye. Have a nice day.
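The styling workflow discussed in this Q&A, keeping one small human-editable fragment per style layer and compiling them into a single style JSON, can be illustrated with a minimal sketch. The real toolkit uses HOCON (and some colleagues use YAML); this sketch uses plain JSON fragments purely for brevity, and all file names, layer names and the tile URL are hypothetical.

```python
import json
from pathlib import Path

# Hypothetical per-layer fragments, e.g. layers/water.json, layers/roads.json,
# each containing one style-layer definition.
fragments = sorted(Path("layers").glob("*.json"))

style = {
    "version": 8,
    "sources": {
        "basemap": {
            "type": "vector",
            "tiles": ["https://example.org/tiles/{z}/{x}/{y}.pbf"],
        }
    },
    # Merge the fragments in file order into the final layer list.
    "layers": [json.loads(f.read_text()) for f in fragments],
}

Path("style.json").write_text(json.dumps(style, indent=2))
print(f"compiled {len(fragments)} layer fragments into style.json")
```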
|
The UN Vector Tile Toolkit (UNVT) project started in 2018 and has been developed as part of the UN Open GIS Initiative, which aims to develop an open source GIS bundle that meets the requirements of UN operations. The toolkit includes a set of Node.js open source scripts to be used with existing and proven open source vector tile software (such as Tippecanoe, Maputnik, Mapbox GL JS (ver. 1.x) and vt-optimizer). This talk will introduce an example of UNVT deployment at the UN and other examples, including vector map delivery using a Raspberry Pi. After development of the basic toolsets by early 2020, we started developing an open source vector tile web map service in the UN. At each phase of the vector tile development (i.e. data conversion, styling, hosting and optimizing), UNVT was utilized to carry out the process efficiently. At the first phase, the production phase, we converted vector tiles of the whole world and updated them weekly with the developed Node.js scripts and Tippecanoe. The source data is stored in a PostGIS database and extracted tile by tile due to its large data size. At the following styling phase, in order to efficiently develop a style file, a HOCON file was prepared for each style layer, then compiled into a single style JSON. At the hosting phase, we developed a simple Node.js-based vector tile hosting server which delivers the pbf files derived from MBTiles upon each request. Recently, UNVT has been used even outside the United Nations. This talk will briefly introduce such examples as much as possible. The UNVT was first introduced at FOSS4G 2019. Our toolkit is released from the following GitHub accounts: https://github.com/un-vector-tile-toolkit https://github.com/unvt Authors and Affiliations – Taro Ubukawa (1), Diego Gonzalez Ferreiro (2), Paolo Frizzera (2), Oliva Martin Sanchez (2), Hidenori Fujimura (3) (1) Geospatial Information Section, Office of Information and Communications Technology, United Nations (2) Service for Geospatial, Information and Telecommunications Technologies, United Nations Global Service Centre, UN Department of Operational Support (3) Geospatial Information Authority of Japan Track – Use cases & applications Topic – Data visualization: spatial analysis, manipulation and visualization Level – 3 - Medium. Advanced knowledge is recommended. Language of the Presentation – English
|
10.5446/57263 (DOI)
|
I'll hand over to Enrique and we'll look forward to hearing his talk over the next 20 minutes or so. Okay, thank you very much, Michael, for the introduction, and I'll start with the presentation. As Michael commented, I came to GIS by accident, and we are trying to join both worlds. Indeed, in my company we have separate teams working on government applications and on GIS, and now we are trying to bring everything together, and we work worldwide. So let's go to the presentation itself; this is a project that we have developed. Sorry, it's okay, I think it works now. Let me give a little recap. This is an application made to produce maps in a format that you can handle and print, to get a PDF, which in a sense may seem like a step back, but I will try to convince you and share our perspective on why it has been made, because it has been a win-win approach. At the very beginning, to give a rough evolution of spatial data tools, you had of course the paper maps; they were drawn and produced with drawing applications, Photoshop and the like. Then databases, spatial databases, came along; we went to web mapping; and then the OGC standards and services appeared, and that was a game changer, because once there were standards we could all talk the same language. So there was a boost in the use of GIS, I think we all agree, and it spread across organizations and was used by everyone. It has continued evolving with styling, 3D, mobile apps, all the things that we already have and use. None of it is exclusive, everything is useful, and that is the evolution of the GIS world. So what do we have now? We have the technology, we have the means, and we have tons of data that we have to represent, turn into graphics, analyze in 3D, and so on. We have very useful maps, we have cool maps drawn in very different styles; with vector styles you can do really marvelous things with labeling and with layering things on top of each other. We have maps on demand, we have everything. But the question is: what if we could now use all these technologies to provide users, citizens, people with traditional maps, understood as a map sheet that you open and can use physically when you go out into the field or anywhere? Not only can you have it on your mobile, where you can zoom in and out, but people who like to go into the field will know what we are referring to. So what if we could take advantage of existing technology, cloud-optimized GeoTIFFs, to provide a configurable experience where you could choose a zone you are going to for a weekend, or where you are joining friends for some trails, or make a map to give as a gift, because a map is sometimes a gift? And use the official data that the Spanish mapping agency offers, which provides very detailed information, in many cases information that is not readily available through the usual channels for the general public; I mean, it is not in Google. If you go out into the field, you know what happens. And you can also upload your own data. So what if we used these things together to make a very simple application that provides real added value for certain very specific use cases?
So what if we could close the circle? We started at the very beginning with printed maps, as usual, but using all the evolution of GIS we could configure a simple application that you can use to produce your own map. You could be provided with customized maps in digital format and also physically, a folded paper map with detailed cartography that you could use as a professional or, as I commented before, for an event, a gift, a trip, a weekend, whatever you want to use it for, with very detailed cartography and official data from the Spanish mapping agency. So we created a simple application to put all the pieces together; we had all the pieces at the Spanish mapping agency, we had the data, and we needed to combine them in a single application. Here we can expand a little, because I have gone into detail on the business case, but the technological part has some nice pieces. There are the cloud-optimized GeoTIFFs, which you can produce using GDAL and then pass to MapFish as the renderer that generates the PDF (a sketch of the GDAL step appears after the Q&A below). You also have background services served over WMTS and WFS, and you can upload your own data in shapefile or KML format to draw on the map. Finally you put all these pieces together with a nice, easy-to-use front end; remember that we are trying to evolve the production of paper maps a little, so it must not be difficult for the user. So we have made a simple application where you set up your map, you get the result, and you can order it; finally you get the map. All these pieces have been put together in a React application, using a library called API-IGN on top of OpenLayers to manage the map production. You can configure your map, choose your area of interest and the scale you want to use, personalize it and add your own data: you can put in your tracks, your points of interest, whatever you have in a shapefile. If you are planning a trip and want to include several roads or the trail you are going to follow, you can put it there and have it on paper. Then you choose the format, and we will see that in the live demo. Once you have everything together, you just click next, next, next, you will see it is very simple, and then you download it as a PDF and distribute it as you wish, or you can order a printed copy: folded, rolled, waterproof, whatever. So let's see the application. As I commented before, the main purpose of this is not actually to sell maps, but to offer the possibility of taking that step back. You have all the information there, you have all the official data, nice maps, you have the chance to draw your own trails, and if you want this, let's say, professional format and it is useful to you, you can produce it, download the PDF for free and print it at home or at a professional service, or you can order it and get it at home in a few days. So let's see the application in action. As you can see, it says get your customized map, and you have two ways to do it. There is the quick map, where you can just focus it wherever you want, move it around, or type in and look for any place you want and center on it.
After that, you can zoom in, choose the product you are using and the scale it will be printed at, and you will see which sheet you are choosing and the format. This is the quick way to do it. Then you go next and you have it ready. This is high-resolution printing, so it will take a little while, but you can get the PDF or you can order it. If you choose the PDF it will take a moment, but in the end you will be able to download it and use it as you wish. Then we have a different application interface for users who want a more elaborate map, where you can add all the data that you want to include. You will see that the design of the page has changed a little. There you start typing the place you want to focus on, and once you are there you can upload data in KML, shapefile or JSON format, whatever you have, and you have it there for your map. You can put in one trail or a couple of them, whatever you want. You can see they are both blue here, but we can change the style to adapt it to our needs: we can use dashes, arrows and different colors for each of them. For example, let's make this one red and the other one orange. Yes, okay, better. Now we have our tracks there, but we can also, as I said before, if we are going away for a weekend, add the hotel or any point of interest along the trail, to observe birds, to drink water, whatever; we can also draw freely on the map. Once we have configured all the items that we want on the map we are going to produce, we can continue, for example adding the grid squares, or the coordinates if we also want them, and, as I commented, the scale and the area of focus that we are going to use in the production of the map. Once everything is centered and so on, we can edit the title, the subtitle and the author if we want to, we can choose different background colors, and we can also change the image. There are a lot of images to choose from, but if we are going somewhere with our partner and want to do a trail there, we can upload a specific photo. Then you have a personalized map. This is the same interface that we have already seen, where we can generate the PDF, which can take a little while, and then the map is there, ready to go. Once we have it, we get a link which will be accessible for seven days, so you can share it with someone, and we also have the file itself to open. As you will see, it is a heavy one, about 120-something megabytes, because it is very detailed, and you can see that the resolution is really high. If we zoom in, we see the detail, which is very good if you want to go on a trip and have it as a support.
I can only recommend this. Sure, we can do it with map apps or whatever applications you want to use on your mobile, and maybe I'm a bit old-fashioned, but it is really nice to have paper maps back and to use them. You can order it folded or rolled, with a kind of waterproof treatment, and it will be shipped to you. Well, enjoy it. And that is about it for the presentation. What I wanted to share with you is a reflection on how we are using the whole stack of technology that we already have to rethink the production of maps and the use cases we can apply them to. So thank you very much, and Michael, if there are any questions from the audience, please let me know. Thank you very much, Enrique. That was great. I really appreciated your talk and your good explanation of the relevance of hard-copy maps, even in the digital world that we live in. Let me hop over to the questions; there are a couple in there. The first one: is the map content adapted according to the map scale or zoom? Do you use multi-scale data and/or automated generalization processes? Let me see, the sound is a little rough. You mean whether the data we are showing is treated in advance, whether there is a generalization beforehand? Yeah, basically, is there some automatic scale dependency in the cartography that is chosen, generalization at certain scales? Yes, yes. The data is produced at certain scales in advance. Let me share it. Well, I do not have it open, but I can tell you that yes; let me get there, because here you can choose the scale of the map that you want to use. There are different products produced at certain scales for certain resolutions, and then you have the scale that you want to apply to your map. Once you zoom in, you will see what the actual scale will be, you will see it beforehand, and you can decide whether it fits your needs or whether you want to change to a different scale. Okay, great. A couple of people have asked if you could share the URL to the demo site. Well, yes, it's public. How could I share it? Well, you can see it here. If you put it in the chat window, I can move it into the... Okay, so let me check that I'm pasting something useful. Yes. And then I will hand it over to you. Done. Okay. Yeah, so you can have a look at the website to get inspired or to use it. Oh, there it is. And then the last question: when printing on the fly, how fast is it compared to other solutions? When generating the PDF? Well, actually we did not have many choices. We did a workbench comparison at the beginning of the project, but we had already used MapFish printing, so we used it as our chosen library. With regard to the comparison to other libraries, we have not made a benchmark, because we were happy with the times that we obtained. I mean, if you order the printed map, you will wait a couple of days until it arrives.
And if you are producing it as a PDF, it will be a minute or a couple of minutes until you have it there. It's not instant, I must say, but I think it's enough for the use case of most people. Just one more question, I'm just looking at the last two that have been added: are there any ideas to continue developing the product? How has it been adopted by others? Yes. Well, the same client is willing to include more and more services in this catalog of maps, and of course providing more capabilities in terms of uploading information and adapting it and so on will be needed and will be developed. But please consider that if you are a very advanced user, you will have your QGIS, you will have your own tools and you will be able to build it yourself. The use case for this application is all the people who do not have that kind of expertise but do have an interest in having a map to go away for a weekend. So, from my perspective, it can evolve mainly in the sources that can be used; for specific needs there could be specific layers that would be useful for a person to choose. But if we add more and more tools, then we cross that border where we offer maybe too much for the use case; it has to be considered in any case. Great. Well, thank you very much, Enrique. There's clearly a lot of interest in your talk. I apologize that we don't have time to get to all of the questions, but to the people who asked questions, feel free to reach out to Enrique on the social channel; you can message him directly through the application, look him up on LinkedIn, or send him an email. So thank you again very much for your presentation. We're going to go into a pause mode now; I'm going to remove Enrique and bring in our next speaker. Bye. Bye-bye.
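As a footnote to the stack discussed in this talk (cloud-optimized GeoTIFFs produced with GDAL, which MapFish then renders into the high-resolution PDF), here is a minimal sketch of the COG creation step using GDAL's Python bindings. It is illustrative only, not the project's actual pipeline: the file names and creation options are hypothetical choices, and it assumes a GDAL version (3.1 or later) that ships the COG driver.

```python
from osgeo import gdal

gdal.UseExceptions()

# Convert a plain GeoTIFF mosaic into a cloud-optimized GeoTIFF.
gdal.Translate(
    "ortho_cog.tif",          # hypothetical output
    "ortho_source.tif",       # hypothetical input mosaic
    format="COG",
    creationOptions=[
        "COMPRESS=DEFLATE",   # lossless compression
        "BLOCKSIZE=512",      # internal tiling, friendly to partial HTTP reads
        "OVERVIEWS=AUTO",     # build the overview pyramid for fast zoomed-out reads
    ],
)
```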
|
Nowadays we are used to consuming cartography through the screen of a computer or a mobile device by means of map viewers, so that we interactively select the portion of territory we are interested in, exploring a continuous territory without the traditional sheet limits of paper maps. Based on this current way of consulting geographic information, taking advantage of the possibilities offered by new technologies and preserving the essence of paper maps, the CNIG (Spanish National Centre for Geographic Information) has developed the on-demand cartography project "Mapa a la Carta" ("Map on Demand"), which highlights its cartographic information and its integration through OGC services. It is an application where users can configure the map to their own taste and needs, allowing the choice of the portion of territory that the map will contain, the scale (within a range) and even the personalisation of the title and cover of the map. It also allows drawing points, lines and polygons on the cartography that can be labelled, or inserting other geographic data such as a route recorded on foot with a GPS, or other types of information downloaded from the Internet in different formats. For the development, a solution has been designed consisting of several Open Source software components. The front-end, an intuitive environment programmed in React JS, interacts with the spatial reference information by consuming the map services provided by the API-IGN, the IGN Search geographic name searches, as well as the OGC WMTS visualisation services of IGN cartography. This set of services allows the user to define the conditions of the desired map, which MapFish will finally generate in high-resolution PDF format. It is very important to highlight the integration of the information with COG (Cloud-Optimized GeoTIFF) formats for high-resolution printing using GDAL. With all this, we go from being users or readers of cartography to creators of new maps by reusing resources, and we have the digital product in PDF format in a matter of seconds with the possibility of sharing it among our contacts. Finally, to provide the solution with added value, there is also the option of having the map generated in standard or resistant paper format and with professional printing quality, to be purchased in an online shop. url: https://mapaalacarta.cnig.es Technologies: WMTS, COG, GDAL, MapFish, OpenLayers, Mapea, React JS, API-IGN ________________________________________ Authors and Affiliations – Centro Nacional de Información Geográfica - Celia Sevilla Sánchez Developed by Guadaltel Track – Use cases & applications Topic – Standards, interoperability, SDIs Level – 2 - Basic. General basic knowledge is required. Language of the Presentation – English
|
10.5446/57264 (DOI)
|
Let's do that one. I won't be able to see questions while I'm presenting, but I'll get to them after the talk. My name is Rob Emanuele. I'm a geospatial architect for Microsoft on the environmental sustainability team in AI for Earth. I want to talk to you today about FOSS4G and the climate crisis. First off, I want to talk a little bit about Microsoft AI for Earth and why we're the diamond sponsor of FOSS4G this year. If I had been an attendee a couple of years back, I might have thought, oh, Microsoft, that's an interesting choice, and maybe had some thoughts about how Microsoft interacted with open source back in the first decade of the 21st century. Under the leadership of Satya Nadella, Microsoft owns GitHub and is a big, prominent supporter of open source software. But specifically around FOSS4G, I'll answer that question by going into our sustainability work. Microsoft is committed to using its technology, its cloud, to build a more sustainable future. Those commitments are specific, and there are four specific areas. In 2020 we announced that we are committed to being carbon negative by 2030, and not just that, but by 2050 to have removed all the carbon the company has ever emitted since its founding in 1975. We are committed to being water positive by 2030, which means that Microsoft will replenish more water than it consumes on a global basis. We are committing to zero waste: our goal is to achieve zero waste for Microsoft's direct operations, products, and packaging by 2030. And also to protect our ecosystems, committing to protecting more land than we use by 2025. As part of this, we are building and deploying what we're calling a Planetary Computer, and that's what I work on: I'm an engineer on the team building out the services around our open data and analytics. All of that is built on FOSS4G software; it is very specifically built on open source and really stands on the shoulders of the giants of this community. So we are really excited to be able to sponsor this conference and give our support, and I'm really happy that I can be speaking with you today. But I'm not going to be talking about the Planetary Computer today; if you're interested, see my talk tomorrow, where I'll go over some of the technical details of how we're utilizing open source software and the architecture of the Planetary Computer. What I want to talk to you about today is the climate crisis. I am very privileged to be working with a number of very smart scientists who are really close to the science of what's going on with the warming climate, and I wanted to bring some of the lessons that I have learned from them to this conference, specifically some of the results that you can take away from the IPCC report released this August. The first section of AR6 was released; the remaining sections will be released over the coming year, but this one focused specifically on the physical science basis. And I'll just add the asterisk that I'm not an environmental scientist, so these are my interpretations of what very smart people have told me. If I'm incorrect, please correct me in the comments, go ahead and yell at me. One of the key takeaways is that 1.5 degrees Celsius above the pre-industrial average could be reached within a decade, and it's more likely than not to be breached even in the scenario where we get to net zero by 2050.
Even with aggressive cutting of carbon as well as, you know, removing carbon from the air, it's more likely than not that we breach 1.5 degrees Celsius. And that number is sort of important because in 2015 during COP 21, there was the Paris Agreement that was entered into force in 2016. And that had a goal of, you know, keeping global warming well below 2 degrees Celsius, but preferably below 1.5 degrees Celsius. And it looks like we've already, you know, five years later sort of missed that mark. And then so this isn't part of the IPCC report, but other research, you know, current policies have us missing that by a wide margin where we're on track to reach three degrees Celsius of warming by 2100. And so it's important to note that the warming, the number that signifies the sort of mean warming of the planet, does not mean that like every location will just have that warming, right? It's generally larger over land than in oceans. So based on a location, the warming that's experienced will be significantly different. And you can see in the north, it seems like the warming that would be experienced is a lot greater. And so another thing that was kind of a key takeaway from the IPCC report is that the tipping points are presented with greater confidence and concern. So abrupt, potentially catastrophic and irreversible changes are presented with more specificity and confidence than in previous reports. One example of this, and there's some other research talking about early warning signs around it, is the overturning circulation. And yeah, it could potentially collapse, disrupting the rains that farmers rely on to grow food in Africa and South America, making winters more extreme in Europe and further destabilizing the Greenland ice sheet and the Amazon rainforest. Other examples of tipping points highlighted in the report include rapid melting of the Antarctic ice sheet, permafrost collapse, and permanent forest loss. And while the report considered these events with a low likelihood of happening this century, it also reports with a high confidence that they can't be ruled out. One sort of pretty stark takeaway that hit me was that we've likely already hit 1.5 in warming, but we're also emitting aerosols into the air that have a cooling effect. So we're only experiencing, we're only observing, 1.1 degrees Celsius of that warming because of the aerosol emissions. And it's important to note that greenhouse gases stay in the atmosphere longer than aerosols. So if we were to stop emitting both greenhouse gases and aerosols tomorrow, the aerosols would disappear faster than the greenhouse gases. And so in effect, we would experience warming due to that removal of aerosols from the air. But it's not all doom and gloom; there's indication that rapid, aggressive emission cuts coupled with carbon dioxide removal, a lot of the stuff that Microsoft is working on as a purchaser, could help. There was just a Nature article published around our processes for trying to really invest in the carbon removal market and do large purchases of carbon removal. But that type of work could limit warming well below 2 degrees Celsius, even if we would dip above or breach 1.5 at some point and then dip back down lower in the last part of the century. But I think it's important to talk about it not necessarily as climate change that's happening in the future. This is a climate crisis that's happening right now.
One of the things that the IPCC report has been able to take advantage of is the science that's gotten better at attributing weather events, climate events, to human-induced climate change. And so they're able to do that for things happening right now. And we can see that the number of hot extreme events has increased already, with high confidence. There's observed change in heavy precipitation and then also in agricultural and ecological droughts. So this is something that's already here. And so I just want to take a moment to say the magnitude of this problem can be super overwhelming. I know that I feel very overwhelmed by it often. And especially with everything that's going on with the pandemic and just the world, it's kind of easy to get kind of discouraged about the outlook. But I want to say that we in the FOSS4G community are positioned to make a difference. I really believe that geospatial data is a critical, irreplaceable asset in the work, not just the work that's going on now, but in all of the future challenges presented by the climate crisis. It will be important. There's the saying that you can't manage what you can't measure, and all of the measurements at a planetary scale deal with location and time. And the sensors that we have up in space looking down on our planet are collecting data that's irreplaceable. Like there's nothing else that can kind of give those measurements. So it's really important data in dealing with this crisis. We've made leaps and strides in being able to utilize that data, but I think we have a really long way to go. This isn't sort of a new idea, right? In 2018, Chris Holmes gave a great keynote at FOSS4G NA called Towards a Queryable Earth. And if you haven't seen it, I recommend going and watching it. It's still very relevant. And one of the key points of this, where he lays out sort of a blueprint of how we go from data to something where you can query anything on the planet, is this last part, which is that the GIS and remote sensing is abstracted away. And so I think this is a really important point that we're still very far from, right? We as geospatial experts, as the people using and building FOSS4G, open source geospatial software, are the ones responsible for handling that complexity, the complexity of the data and the complexity of the processes that are inherently geospatial. And unless we're delivering something that abstracts away all that technical complexity, we're not quite there yet. And I really believe that the work we do now to make this data and technology easier, faster, more efficient, won't just help with the current challenges, but also set the future up for success. And I think that's really important to keep top of mind because while the climate crisis is here and we're already feeling the effects, the people who will be really dealing with it might be several decades out. And so what can we be doing now that sets those people up for success? And hopefully they're not still scratching their heads about how to find and manipulate geospatial data. That'll be a solved problem. So yeah, a question would be like, what do we build now that would be most useful in the future? We're very far away and who knows what will happen? Who knows what the needs will be? So I think this is, nope, there's not a real answer to this. But what we can do is look at sort of what are the lasting technologies that have been around in previous decades that are still useful today. And I'll just highlight GDAL.
The initial release was in June 2000, and so two decades later, it's used everywhere. It's still just so widely used. It solves a problem that is a problem today just like it was a problem in 2000. And so I was having a conversation with Frank Warmerdam, the founder of GDAL, several years back. And I asked him, why did you start GDAL? And he said, and I'm paraphrasing, he was like, well, I looked for the problem that nobody wanted to work on that everybody had. And everybody was having issues with data formats and reading data in a consistent way. And it was hard work and it was boring work. It wasn't the glamorous work that was going on. But I decided to take that on and solve that problem. And then the community that's built around GDAL continues that to this day, solving those sort of low level IO problems that everybody has. If we didn't have GDAL, it would be a lot harder. But we can solve it sort of in a general way that lets people concentrate on the work that they're trying to get to. And then so I think that's a huge lesson that I've carried forward. And I hope maybe you can take some inspiration from it as well to build lasting solutions, focusing on those hard, fundamental, and potentially boring problems that are keeping everyone from doing more interesting work, and shouldering that. And so I think that's what we need to be doing, not necessarily that it's boring work, but there's important foundational work that we need to be doing. And I like to think about it in these four areas: focusing on the data that's available, how do we get those data into the proper formats to be cloud optimized and ready for use; the access, how are people accessing that data, how are people searching for the data that they need; analytics, there's a wide variety of analytics that you'd want to apply to this data once you find the data that you're interested in, including combining different data sets, and how easy is that to do; and then finally, applications, because the insights derived from analytics, from accessing the data, don't mean much unless they're applied to impact specific decisions or optimize certain processes or ultimately manage Earth's natural systems. So I'm just going to talk about data and access today because of time. But so yeah, there's a wide range of types. There's optical imagery, which I think we've got a pretty good handle on. I think Landsat being put up as a cloud optimized GeoTIFF was a huge leg up in that effort to make raster data, imagery data, really accessible and easy to use. But there's other types: synthetic aperture radar, hyperspectral imagery, point clouds. And all these types are in various states of having cloud optimized formats. And some don't even have a really good standardized format. With SAR, what's the file type for complex amplitude and phase? I'm sure something exists, but I don't think it's nearly as far along as some of the other data types. So we need to get everything into a format that is solid and cloud optimized. And so some recent work out of Howard Butler's group to do the cloud optimized point cloud is really awesome and exciting and puts us along this trajectory where the more things that we can get into a format like COG, to take a leading example, a format that works with existing tooling but also allows for cloud optimized reading, the better. I think it's really, really important work. And so I just want to highlight this paper that was just put out very recently, a survey of users of open Earth observation data and an analysis of the current state.
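To ground the point about what GDAL abstracts away, here is a minimal sketch of reading data in a consistent way: the same handful of calls open a local file or a Cloud-Optimized GeoTIFF over HTTP via /vsicurl/, and with a COG only the requested window travels over the network. The URL is a placeholder, not a real dataset.

```python
# Minimal sketch of GDAL's format abstraction: the same calls work whether the
# source is a local file or a COG read over HTTP through /vsicurl/.
from osgeo import gdal

gdal.UseExceptions()

ds = gdal.Open("/vsicurl/https://example.com/scenes/scene_B04.tif")  # placeholder URL

print("driver:", ds.GetDriver().ShortName)            # e.g. GTiff, whatever the source
print("size  :", ds.RasterXSize, "x", ds.RasterYSize)

band = ds.GetRasterBand(1)
# Read only a 256 x 256 window; with a COG this triggers ranged reads,
# so the whole file never has to be downloaded.
window = band.ReadAsArray(xoff=0, yoff=0, win_xsize=256, win_ysize=256)
print("window:", window.shape, window.dtype)
```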
And it sort of highlights the point of why cloud optimized matters. Well, given the growing amount of data volumes, cloud-based services seem the only realistic way forward for data providers and users. So that's that, but there's another section that talks about the survey responses and who downloads data, and it's everybody. I mean, we're still downloading data, which is not the problem of the user. It means that it's not easy enough. It's not the quickest way to access data yet. I download data, like everybody does it. And we need to be moving to a future where working with the data in the cloud is just as easy. And we need people to make that choice because it is the easier choice. And then so just because data is available, there's a file somewhere on the cloud, doesn't mean that it's accessible. And so I think there have been leaps and bounds towards this, but it still remains a challenge to find only the data you need across all the different data types. And so standards like the SpatioTemporal Asset Catalog (STAC) and the suite of API standards from OGC help enable an ecosystem of tooling to be able to speak the same language and give greater access to these data types. And I think that there's a lot of work to do to not only continue fleshing out those specs, but also building out those tools, so that searching for what you need is easy and it's clear how to access all these different data types. So I'm going to end with another sort of quote that I carry with me. And it was January Makamba, a minister of state of Tanzania, who in the 2018 Dar es Salaam keynote said very bluntly, if geospatial tools and data do not serve humanity, then they are simply toys. And I take that to heart because there's a lot of really cool things we're doing in the geospatial community, but we need to be connecting it back to the impact that it's having. And I think finding a way, even if it's a small way, of connecting your work back to setting the future up for success in dealing with the climate crisis is really important work that does serve humanity. And I know that a lot of us know that and we're trying. And I just want to recognize that and thank you for your work towards that and encourage you to continue. And a lot of people who are doing really awesome work towards that are speaking in our track today. So I encourage you to stick around. And I know I'll be learning a lot from the lineup of amazing speakers. But yeah, thanks for your time. I'm going to switch back to checking comments now. Okay. I have to get the heck out of this real quick. Move that. Thanks. Yeah, as Steven mentioned in the chat, there is an open call for grants in collaboration with GEO. There's a new one specifically around how to utilize the NICFI data from Planet. And actually, Tara O'Shea will be speaking about that program later. So I'm really excited to hear her describe that data set and looking forward to collaborating with the participants of that program.
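As a small, hedged illustration of what that standards-based ecosystem looks like in practice, here is a sketch of searching a STAC API with pystac-client. The endpoint, collection id, asset key and bounding box are examples and will differ between catalogs, and the items() call is named get_items() in older pystac-client releases.

```python
# Minimal sketch of finding only the data you need through a STAC API.
from pystac_client import Client

catalog = Client.open("https://planetarycomputer.microsoft.com/api/stac/v1")

search = catalog.search(
    collections=["sentinel-2-l2a"],         # example collection id
    bbox=[-58.53, -34.71, -58.33, -34.53],  # example area of interest
    datetime="2021-01-01/2021-01-31",
    query={"eo:cloud_cover": {"lt": 20}},   # STAC API query extension
)

for item in search.items():
    print(item.id, item.datetime, item.properties.get("eo:cloud_cover"))
    # Asset hrefs can be opened directly by GDAL/rasterio -- no bulk download needed.
    print("  red band:", item.assets["B04"].href)
```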
|
The Paris climate agreement targets limiting our global heating to 1.5°C above pre-industrial averages. However, there are reports stating that this goal is now "virtually impossible"[1] to achieve, and recent research claims that the planet is already committed to over 2°C[2] of global heating. The effects of this level of warming will be widespread and hard to predict, but one thing is clear - our ability to process and analyze geospatial data in order to monitor, model and manage Earth's natural systems will be key to responding effectively and intelligently to the challenges humanity will face over the next century. We in the open source geospatial community have the opportunity to build the technologies that will be critical to mitigating and adapting to the rising effects of climate change. In this talk I will challenge our amazing community to use its vast talents and capabilities to work towards supplying the future with the data and tools it needs in the fight to protect our Earth. [1] The risks to Australia of a 3°C warmer world | Australian Academy of Science [2] Greater committed warming after accounting for the pattern effect | Nature Climate Change Authors and Affiliations – Rob Emanuele, Microsoft Track – Community / OSGeo Topic – FOSS4G for Sustainable Development Goals (SDG) Level – 1 - Beginners. No specific knowledge is required. Language of the Presentation – English
|
10.5446/57265 (DOI)
|
So, hello Massimiliano. So the next presentation is a FOSS4G-based high-frequency and interoperable lake water quality monitoring system with Massimiliano Cannata from the Italian-speaking part of Switzerland. Massimiliano is a professor and teacher of geomatics at the Institute of Earth Science of the Scuola Universitaria Professionale della Svizzera Italiana. I think my Italian is very good. So the University of Applied Sciences and Arts of Southern Switzerland. So Massimiliano, welcome to the next FOSS4G talk. So Massimiliano, the stage is all yours for your presentation. Thank you. Okay. Thank you very much. And yeah, I'm going to talk to you, describe to you a project that we are running that is related to the high-frequency monitoring of water quality parameters of a lake. And this is an Interreg project; it means that this is a project between Italy and Switzerland. In fact, one of the aspects that we want to approach is that there are three different lakes that are really close in the area and that are monitored by different institutions. And also the national authorities that define the policies are different, because one is in Switzerland and one is in Italy. And the idea was to try to homogenize the monitoring to have a comparative potential between the state of the lakes from the water quality point of view, since most of them are interconnected from a biological point of view. And then the second thing was to try to make some sort of innovation and bring some change in the management aspects of this water quality. And yeah, this is the topic, so it's about the management and the monitoring of the lake ecosystem, so of water quality and also of the ecosystems. One of the things that we have seen in this sector related to limnology, so water quality and the ecosystem of lakes, is that they are still in a phase where they're moving into digitization and they're trying to move into digitalization, but this is not really happening so fast. So the real topic there is also how we can move into this digital transformation in lake ecosystem management. So try not just to use digital data, but to extend the concept with a sort of digital twin of the lakes and try to use it as a reactive system that can provide insight to the managers to adapt and use a more adaptive policy in the management of the water rather than a merely passive and predefined one. So we have to deal with two somehow different but really linked topics. One is about the data management and the other one is about the environmental monitoring. The issue with the digital transformation has different reasons, and what we have noticed, like in most of the cases, is that we have disconnected systems that are managed in spreadsheets and people are not used to taking advantage of the new technologies, but we see that there is a lot of opportunity and we try to push this digital transformation. Where do we start from? We start from monitoring campaigns. In this case, I'm talking about Lake Lugano, which is in Switzerland, which is monitored by my institute, my colleagues, the limnologists, and they go once a month with a boat on the lake, they use some sensors that they put in the water, then they perform some measurements, manually driven, performed by the water sensors, they collect some water samples that go to laboratories and then we get back some analyses from the labs. And this is a regular campaign over the year, once a month, and we have data for more than 50 years like this.
Of course, you see that in this way, it's almost impossible to react to different phenomena that can happen in the lake on a shorter time interval than a single month. There are different options for this new solution; one of them is remote sensing, but I'm not talking about this, this part has been addressed by the Polytechnic of Milan and the CNR of Milan, while I'm talking about the automatic high-frequency monitoring system, which is basically a sort of weather station that monitors the water quality in the lake. What is the state of the art in this high-frequency monitoring? We can bring a couple of examples: one, on the left side, is LéXPLORE, a floating laboratory on Lake Geneva, Switzerland. This is like a unique research opportunity because it is very big, very rich in sensors and things like that, and it's very expensive. I don't know exactly, but certainly more than hundreds of thousands of US dollars. On the other side, there are also some experiments with smaller buoy systems, at lower cost; these are more oriented to specific needs and are more customized solutions, done in Pallanza. From the data management side, we are dealing with data that are currently managed in Access databases or MDB files, Excel files, text files, etc. And there is really a lack of uniformity in data formats, ontology, interoperability, there are error-prone copy-and-paste processes, data integrity is not guaranteed, and there is also data latency between sampling and data availability. So there is a lot of potential that we could try to exploit by integrating some interoperability standards and metadata schemas and adhering to FAIR principles. Why are we still there? There are a lot of possible answers, and you see here it is probably a matter of cost also, but also a matter of digital expertise of the personnel and also a sort of resistance to change, the usual resistance that you can have in the digitalization process. Our question is how can we somehow make the management benefit from these advantages of digitalization, can we really integrate a fully open software solution that can address such a problem and foster the digitalization of the water sector, and how the system might help in tackling the local effects, for example, of climate change. So it's not just about monitoring but then also enabling the usage of this monitoring to address, for example, climate change effects. We proposed an open integrated system starting from data sources, some preprocessing and some storage in a database, and then offering them as standard services, and we wanted to make all this integrated and automated so that also the limnologists can easily access the data and elaborate and take decisions. The project applied experimental testing methods, so it's quite straightforward. We design the study, we elaborate the state of the art, we design a solution, we develop the solution and then we make some testing, preliminary testing of the solution, and then deploy in the field, and then we start to analyze what we can get from this solution and evaluate the results finally. Now we are at the stage, you can see in the box, at the end of the second year and beginning of the third year of the project, so we have deployed the system and we are in the phase of developing data analysis and taking some preliminary conclusions.
The method that we wanted to assess as an example for climate change is how we can estimate the primary production of the lake, which is the production of primary organic matter, and this has generally been estimated with these monthly campaigns using the C14 isotope, and this is an expensive procedure, and somehow dangerous because it is also radioactive, etc. And as I said you have 12 values per year here, so we started to look into the literature and we found out that there are modern systems and models that can make an automatic estimate based on sensor measurements. We started to design the architecture and we combined several open source technologies: istSOS for the data management, which is an OGC-standard software, Grafana for the plotting, an MQTT broker for accessing data from the sensors, and then Keycloak for authorization and access management, and then we also implemented some new configuration services. And this is from the software point of view, so the data management; then we also have the monitoring part, and we designed a fully open system based on a Raspberry Pi that uses NB-IoT as a communication protocol and that connects with the standard sensors that you find on the market, and we designed this system with the idea of being as replicable as possible. The approach in the data management that we follow is a two-tiered data flow, so some of the data collection is on the edge, let's say the buoy or lake platform, I will show you later some pictures, and then there is, on the right side in green, the server side. So the data are collected locally but are directly checked at the edge for their validity before being inserted in the istSOS instance. So at the edge, on the buoy, at the sensor side, we have an installation of the software that manages the data, so that we can take advantage of the services, we can access data with the standard formats, and we can also use its integrated data quality assessment. So then, from the raw data we perform some aggregation and data checks before transmitting, so when the data arrive at the server they are already pre-flagged with some quality index, and we can decide how to use this data. When they arrive at the server, of course, they are still available through the standard, the Sensor Observation Service standard, for validation of the data, for generating reports, analysis and alerts for customers, etc. So somehow we move part of the data quality to the edge side. We have implemented a dashboard and in particular we have developed some data importers for historical data, because one of the things that we also wanted is to put together the new data sources that come from the sensors with the traditional methods, so that we can combine the data and take out the maximum output of that. So we have implemented some automatic importers for the data that has been used, and together with the other partners we have also defined a common ontology of the parameters and uniformized the units of measure of the different indicators and parameters that you want to access. As you can see we have different types of data in the data management system, which is a repository based on istSOS. You have in the box the data from the sensors in the field, so these are real-time data, but you still also have 50 years of data, for example from 1972 to 2020, for different types of sensors and monitoring. With the dashboard, taking advantage of Grafana, you can have different plots of the historical data, selecting and filtering by location, filtering by time, by the observed property, etc.
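To make the edge-side quality flagging concrete, here is a minimal sketch of the kind of checks that can run on the buoy before data are transmitted: a plausible-range test and a step (spike) test that attach a quality flag to each reading. The thresholds and flag codes are invented for illustration and are not the project's actual quality index values.

```python
# Sketch of edge-side quality flagging before transmission. Thresholds and flag
# codes are illustrative assumptions, not the project's real quality indexes.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    value: float
    flag: int

def plausible_value_test(value: float, lower: float, upper: float) -> bool:
    """True if the value lies inside the physically plausible range."""
    return lower <= value <= upper

def step_test(value: float, previous: Optional[float], max_step: float) -> bool:
    """True if the jump from the previous value is not implausibly large."""
    return previous is None or abs(value - previous) <= max_step

def flag_dissolved_oxygen(raw: list[float]) -> list[Reading]:
    """Flag a burst of dissolved-oxygen readings (mg/L) collected on the buoy."""
    flagged, previous = [], None
    for value in raw:
        ok = plausible_value_test(value, 0.0, 20.0) and step_test(value, previous, 2.0)
        flagged.append(Reading(value, flag=1 if ok else 2))  # 1 = good, 2 = suspect
        previous = value
    return flagged

print(flag_dissolved_oxygen([8.1, 8.3, 8.2, 15.9, 8.4]))
```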
You can visualize the profiles with the recently added new feature in istSOS to manage profile data, and you also have a dashboard so you can see time series of a single sensor at a given depth, for example, and see the different properties that have been measured, and then you can activate some comparison and evaluation of the data. So now we have the data, we have the sensors in the field collecting data, and historical data, and then we have to implement some processes and so some modeling of this data. So we have implemented an asynchronous processor: within istSOS we have extended istSOS to create these asynchronous processes, so you can define a sort of new procedure, a sort of new sensor with new values that are being calculated every time the original data source gets new data. So it's quite easy: you select the process that you want to simulate, you select the sensor or the data that you use as an input, and every time your input data gets some new data, then asynchronously these processes are activated, and this is a reactive approach to making the computation. This is the small platform that we have deployed in the field; you can see there is a solar panel that provides energy, with a battery unit for backup, and then the main unit with the Raspberry Pi in the center, some solar regulator, voltage regulator, etc. And I repeat, all of this is built up using commonly findable pieces on the market and is fully replicable, so everything is open. Together with the deployment we started to keep a maintenance logbook, because one of the things that we want to see is, after running the system in production, how does it work, does it break often, do we lose data, and how does the system perform. So we had some small issues, but so far we are quite happy with the system and we are collecting a good number of values. We did some quality control in post processing, using a plausible value test and a step test to verify if the data make sense, and then we implemented some algorithms to estimate the lake metabolism, this primary production, as I said, and this can be estimated from observations of oxygen, dissolved oxygen in the water at different times, and we can make an estimation of the gross primary production because during the night photosynthesis is not active, so you can take some differences between night and day and estimate. I won't go into detail on the algorithms and the equations of the model that we have implemented and integrated in the system. These are some preliminary results of the data quality; for the data completeness, for example, we can see that 98% of the data are available, so it means that we have lost some data but just a few; this estimation was done over the 8 months from January to August 2021. And for the specific parameter of oxygen that we want to use, we go up to 99%. You see that these two QI, 101 and 103, are two different quality indexes that relate to the data quality. From the solar panel point of view, you can see that the system is well dimensioned and we still have room to add new sensors and consume more energy. In fact we plan to extend the sensors on the platform, adding other sensors to monitor chlorophyll and to detect algal blooms in the lake, for example. These are some time series that are the inputs for the primary production estimation, so we have the dissolved oxygen at different depths, and you can see that we have continuous data, and there is an effect of wind speed and temperature and of the radiation, of course.
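Since the talk does not go into the equations, here is a deliberately over-simplified sketch of the diel dissolved-oxygen reasoning just described: respiration is estimated from the night-time oxygen decline, net production from the daytime change, and gross primary production from their combination. It ignores air-water gas exchange, mixing and depth integration, which the real model has to account for, and the sample numbers are synthetic.

```python
# Over-simplified diel-oxygen sketch: GPP from day/night dissolved-oxygen changes.
# Ignores gas exchange and mixing; numbers are synthetic, units are mg O2/L.
def mean_rate(oxygen: list[float], dt_hours: float = 1.0) -> float:
    """Mean rate of change of dissolved oxygen (mg/L per hour)."""
    deltas = [b - a for a, b in zip(oxygen, oxygen[1:])]
    return sum(deltas) / (len(deltas) * dt_hours)

def daily_gpp(day_o2: list[float], night_o2: list[float], day_hours: float) -> float:
    """Very rough daily gross primary production estimate (mg O2 per litre per day)."""
    respiration = -mean_rate(night_o2)  # oxygen consumed while photosynthesis is off
    net_day = mean_rate(day_o2)         # net oxygen change during daylight
    return (net_day + respiration) * day_hours

day = [8.0, 8.2, 8.5, 8.9, 9.2, 9.4, 9.5, 9.4, 9.2, 9.0, 8.8, 8.6]    # hourly, daytime
night = [8.6, 8.5, 8.4, 8.3, 8.2, 8.1, 8.0, 7.9, 7.9, 7.8, 7.8, 7.8]  # hourly, night
print(f"GPP ~ {daily_gpp(day, night, day_hours=12):.2f} mg O2/L/day")
```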
And the preliminary result of the estimation of primary production is quite positive. In fact we can detect the seasonal trend that we would expect, from the winter and then starting from the summer time where primary production increases, and also, comparing the values in the table with historical data that has been monitored with the other system, monitoring with C14, they are in line, so we are quite confident that we can somehow change the methodology for monitoring the primary production, having a less invasive, less expensive methodology and having high-frequency data. And this is somehow the point that we wanted to reach: how you can use all this open stuff and all this new digital technology to improve the way you are working and have new information to solve the challenges. So I'm coming to the conclusion, and the system is working without any major issues. The cost of the system is about 15,000. This really depends; you have to think that almost 10,000 is about the mooring of the platform. And so the cost of the sensors is 5,000. In fact we are thinking, for the future, to try to minimize the system and bring it down to a buoy. So we have a new integrated system which collects data and puts together all the data of the limnological sector and makes them really available for analysis. The data are available using a standard, the Sensor Observation Service, and we have new digital applications of the data. So we are creating new digital value. For the next year we have to evaluate in more detail the cost of the maintenance of the system. We have to deploy more systems, more sensors, and of course we are continuing to enhance the system in general, from the software point of view and from the hardware point of view. And thank you for the attention. So thank you Massimiliano. We have at the moment one question, and I think you responded to the first part of the question, but let's see if we have some considerations about this. In general terms, what is the cost of the infrastructure for monitoring? I think you just answered this part. And is there some limitation for the application in another ecosystem, like a tropical lagoon or a reservoir? So is there some limitation about the application? Okay, so on the cost, as you said, I have already answered; more or less the cost is 15,000, but really 10,000 was about the construction of the platform. Why did we select the platform? Because we wanted to deploy more and more sensors for testing, more for a scientific approach, and to be able to have a site where we can perform more development and testing. If I were going into production I would try to minimize the system, but in general there is no limitation for the application in another ecosystem, and in a tropical lagoon probably you will have more production of algae, so the maintenance of the sensors should be more frequent with respect to an Alpine lake, but in general there is no restriction. The sensors are widely available and the software and everything is there. Okay, we have some questions here. Is there public access to the data in the case you present? Okay, not all the data so far; we are implementing a web interface for the users, because this kind of data is, let's say, quite academic, so it is available for the network of academic people and experts, but the administration didn't want to open up everything to the public, because this information may be misunderstood and can generate some alarmism, for example for high production of algae in the lakes or things like that.
So we are developing a web interface to show general parameters which are easy to understand by citizens, but the data are available on request. Okay, the next question is how do you transmit the data from the field? NB-IoT; the answer is we tested with LoRa, we started with the LoRa protocol, but then soon we realized that the bandwidth is too low to transmit such an amount of data, so we migrated to NB-IoT now. Okay, the last question is what kind of signal do you get from the sensors? Mostly 0-5 volt digital output. Okay, so let me see if we have other questions. No, okay, thank you Massimiliano for your presentation. We have some hellos for you from your friends here. So your time is up; I want to make a short pause for a drink of water or a drink of tea and we can come back with Willie Gautier in the next session of this morning. Thank you Massimiliano and we will see you soon in the social gathering at FOSS4G. Okay, thank you so much. Bye bye.
|
Lake ecosystems are exposed to growing threats due to climate change and other anthropic pressures. For example, water warming is predicted to favour harmful algal blooms (HABs) that are toxic to people and animals. In addition, warming tends to increase the thermal stratification of lakes and reduce turnovers, which can lead to oxygen depletion in deep layers and release of toxic gases (methane, hydrogen sulphide) from sediments. Similarly, the increased use of plastics has produced nano- and micro-plastics pollution which, together with anthropogenic micropollutants, is posing a new emerging risk factor to lake biota. To effectively study and manage those issues, researchers and managers need monitoring data (observations) to derive effective data-driven management policies. Observations have traditionally been collected from limnological vessels through periodic (often monthly) monitoring campaigns, during which water samples are collected for further analyses in the laboratory and various measurements are performed using on-board instruments (e.g. a CTD sonde measuring Conductivity, Temperature and Depth, or a Secchi disk to observe turbidity). However, environmental issues including HABs and changes in lake stratification due to warming call for a shift towards monitoring approaches that allow higher-frequency (e.g. hourly or daily) automatic collection of key water-quality properties (e.g. phytoplankton concentration, temperature, dissolved oxygen). Therefore, to match current challenges, leverage a better understanding of phenomena and activate proactive measures, monitoring systems have to be updated to provide a better temporal and spatial resolution. At the same time, this development should not increase the costs of monitoring, which are often a limiting factor in lake management. Authors and Affiliations – Massimiliano Cannata, professor of geomatics at the Institute of Earth Science, SUPSI, Switzerland. Track – Academic Topic – Academic Level – 2 - Basic. General basic knowledge is required. Language of the Presentation – English
|
10.5446/57266 (DOI)
|
So welcome back to the next talk here at FOSS4G 2021. I welcome Daniel Villarroel Torrez and Andrés Armesto from Buenos Aires. So we go to Argentina now, where our conference takes place. And we want to hear from you about FOSS4G tools for data-driven decisions. And you both are geo-geeks and you are data scientists working for the Buenos Aires city government. And you have a big mission as you announced. So your mission is to turn Buenos Aires into a data-driven city. And you are part of a team leading digital transformation and innovation initiatives. And we are very excited to hear from you now what's going on in your project. So let's start. The stage is yours. And I will add the slides. And yeah. Okay. Thank you very much Astrid. And so I hope everyone can hear me well. I want to welcome all of you to Buenos Aires virtually, and hopefully everyone's having a good FOSS4G. I certainly know that I am having a good time right now. So welcome to our presentation. It's titled Data-Driven Decisions for Public Transport Services in Buenos Aires. And it's about the case of bus operations key performance indicators. So maybe just a little rundown of what we're going to go over today. We're going to first give you a little introduction to the city of Buenos Aires, to some of the challenges that we face as city officials. And we're going to focus a little bit on one specific, a few specific challenges. And then we're going to show you a little bit of the methodology that we developed using FOSS4G tools to face the challenges that we face daily. And finally we're going to close with a few lessons that we learned along the way and some final thoughts. So just to give you a little bit of an introduction to ourselves, we, as Astrid said, I'm Daniel Villarroel. I'm a data scientist at the government of the city of Buenos Aires. And I'm a huge geo-geek. I have a degree in geoinformation. So this is what I love to do and I'm having a blast being here. And with me is Andrés. Andrés, you want to take it? Yes, of course. Well as Daniel said, I am Andrés. My current role is data analytics and visualization manager, also in the Buenos Aires city government. And I am also a volunteer in Engineers Without Borders Argentina. Like Daniel, I am very happy to be here with you today. And I will start to tell you a little bit about the context. First, you may or may not be familiar with where we are right now, but this is an overview of the world and our position alongside it. The white area is Argentina and that yellow shape is of course the city of Buenos Aires, which is what we will be talking about. A little bit more about the stats. The surface of the city is more than 200 square kilometers. Its population is around 3 million people. And in terms of GDP, it's a very important place both at a national level and at a regional level. And now I will do a brief introduction to our data strategy inside government. Our area was created around two years ago and the focus of it is to turn the government into a data-driven government. So for that we have four lines of work. Three of them can be seen as pillars where we focus on data governance, data analysis and of course making data available both to inside users, to users inside government, and to users outside government, for instance, citizens or even some actors in the public sector.
And adding to these three lines, of course we have the challenge of turning the organization into a data-driven culture, which of course is more focused on working with the people and not only on processes or analysis. And now I will tell you a little bit, I will dive into our use case, which as Daniel said is the bus operations KPIs, and we wanted to start by telling you a little bit about the initial situation we faced. First of all, as you may know already, public transport services are one area which usually is associated with high amounts of subsidies. So that makes it a very important thing to focus on when you are managing budgets and everything inside government. Then indicators: when you have to manage a budget, of course, you have to understand what things are important, and in this case there are laws that determine that the public services are necessary to, let me rephrase, the indicators allow calculating the proper amount of subsidies. So in this case it's very important to have reliable indicators of the operations to calculate the amount of subsidies involved. And the third situation we faced at the beginning is the available data. We used data that wasn't specifically designed for analytics purposes but rather to manage payments and clear the financial accounts. So that was one of the things we faced. Yes, now I will talk to you about some concepts we use along the presentation that we thought are important to understand the process. When we talk about a bus trip, I wanted to give you some context related to what a bus trip is for us. Of course you have a starting terminal where buses start their trip and an end terminal. In the middle, of course, there are different stops where passengers can get on and off, and all of this has data associated to it. That is, the starting time, the end time and the intermediate times of course. And if you see the dotted line, that would also be coordinates in a database. And one of the things that are really important to get proper KPIs is the trip identification, which would be like a number or an ID that would tell okay, this is the beginning and this is the end of this trip. A little bit more about KPIs. If you search for them there are many of them, but we focus on these seven, mainly because those were the ones involved in the calculations of the subsidies and the ones our users, which was also an area inside government, needed for the calculation. As you can see most of them are related to the supply side, which would be like the transport infrastructure. But we also had data from the passengers' usage, which is very important because you can understand how the different citizens use those services. Now I will tell you a little bit in concrete about the challenges we faced. First, as we are a service area, we provide data services to other areas of government. So each project requires us to understand the problem domain. And that process of course takes time and meetings, but it was one of the issues we faced. The second one is big geodata. Prior to this project we didn't have any experience in the team at handling big geodata. And the third one is, as I already mentioned, the unclean data. We knew that there were serious issues of quality, so we had to develop a methodology robust enough to handle it. And now my friend Daniel will tell you a little bit more about it. Thank you Andrés.
So everything Andrés just said is really important to keep in mind going forward with the presentation, and I'm going to try to be as clear as possible. What I want to show you is some of the thought process that we went through designing a solution to this problem of defining individual trips in order to calculate KPIs, so key performance indicators of transportation in the city. And I'm going to show you just a little bit about what the data looks like. I'm going to talk to you about the tools that we use, the FOSS4G tools, and a little toy example of what the process looks like. So this is what our data looks like in general. So just imagine we have hundreds of thousands, and hundreds of millions actually, of GPS records per month. So every GPS record is a line in this table, and this means that every bus in the city, of every transport line, etc., is generating a record every four minutes or so. So it's a lot of data. And this is what we mean by big data in our example. And in an ideal world, we would just have to use these fields called file ID and maybe direction in order to solve our problem. So file ID supposedly means an individual trip. But if this were a field that was correctly created, we wouldn't be here presenting you this presentation. So we really couldn't trust these file ID and direction fields. So we had to develop a methodology that took into account the geographical aspect, not just the registers and this field. And these are the fields that we talk about when we say geo data. So these are the geo fields, latitude and longitude, so the position in space, and also the position in time of our GPS points, are what we mean by geo data. And these are the fields that we ended up using, making a use case out of this so we can, well, not so we can show you, but we're glad that we can show you what we did here. So this was the data. It's big and unclean geo data. And what about the tools? What we used, since our data was so big, we couldn't use the regular tools that we were used to, like a regular Postgres database with PostGIS that many of you for sure are familiar with. So we had to go to Spark. And if you don't know Spark, there are entire conferences on Spark that you can maybe go to. So I'm not going to go into too much detail, but Spark is sort of like an analytics engine that allows for cluster computing. So computing big chunks of data on different computers, not just one computer. What I do want to talk to you about is GeoMesa. GeoMesa is a project, it's a FOSS4G project, and it's a suite of tools that allows us to make Spark understand geographical data. And it's an actively developed project. Probably just maybe a half hour or an hour ago, there was a state of GeoMesa talk that was really exciting. So this is really being developed actively and it's got a bright future, we think. So what we did, we used GeoMesa in order to let Spark do two things. Understand geometry data types. So not just your regular integer and character types, but let it understand points, lines, and polygons, and some derivations of this. But also what GeoMesa does is it allows Spark to use geometry functions. So spatial functions that allow for computation on these geographical geometry data types. So we can grab a whole chunk of a collection of points and make a line out of them, as I'm going to show you in a bit. Or we can make queries based on the geographical location of these data, or we can process, make some calculations of distance, etc.
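As a hedged illustration of those two capabilities, here is a minimal PySpark sketch. It assumes a SparkSession where GeoMesa's Spark SQL functions (st_point, st_distanceSpheroid, ...) have already been registered, and the table, column names, coordinates and thresholds are all made up for the example.

```python
# Toy sketch: geometry types and spatial functions in Spark SQL via GeoMesa.
# Assumes the GeoMesa Spark JTS/SQL functions are registered on this session;
# paths, columns, coordinates and thresholds are illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bus-gps-toy").getOrCreate()

# gps_records: one row per GPS ping (line_id, vehicle_id, ts, lon, lat)
spark.read.parquet("s3a://example-bucket/gps/2021/01/") \
     .createOrReplaceTempView("gps_records")

points = spark.sql("""
    SELECT line_id,
           vehicle_id,
           ts,
           st_point(lon, lat) AS geom,                       -- a real geometry type
           st_distanceSpheroid(st_point(lon, lat),
                               st_point(-58.3816, -34.6037)) AS dist_terminal_m
    FROM gps_records
""")

# Spatial predicate: which pings fall within 300 m of the (hypothetical) terminal?
points.where("dist_terminal_m < 300").show(5)
```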
So those were the tools, and I want to show you a little toy example of how we used the tools on this data for this project. So first of all, what we had to do was construct point geometries from GPS records. So transform this table of data into actual points on the map. And what you're seeing here is an example of one day of data for one bus line. So these are all the individual GPS records. So the second thing that we had to do was find which of these GPS records were near the terminal. So that's how we defined when a trip started and when a trip ended. So once we knew when and where a trip started, we were able to construct line geometries, like I just mentioned in the slide before. So we took a collection of points that were semantically one trip, so that meant one start and one end, like Andrés showed you before. And we constructed line geometries from these. And these were our individual trips. And once we had these trips, we were able to know when a trip starts and when a trip ends. So with this information, we can aggregate it over every bus line or every month or every day or some other characteristic that we can think of, and calculate these different KPIs. So if you know when a trip starts and when a trip ends, you can know its duration and length. Or if you know what time of the day or night it was, you can know if it happened at night or not. And that's a different indicator that we were asked to calculate. And we can also make some calculations on the demand side. So how many passengers were actually using these buses on these individual trips? And here is a little animation of what the process looks like. So remember, what you're looking at is one bus line in one day of January of this year. So remember that this was calculated for every bus line that goes into the city of Buenos Aires. And it's about 150 bus lines. So this was done, actually the example that you're seeing is just one day of data. So you can sort of make up how much data we were working with. So this is one day of one bus line; think hundreds of lines and tens of days. Next one, please. Thank you. So what we were able to do was mostly go from well over 100 million GPS records per month that we couldn't really make sense of to just a bit over 1 million identified individual trips. So this is one bus going from start to finish. And I think I forgot to say that the different colors that you're seeing are the different branches of one line. So we were even able to identify trips with that level of detail. I think you're muted, Andrés. Thank you, Dani. Okay. Well, given all Dani has just showed, I wanted to close with two slides where I just show you a little bit about what we learned along the process and some final thoughts or conclusions. First, regarding lessons learned, manipulating geographic data can be really hard. In the process of doing this, we tried different alternatives. We moved from one methodology or tool to another until we came up with the tool we found the best and were able to do it. Regarding the project in general, we found that launching early and thinking about productionizing the analysis can also be very critical to success. Particularly because, as we said, we are an area that provides data services to other areas inside government. So making data available to users at the proper time is really important. A third lesson we found is that whenever possible, we should focus on improving data quality at the source.
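Continuing the toy example in the same hedged spirit: once each ping carries a trip identifier (assigned from the terminal-proximity rule, which is glossed over here), the trips become linestrings and the KPIs fall out of ordinary aggregations. Function names follow GeoMesa's Spark SQL module; note that collect_list does not guarantee ordering, so in a real job the points must be sorted by timestamp before building the line.

```python
# Toy continuation: trip linestrings and simple KPIs from labelled GPS points.
# Assumes a view trip_points(line_id, trip_id, ts, geom) already exists and that
# GeoMesa's st_makeLine / st_lengthSpheroid functions are registered.
trips = spark.sql("""
    SELECT line_id,
           trip_id,
           st_makeLine(collect_list(geom))                    AS trip_geom,
           st_lengthSpheroid(st_makeLine(collect_list(geom))) AS length_m,
           min(ts)                                            AS trip_start,
           max(ts)                                            AS trip_end
    FROM trip_points      -- NB: points should be ordered by ts before st_makeLine
    GROUP BY line_id, trip_id
""")
trips.createOrReplaceTempView("trips")

kpis = spark.sql("""
    SELECT line_id,
           count(*)                                                          AS n_trips,
           avg(length_m) / 1000.0                                            AS avg_km,
           avg(unix_timestamp(trip_end) - unix_timestamp(trip_start)) / 60.0 AS avg_min,
           avg(CASE WHEN hour(trip_start) >= 22 OR hour(trip_start) < 6
                    THEN 1.0 ELSE 0.0 END)                                   AS night_share
    FROM trips
    GROUP BY line_id
""")
kpis.show()
```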
In this case, we had to come up with a whole new methodology, a workaround for the unclean data. And two final messages regarding our presentation: we were able to develop a geo process that was very successful at identifying individual trips, using, of course, FOSS4G tools. The second one, which is actually very important, is that we managed to work between different areas of government, and as people who come from the public sector know, of course, that always has some implications. The third one is that even despite data quality issues, we were able to generate information and KPIs for our users from the available source. And actually, we're really happy because the process was a success. And with that, I wanted to thank you all. And of course, thank you, Dani, as well, because of the project we carried out together. Yes, thank you, you two, for the presentation and for letting us see what's going on in Buenos Aires. Big applause, please, from the audience to our two speakers. And yeah, it was impressive to see the big data you are handling and how you can solve your problems with FOSS4G tools, and the introductions to GeoMesa and Spark. And let's see whether we have questions. So we have three questions. And the first one is, what tools are you using to visualize your trajectories? Sure, I think I can answer that. Visualizing these data is really hard. And actually, GeoMesa has very good features that allow us to see these data, maybe serving it through GeoServer and things like that. These are things that we did not use. So since we showed you an example of one day, what we did was export the line geometries to a PostGIS database and just visualize them in QGIS for this example. So really, constructing the line geometries is sort of a thing we need to do in order to calculate the KPIs. But we don't really need to visualize them. We're not doing visual analysis on these data. OK. Yeah, let's go to the next question. And you talked about this incorrect data. And there's a question, where is the incorrect data being filtered out or corrected? Is it by you guys or by Spark or by GeoMesa? You want to take that, Andrés? Well, yes, of course. Basically the incorrect data is associated with some manual process, something that every driver has to input. So that's basically one field inside the table that we can't rely on. So in order to avoid using that field, we developed this methodology. But also, for instance, if you saw the data, you saw that during the night the GPS units were still turned on. So you get a lot of signals inside the terminals. So yes, part of the process was cleaning, with data filtering using the spatial queries, but also some fields we didn't use because of data quality issues. OK. Good. So there's the next question. How is the data on passenger volume gathered? I think, Andi, you can take that one too, maybe. Well, yes. I don't have a precise number of rows today, because as you know, in the last two years with the pandemic and everything, that value changed a lot. But for you to have an approximate number, when we received the monthly data from passengers, it was around 80 gigs of data every month. So yes, in terms of rows, of course, there were millions, in the same order as the GPS data, which were 100 million. But yes, 100 million GPS records, it was the same amount, in the same order, of data. Yes. OK. I think I can maybe add a little bit to that.
And if you were asking about how we counted passengers, it had to do with some auxiliary tables that we were provided that linked points in space and time to transactions on the cards, transportation cards, so we were able to do that as well. OK. Good. Thank you. And that's the last question. And it's about feedback. Did you ever get to feedback information to the bus operator, like this bus equipment is broken? Sadly, no. We weren't able to give feedback to the bus companies. As you can imagine, of course, between the companies and us, there was an area of government and of course, there were like more than 100 companies. So we didn't get across all that in the scope of this project. OK. But maybe it's for the future, something that could happen. Good. So that's all the questions from the audience. Thanks again to you too and all the best for your project and hope the buses and the traffic will be used more and more when the pandemic is over and you get more data, even more data. So thanks a lot and enjoy Phosphor G.
|
The Metropolitan Area of Buenos Aires is the third largest in Latin America, with millions commuters using public transport every day. As government officials we must make sense of the massive amounts of location data generated by our transport system. We used Apache Spark in combination with Geomesa in order to process over 100 million monthly GPS records from buses to create individual trajectories between terminals, infer direction of travel and derive meaningful performance indicators of transportation. We conclude that this is a great combination of tools to tackle big-geo and mobility problems of the kind expected in a big city government. Buenos Aires Metropolitan Area is home to more than 15 million people and the third largest in Latin America. In the last few years, the City Government has dedicated great resources to become an innovation leader in the region. As in most cities across the world, mobility and transportation are one of the most important aspects of life in the city with around 3.4 million daily users before the pandemic and around 2.4 million these days. As government officials, we face the challenge of making sense of the massive amount of location data generated by the public transport system. Achieving this allows us to propose and prioritize new projects, perform program impact assessments, make network and infrastructure changes, and ultimately to improve the life of citizens. In order to better understand mobility, we set ourselves to identify individual trajectories from bus GPS records. Buses are the most widespread mode of transport, and they are used in more than 90% of the total trips in the metropolitan area. We first approached this problem on a sample on a Postgres database, and quickly found that for our need of processing ~100M monthly records Postgis would not be enough. At that point, we turned to Apache Spark and Geomesa, a free and open-source suite of tools that allows geospatial analyses in a distributed fashion. We ingested GPS records into Spark and constructed point geometries from latitude and longitude attributes. We then partitioned the data using the bus line and individual vehicle, and calculated the differences in location and timestamp of each point relative to the previous one. We then applied spatially aware rules to identify the moment when a bus leaves or enters its corresponding terminal’s area of influence, and used them to define the beginning and end of individual bus trips. Using the terminal data, we were also able to infer the direction. Finally we created linestring geometries from all points belonging to the same trip. These trips geometries with its associated attributes were then exported into a spatial database in order to visually communicate and analyse our results. Starting from over 100M monthly GPS records, we identified ~1.5M individual bus trips, completed by more than 130 bus lines. Around 89% of records ended up as part of a trajectory and ~11% were discarded due to bad quality, or not meeting the criteria set to belong to a trajectory. With these individual trajectories we were able to produce key performance indicators of the bus transportation system and to encourage evidence-based decision-making inside the government. In conclusion, we can say that SparkSQL & Geomesa are a great combination of tools to tackle big-geo and mobility problems expected in a big city government. They enabled us to identify millions of bus trajectories, and derive useful KPIs to support data-driven decisions. 
Authors and Affiliations – Villarroel Torrez, Daniel (1) Armesto Brosio, Andrés (1) (1) Undersecretariat for Evidence-based Public Policy, Buenos Aires City Government. Track – Use cases & applications Topic – Government and Institutions Level – 3 - Medium. Advanced knowledge is recommended. Language of the Presentation – English
|
10.5446/57267 (DOI)
|
Okay, so back in the session here with the third talk, we will meet Alex Orenstein, who is a cow mapper — a very special profession — and a drought specialist based in Senegal, and he will talk about Garbal. In the past decade, he has been using FOSS very actively and encouraging people to use open data solutions. So with Garbal, we are looking forward to getting to know an open GIS tool for livestock herders. So you will discuss creating call centers, I heard, for herders, and let's see and hear more about the project. So Alex, you're welcome. Thank you Astrid. Hi everybody. So yeah, I'm Alex and I'm going to be talking about Garbal, which is a program that helps livestock herders in Mali and Burkina Faso get information they need on how to move their animals. But just as a small housekeeping thing, I want to apologize in advance. I'm broadcasting from home and I have a very talkative and complaining cat, so he might come by to make some noise and complain. So I just apologize in advance. But yeah, why don't we start? So this project, it runs in the Sahel, and when we talk about the Sahel, we're talking about the area between Senegal and Chad. One thing to note about this area is that it's prone to very volatile rainfall. You can see on the bottom right a chart that shows the precipitation anomalies from 1950 to 2011. You can see that it's quite common for one year to be drought, another year to be flood, you know, to go from one to the other. So the rainfall patterns have become quite unpredictable as the decades have advanced. And when we talk about pastoralists, who are we talking about? We're talking about livestock herders. These are people who depend on naturally occurring biomass, right? So grass for their animals, which are their livelihood. Unlike a lot of other people, they don't really farm in a fixed place, right? They move their animals with the seasons. It's something that's called transhumance, when they go from one area to the other in search of pasture and water. So they'll start in the north, and then as the year moves on and it gets hotter, they move south into more humid areas. And then they go back up north when the rains come. Now you can see here a calendar that shows the seasons. So the Sahel only gets one rainy season a year, July to October, right? And then for the other eight months out of the year, it's a dry period. And so herders have to move their animals in accordance with those dry periods, right? And moving those animals is a very delicate balance. So on the bottom, you can see an image taken from central Mali, in Mopti. On the left, it's a lake during the wet season, right? You can see it's lush, green, wet. On the right, that is the exact same lake six months later. This wasn't in a drought year. This wasn't in a particularly bad year. This was a regular year, right? So in a regular year, you can go from having, you know, a lush, wet area to being quite dry. And this is an image taken from Sentinel-2 that I think really displays that — this is actually one of my favorite images, because I really think it shows you the essence of what these seasons look like. It's a lake in northern Senegal. And you can see during the rainy season, the lake expands. And then now we're in February, March, April, and it disappears completely until July, when, boom, it comes back. So this is what we're dealing with. We're dealing with an area, a lake, that can go from being enormous to completely disappearing in the course of a regular year.
So all that shows is that it's very important for herders to know where pasture is and where water is, right? It can mean the difference between life and death. It's incredibly important. So this is where this program, Garbal, comes in. Now, there's been a lot of advancement in Earth observation in the past, you know, 30 years; it's become a lot more accessible. Open source technology has made it easier for the spatial community to do this kind of work. But, you know, what a lot of you probably already know is that a lot of these advancements haven't necessarily made it to the people who need them the most yet. So for instance, this data, while accessible to all of us, isn't super accessible to livestock herders, even though they probably truthfully need it a lot more than many of us do. So this is where this project comes in. It's a call center, and it operates in Mali and Burkina Faso, and it's operated directly by Orange, the telecom operator, but in a consortium with NGOs, local actors, livestock herding cooperatives. And basically it has call center agents that have access to this data, who read it off of their screen from a browser interface I'm going to show you, and they provide it to the herders. There were 69,000 calls to the center in Burkina Faso last year. I unfortunately don't have the numbers with me for Mali. The Burkina Faso call center serves herders and farmers, but the one in Mali serves herders only. And this project has been going on since 2015; it's been active. So SNV, the Dutch NGO, is sort of the coordinator of the project. A GIS company, also in the Netherlands, Hoefsloot Spatial Solutions, has been building most of the back end. Orange Mali is a telecom operator, and Orange Burkina Faso as well. They are the ones who operate this center. They're the ones who run where the agents are taking the calls and providing this information. A herders' association in northern Mali, Tassaght, is very active in getting a lot of the field data, and has also been working really closely on the design of a lot of the layers that I'll show you. And then the governments of Mali and Burkina Faso as well have been providing a lot of data and a lot of advice, actually. So how does it work? Okay, so basically, broadly speaking, you have a herder, right? And he wants to bring his cattle to Tessit; it's a village in northern Mali. So he wants to know how the pasture is, he wants to know if there is pasture. So he calls the center, right? And the agents from the call center look at vegetation and water data, right, on an interface. They get it from Sentinel-2, from Meteosat, and then they look at field data that's provided by agents on the ground for prices, herd concentration, pasture quality, and then they look at a map, right? This is northern Mali, this is a biomass map, so the pasture situation. And they look at Tessit, and they see a lot of red. And then the agent says, you know what, the pasture in Tessit, it's not great. So then the herder goes back and he decides where else he's going to go. So the interface, right? It's actually completely open. There are two interfaces, one for Mali and one for Burkina Faso. You can actually go there yourself and look at this data, play with it, look at the interface. And basically, the way it works is that you've got several layers, for biomass, for water, for rainfall. And then it shows on the screen, but also on the left, you have a dialogue box, right, that takes a lot of this data and makes it textual.
So it makes the job of the call center agent a lot easier, because they don't necessarily have to do a lot of interpretation. They can look at the map, get the general idea, but then they can also look at the dialogue box and get a lot of the information they need directly from text and tables. And then this is the example for Burkina Faso. The two, Mali and Burkina Faso — the interfaces are slightly different. They have different layers. If we have time, maybe I can, I don't know, do a demo and show you. So what does it look like? What's under the hood? So we have two different types of data, field data and remote sensing data. So for the remote sensing data, we have a number of different layers. We have biomass. This is collected once a year, actually. And it just shows us the dry matter productivity in kilograms per hectare, right? Like literally how many kilos of pasture are available in an area. And this is the total for the rainy season. And we only do it once a year because we only have one rainy season, right? After the rainy season, no more biomass is being produced. Then we have NDVI, which we use for visual purposes, right? So we have this Sentinel-2 NDVI. And it's really nice. You get the 10-meter resolution. Looks very pretty, but we only do that once a month. As you can imagine, the computing power necessary to create an NDVI image — well, not necessarily heavy computing power to make the image, but, you know, it's a lot of space. If we were to do this every five days, it would strain the resources. It would be a lot on the resources. And it's also not necessarily needed. What's needed is actually the 10-daily images. And those are one kilometer, and we get those from Meteosat. Now, we use the 10-daily images for the dialogue box that you saw, for the text. Why do we use the Meteosat images, which have a lower resolution than Sentinel-2? It's because it actually fits the users' needs a lot more. So these Meteosat images, we get several of them every day. It's a geostationary satellite. So it provides images a lot faster than Sentinel-2, than your low Earth orbit type of satellite. So we are able to get these images regularly, and we build them into a 10-daily image. So, you know, we get the images that have the least amount of noise, the least amount of cloud cover, and we're able to provide a much more reliable profile of an area, even if it's at a lower resolution. For small water bodies, we also, again similarly, use a mix of data: what we used to get from Proba-V now comes from Sentinel-5 and Sentinel-2, since Proba-V has been retired. And the last is cropland, right? So this is actually a land use dataset that we are in the process of implementing. Now it's on the site, but we're in the process of integrating it more into the workflow. It's 100 meters. And what this is, is actually pretty interesting. It singles out cropland, and we're using it to try to help herders avoid areas that are heavily cropped, right? These are the kinds of areas that they want to avoid a lot of the time, because if herders go into agricultural land before the harvest, the animals could eat a lot of the crops, and this, you know, could create a lot of violence between pastoralists and farmers. So it's just something that we're trying to avoid, and we're trying to help herders avoid these areas if they need to. And then what's the data from the field? What are people giving us? Four datasets. One is cattle concentration. Basically, are there a lot of cows?
Are there not a lot of cows? We don't actually do a cow count, because, you know, if you've worked in northern Mali, if you work in the Sahel, you know that asking someone how many cows they have is incredibly sensitive, so we don't do that. It's just basically what people perceive, whether they perceive there are more or fewer cows than normal. And then market prices. Those are actually quantifiable. They'll provide the price point for millet, for rice, for different animals. And then we also have static data that we get from the state on water points, pasture areas, where they are. These are just regular shapefiles that get uploaded into the server. Now, what's the actual schema, the tools? How do they all come together? So this is a very, very, very dumbed-down version of it, right? There are a lot more steps in here, but I didn't know if it was necessary to go through all of them. If you do want to go through all of them, you can always send me an email. My website's at the bottom. We can talk. I love talking about this, but I figured I'd go with a more simple approach right now. So the first is, okay, the remote sensing data: we get that through an automatic download that is then displayed through MapServer, right? Software a lot of you are probably familiar with. And then it also goes into a script client that we developed, right? That basically takes a lot of this pixel or point data and transforms it into text. If a pixel shows an anomaly of 20% more biomass than we had last year, the script will read it and say: this area has 20% more biomass than it did last year. So we also have OpenWeatherMap, an API a lot of you are probably also familiar with, because a lot of the farmers that we work with want to have weather forecasts, right? They want to know whether it's going to rain in the next few days. So we have that, and that also feeds directly into the script client. We don't actually serve that as a visual layer. That's just the text. And then field data. So the field data is actually brought onto the Orange Mali server. We don't store it on our own. But that gets pulled in through a request, which then gets put into both the map, visually, and the script client. So this is then served on OpenLayers, which is then what the call center agents look at. This is that script that I was telling you about. It's in French, but I provided an English translation on the left. And what's really cool is that in this paragraph you've got multiple data sources that are being shown. So for instance: according to our data, the nearest water point is 4.7 kilometers to the southwest; vegetation conditions are weak in this direction. And again, that is just based on a textual read of the data. Participatory design has been actually super important. All of the layers that you see were designed in consultation with herders and herding cooperatives. Because at the end of the day, it's not the spatial community that's using this product. So for instance, how are biomass anomalies measured? What do we consider a severe deficit? What do we consider a minor deficit? How do we translate this into text that makes sense for herders? So every time we go through a new product, we have to work with herding communities to talk to them about it, to pilot it with them. But then we also have to work with the call center agents. So there are basically two levels of participatory work: one with the herders themselves, and the other with the call center agents.
Because while the call center agents aren't consuming the data, they know far better than we do how that data is actually going to be used and asked for. So, limitations and lessons learned. The first is that we cannot orient the herders, right? So the herder can't call and say, where's the pasture that's good? It's not Waze or Google Maps for cows. There are two reasons for that. One is we can't actually determine whether pasture is accessible or not. We can't do that from remote sensing data, and we can't send that many field agents. And the second is that it depends on customary access rights. Not every herder has the right to access a certain area. Like, there are some communities that have negotiated this access over a very, very, very long period of time with the host community. Others have not. So if a herder is calling from a village or a clan or a community that doesn't have the right to access a place, we can't tell them to go there. So that's why they have to ask about the conditions in a location before we can talk about it. And the second is insecurity. I mean, this center operates in a place that's had an ongoing civil conflict since 2011. So, you know, a lot of danger can be placed on herders. So you don't want to direct them to a place that has insecurity going on. As for other limitations, we actually can't visualize biomass in the dry season. We can only visualize production, and then you can try to model how much will be lost during the dry season. But NDVI images during the dry season can't tell you anything; they're kind of blind. So we have to rely on field data during the dry season. We are still very much dependent on the mobile network, right? Herders have to call the call center to be able to get this information. And this mobile network is very, very, very weak in northern Mali and a lot of parts of Burkina Faso. So this is unfortunately, you know, something we can't really overcome, right? Given the fact that it's a call center, you can't really go to another medium for that. Also, it's very important to balance spatial and temporal resolution with user needs. Okay, what does that mean? So remember how I talked about the Sentinel-2 NDVI data versus the Meteosat NDVI data. The Sentinel-2 data might look better to us because it's higher resolution. But the Meteosat data actually ended up working better, because we got a lot more images, which give us a much clearer picture. So is higher resolution always better? Is it worth the processing time? Is it worth the noise you're going to get? Not always. And then, you know, looking at the other side, is faster better, right? Like, do you need something every day? Okay, for weather, yeah, sure, you definitely need the fastest data possible. But do you need it for pasture quality? You know, if you have figured out the quality of pasture, like the types of grass that are growing in an area, you don't need that updated every day. That's not going to change on a day-to-day basis. And the last is that it's important to meet users where they are. So one of these is, you know, for the longest time the dialogue box would say, you know, this water point is 4.7 kilometers to the southwest. That's not really helpful because, you know, that's not how people navigate. You have to use landmarks.
So we've actually been repurposing a lot of it to work with landmarks, to saying, you know, in the direction of X village, or in the direction of this known landmark, rather than saying the southwest. And then, you know, for weather data, are you going to do absolute measurements? You know, are you going to say 25 millimeters, or are you going to give thresholds? And we found that thresholds were what worked best: light, medium, heavy rain. But those thresholds had to be designed, you know, with users. And you have to find a happy medium because you've got a lot of different users that have different conceptions. So, you know, that was something that we had to consider a lot too. So yeah, that's my 20 minutes. This is me. If you want to reach out to talk about cows and maps, I always love to talk about cows and maps. You can send me an email. You can find me on Twitter at @oren_sa. Or you can use the interface. We have two available; the one for Mali is stamp-map.org. So yeah, thank you very much. If there are questions, I'm happy to take them. And thanks again for your time. Astrid, thank you — you can unmute. Good. Thanks. Okay. Thanks again, Alex, for the great talk and for showing what a great service Garbal is offering for the herders. So send a big applause to Alex for the talk. And we have some questions that I would like to ask you. So one is about the OpenWeatherMap API. And the question is whether it's pretty accurate for the areas you were working with? Yeah, that's a good question. We had to test a lot. And OpenWeatherMap was the one that we found was the most accurate. I mean, is it completely accurate for the area? No. I mean, we operate in a place that doesn't have a lot of weather stations. So you have to rely pretty heavily on the model. So it's not as accurate as it would be in the US or Europe. But out of all the other options we found, it was the one that was the most appropriate. Okay, good. And then there's another question. And yeah, it is: doesn't your system take too much responsibility away from the herders? That's an excellent question. I mean, I think that if we were directing the herders, it would, you know, create a lot of problems. But that's why, you know, the system is designed so that the herders ask about the place they want to go. This doesn't direct the herders; this just gives them information on a decision that they were already making previously. And it was designed to, you know, kind of work in the way that normal herding decision making happens, right? Typically, when herders choose a location, they will call if they have, you know, friends or family there and ask about the conditions, or they might actually send someone, you know, someone in the family, who'll go on a motorcycle to go and scout the area out. So, you know, they already use, you know, different methods of trying to figure out what the pasture and water situation is; what we're simply trying to do is just take that process and make it cheaper, easier and faster for them. Okay, okay. So, we have two more questions. The next question is what skills and team members came together to make this a reality? Yeah, I mean, building the consortium took a very, very — yeah, it's been going on for six, seven years now almost, and it's still, it's still working.
I mean, on the technical side, I would say it took a lot of trying to blend, you know, knowledge of pastoralism and knowledge of how, you know, herding works with — not just necessarily GIS, but picking the, you know, most appropriate toolkits, right? You know, we weren't going to have access to a lot of the resources that we would normally have, given slow internet connections, given, you know, resource constraints. So one of the more important skill sets was, okay, picking the tools that are going to work the best, you know, what kind of setup is going to work the best for a slow internet connection, and, you know, what sort of asynchronous offline collection methods will be used. And then in the wider consortium, I mean, it was a lot — it was, you know, everything from, you know, being able to work with a large telecom provider to be able to get this center up and running, to being able to liaise on a pretty much daily basis with herders, potential users. So, yeah, it was a lot of different skills, and I think we're still working on that. That sounds great. Okay, so there's the last question here in the questions pad. And it's about the data, the points of interest. And the question is, how did you collect and store widely recognized landmarks? And I'm curious about whether OpenStreetMap is involved. It is actually. Yeah, that actually took a lot of discussion, a lot of trial and error, you know, basically asking people, hey, do you know what this is? Does this make any sense to you? And the first, like, few times there were a lot of noes. So basically, we found a bunch of villages from OpenStreetMap, and we ended up using those larger villages. The first ones that we found were the main villages of each commune — I don't know how to call it in English; in French it's a chef-lieu. So, like, the largest village for every commune. And, you know, it's gotten to the point where we did have to do some manual work, where we were asking, okay, does this specific one make sense? No. So we couldn't do this for all of the thousands upon thousands upon thousands of villages. So there was a little bit of manual work, trial and error. But yeah, it was basically pulling from OpenStreetMap and finding what people knew. And I guess this data can still grow. So the more people get involved, the more information you could get about these locations. Absolutely. It's still very much a, you know, a living, growing, growing data set. Yeah, okay, cool. So we are done with all the questions. And it was a great pleasure, Alex, to hear from you and to listen to your talk. All the best for the project and enjoy the conference. Thank you. Bye. Thanks.
|
Livestock herders in Mali and Burkina Faso live under the twin threat of drought and armed conflict. Moving their herds to find pasture and water depends critically on access to reliable information. This talk discusses a call center that uses open Earth Observation imagery and field data to provide herders with information on pasture, water and market conditions. The talk will go over the architecture of the data treatment, demo the interface, talk about successes and failures and show how you can play with the data yourself. Transhumance, or the seasonal movements of livestock herds to find pasture and water, is a centuries-old tradition in Mali and Burkina Faso. The process of selecting routes for movement hinges on a complex network of factors including customary access rights, pasture growth, rainfall, surface water, among others. However, years of climate change and armed conflict have made herding more precarious and prone to rapid changes. As a result, access to data on environmental and market conditions is critical for pastoralists. While satellite imagery has made much of this information readily accessible to the spatial community, few channels exist to transmit this information to herding communities. In 2015, the GARBAL call center was built to provide this data to herders in Mali and Burkina Faso. The call center is powered by an open platform GIS built from remote sensing data on vegetation and water and field data on market prices and animal conditions. Herders calling the center are connected to an agent who uses dashboards to respond to their questions: Is pasture available near me? Is it crowded by other herds? Can I sell my goats for a good price? The call center’s goal is to provide herders with decision-making support in planning their routes. The interface is built on mapserver and uses automated scripts to download and treat Sentinel 2 satellite imagery which then display information on pasture conditions and water availability. Field data is routed through a network of local data collectors who provide weekly updates on livestock conditions and market prices. In addition to an interactive map, the interface provides user-friendly textual outputs that summarize all the layers for any area of interest on the map, which allows call center agents to quickly provide data to callers. This talk will share a number of the lessons learned from the STAMP project and provide a demonstration of the platform (which is openly accessible). Specific topics of discussion will include: The architecture of the system- what worked and what didn’t Maintaining regular field data collection in areas of ongoing active conflict Building and translating GIS data for communities with low literacy Examining the call records to see what data matters the most to users You can use the platform at www.stamp-map.org. For more information, contact Alex Orenstein (info@orensteingis.com / @oren_sa) Authors and Affiliations – Alex Orenstein, DaCarte (Dakar, Senegal) Track – Use cases & applications Topic – FOSS4G implementations in strategic application domains: land management, crisis/disaster response, smart cities, population mapping, climate change, ocean and marine monitoring, etc. Level – 1 - Principiants. No required specific knowledge is needed.
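As a rough illustration of the textual outputs described above — biomass anomalies turned into coarse categories, rainfall given as thresholds, and positions described by landmarks rather than bearings — here is a small Python sketch. The thresholds, wording, landmark table and coordinates are invented for illustration and are not Garbal's actual rules or code.

```python
# Turning pixel/point values into herder-friendly text, in the spirit of the
# Garbal script client. All thresholds, phrases and landmarks are assumptions.
import math

# Hypothetical landmark table: name -> (lon, lat); placeholder coordinates.
LANDMARKS = {"Tessit": (0.98, 15.13), "Gao": (-0.04, 16.27)}

def biomass_phrase(anomaly_pct: float) -> str:
    """Map a biomass anomaly (percent vs. last year) onto a coarse category."""
    if anomaly_pct <= -30:
        return "severe pasture deficit compared to last year"
    if anomaly_pct <= -10:
        return "minor pasture deficit compared to last year"
    if anomaly_pct < 10:
        return "pasture similar to last year"
    return f"about {round(anomaly_pct)}% more pasture than last year"

def rain_phrase(mm_forecast: float) -> str:
    """Thresholds instead of absolute millimetres, as discussed in the talk."""
    if mm_forecast < 2:
        return "little or no rain expected"
    if mm_forecast < 10:
        return "light rain expected"
    if mm_forecast < 30:
        return "medium rain expected"
    return "heavy rain expected"

def towards_landmark(lon: float, lat: float) -> str:
    """Describe a point by its nearest known landmark, not 'X km to the southwest'."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])  # rough planar distance
    nearest = min(LANDMARKS, key=lambda name: dist(LANDMARKS[name], (lon, lat)))
    return f"in the direction of {nearest}"

# Example: one line of the dialogue box for a queried location.
print(f"Nearest water point is {towards_landmark(0.95, 15.20)}; "
      f"{biomass_phrase(-22)}; {rain_phrase(6)}.")
```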
|
10.5446/57269 (DOI)
|
So, whichever of you wants to share, basically what you do is if you share your screen now, one of you or the other should share the screen, then I basically add it to this stream like this and basically in two minutes, at 8.31 my time, I will introduce you and then I'll turn myself off and then come back at the end to help field questions. Shall we switch off the cameras and microphones with each other? So first I'll start, then I'll pass to Giuseppe, or what's the recommendation? What you can do, I think you can — do you have the ability to mute yourself? So yeah, that's what I would do, I'll just disappear, and then whoever is going to talk first, unmute yourself and you talk and then switch it over to the other guy. Perfect, great, thank you. So I'll give people one minute and I'll let you guys introduce yourselves again, I apologize for the mishap, I don't have your bio handy. No worries, no worries. Okay, good morning, welcome to day two of FOSS4G, Puerto Iguazú room. I'd like to introduce Lyubomir Filipov and Giuseppe Baiamonte, apologies for the bad pronunciation. They will introduce themselves shortly as they enter the presentation and teach us about GeoScan, a spatial data country profile tool. Thank you very much for participating and the stage is yours. Great, thank you Michael. Good morning, good afternoon and good evening to everyone and thank you very much for joining our talk today. My name is Lyubomir Filipov, I'm a GIS consultant at the International Fund for Agricultural Development, and today with my colleague Giuseppe Baiamonte, we're going to try to show you something hopefully interesting, a project which we have done for IFAD. So what is IFAD first, a little bit of background on the topic, just one second. A little bit of background information on IFAD — I hope you guys can still see my screen, Giuseppe, if you can just confirm. No? I don't. Michael, is the screen visible? Sorry. Okay, thank you. So a little bit of background on IFAD: IFAD is a specialized UN agency but also a financial institution which has a very specific mandate to try to support the poorest of the poor, rural farmers in remote agricultural areas. So what we did in IFAD in terms of, let's say, GIS challenges or GIS requirements, we shaped in the form of this application called GeoScan. First, we receive various requests for different geospatial data in a number of different countries, starting from, let's say, the Solomon Islands, moving to Cambodia, India, Pakistan, shifting all the way to Latin America or the African region, and these data requests usually come from different domain areas, so social, environmental, economic, climate-related data, and usually of course they are based on a very short time frame to be delivered. So one thing is to produce a number of data sets for the Solomon Islands, but larger countries like China or India or Brazil are much more challenging in terms of data processing, storage and manipulation. So what we further explored as requirements is that very often the data needs to be provided for offline usage, so usually to be used in the field or used in areas without good internet coverage or internet connection, or shared with third-party agencies or third-party companies outside of IFAD's internal network of operations.
We also receive different requests from various user types: GIS professionals who want to export the actual data to do some additional modeling, analysis and queries on the actual GIS files, or non-tech or non-GIS users who don't necessarily know all the technology behind GIS and simply need a report, a map to embed in a presentation or report, or simply to share with a third-party agency. And also, on a higher level, decision makers who are basically looking to take advantage of the visualization power of the maps themselves in a more data-driven approach, so they can prove their point in various negotiation phases or various decision-making phases. So all these different types of users and data are very well structured in the workflow process of IFAD, starting from the strategic country overview, going through a particular project design, and the project monitoring and evaluation. All these activities should be aligned with the existing GIS infrastructure in IFAD, which is entirely free and open source based, built around PostgreSQL/PostGIS, GeoServer, GeoNode and OpenLayers. So we try to stick with the good principles of open source. And as additional requirements on our end, as GIS guys we try to follow good practice on naming conventions, on test, staging and production deployment environments, on security via third-party independent penetration testing companies, etc. And we also try to do the whole project in a more agile approach. So we started back in 2019 with just a few countries. We proved the overall approach with different users, and we came back in 2020 on a much bigger scale, covering the entire IFAD region called West and Central Africa. And now this year we are doing sort of a third version of the application, which actually now has global coverage and global scale. And hopefully we're going to expose this outside the IFAD network as well, for basically public usage by the outside world. This is a snapshot or description of the whole process we did in a nutshell. So these various socioeconomic, environmental, climate and a number of other domain areas we needed to cover with a proper literature and data review in the beginning. So we benchmarked a number of data sources, the majority of which you can see on the screen, approximately 28 different data providers. And out of these 28 different data providers we benchmarked more than 180 geospatial layers, and we shaped all this into proper documentation and metadata descriptions. And then we moved to structuring the various data sets into a common naming convention which covers basically our internal IFAD needs. We shifted then, as a next step, to processing: various conversions, structuring, renaming, analytical and statistical calculations, which Giuseppe will touch on in a bit. And the output products were targeting, first, a nice GIS package with all these 180 layers of information, with nice styles, nice visualization, a nice QGIS project, all put together for one particular country. So the users basically click and grab all the data they need and use it offline or share it with a third-party local vendor, local provider, agency, etc. The next output is a sort of automated report in PDF format containing a number of maps, diagrams and additional visuals which are ready to use for the various users to grab, explore and take, let's say, offline. In order to align with the IFAD infrastructure, we placed the whole application in IFAD's enterprise GIS system, which is based on GeoNode.
So all the applications, all the layers and web applications are exposed through GeoNode, allowing the user to search by country, by topic, by different areas, and to go to the particular data or country of interest they need. And then at the very, very end we focused our, let's say, not so good design skills — because we are map-orientated guys — but we tried to develop an interactive, user-friendly, high-level dashboard with a selected set of indicators and with high-level statistics on a region level and a country level, to be more engaging for a higher-level decision-making audience. This is a beautiful and, I guess, very difficult to read snapshot of our data model, but basically we use this very often to engage with the end user. It's not, let's say, a UML diagram or the data model, but it's based on the ISO theme classification, and we engage with the user explaining the different layers of information structured in the different data themes, and this has proved very, very useful to try to explain all the complicated content which we have behind the scenes at the various different levels of the different applications. These are the nice logos of all the data providers which we are using. Again, the emphasis here is that we do have requirements to provide offline or off-the-grid information, so all this is usually downloaded, pre-processed, packaged and then delivered to the user in the various application streams which I mentioned. The first application stream is this IFAD GeoNode portal which I mentioned, allowing the user to search by country, by topic, or just zoom to a particular area and get all the data which is available underneath the particular zoom level. And of course they have full access to all layers. They can search, they can export, they can even create their own web map applications and share them with other users or departments. This is what our dashboard application looks like, and we aim to expose this towards the end of the year. For now it's available only for internal needs, with some pending penetration security testing to be done in order to be exposed outside. It's a very standard way to present a geospatial web application. We do have the table of contents on the left. What is powerful and what we find very useful is exactly engaging in higher-level decision making. For example, this simple snapshot is displaying the temperature increase in the future scenario in 2061 for the region of West and Central Africa, displaying the areas above 30 degrees Celsius, which are getting bigger and bigger in the future scenarios, and also overlapping with current economic activities or GDP by districts — basically showing that areas of greater climate change impact are going to affect the poor regions of these countries, which is of particular interest to IFAD. Then I'll pass the floor to Giuseppe to share a little bit about some funny or challenging tasks which we found on our way to developing the whole process. Thank you. Thanks, Lyubo. This is a very simplified scheme of how we treat the data. After the data search, data review and data gathering phases, we end up with a lot of data, some of which is global, some of which is just local, so for some countries or regions, both in vector and raster format. So what we do is we clip them, so that we obtain data for the specific country if the source is global, or we verify and pass on the data if it's local.
We have also created the styles for proper visualization for all of these data sets that are packaged all together, keeping a very precise and defined data structure and a precise data naming convention that fits the needs of the agency. On top of this, of course, we use this quantity of data sets to calculate statistics. We have two geographical references. One is within administrative boundaries, let's think district level. Another is a hexagon grid that has been designed in-house and has global coverage. So in the end, we obtain a lot of statistics, both by districts and, more granular, on this hexagon grid. And also we make use of this amount of data that we have to calculate additional statistics, doing just math calculations on the tables. So click please, Lyubo. So basically we have this country data, the statistics and the visualization for every single one of them in this nice, well-organized GIS data package. On that, we also create some automatic reports that now cover over 150 countries that go, yes, from Afghanistan to Zimbabwe. So all of the countries in which the agency works. And it's not just maps; we also have nice graphs and statistics. I mean, a lot of information. And a very interesting use case, as Lyubo was mentioning, is that first of all, we have some users that are not necessarily GIS savvy, so they can use a pre-packaged product that contains all the information that they need. And also we have a use case which is offline usage. So our people on the ground, who maybe don't have access to the internet, working in very remote places, actually have some form of access to information because they have a PDF or they can print it out, so they can carry all that amount of information with them. It was a long journey and yes, a long coding journey, which started as well as an innovation challenge. So it started with a very humble script which is all Python. And well, then it evolved over time with the needs, with the quantity of data that we had. And in its final form, it is a nice QGIS plugin that makes everything quite simple. Also, you can barely see it, probably, because it's a bit small, but it's parallel data processing, the first box, and parallel statistical computation. And that's because at some point managing all of this quantity of datasets actually started to be a computational burden and to take a lot of time. So we implemented parallel processing, and actually this allowed us to process all of the data in a much shorter amount of time and also to scale up our operations to global coverage without having to wait days to produce the data, because sometimes these are time-sensitive operations. So we need to actually have a very fast turnaround time to produce this data and give it to our colleagues that need to use it. This is the preliminary benchmark of the impact of parallel processing, which is between the first iteration of GeoScan and the second one, but then after that we improved on it. So we're saving even more time now. So it was an interesting journey. We had a lot of fun, and it was not a journey without challenges, but we ended up doing some interesting things. One of our activities was to play this game within the team which is called "foreign language or encoding error". And you're all invited to play that with us. So sometimes you end up with a table that has characters that look like that. And so you question yourself and you start to be aware of your knowledge gaps and you ask: is this a foreign language or an encoding error?
I don't know what your guess is, but this is actually Armenian. This was an easy one. Then other stuff like this showed up in a table after the Armenian one. We were questioning it. So again, foreign language or encoding error? This time it was an encoding error. And this was probably the most interesting case because, yes, we were actually thinking it was an encoding error and actually we just became aware of our ignorance, discovering that it's a proper language, a proper script, and so that actually our dataset was absolutely correct. Now sometimes the game was: where is the border? And again, this was challenging and interesting. Although this is not as fun, because what we are showing here in this very streamlined map is several disputed borders or disputed areas around the world. And actually this was challenging on one hand to actually get to define where the border was, and also extremely interesting because these are not just maps but these areas also indicate places of conflict, places where people have been displaced. So each and every case was actually an opportunity for the whole team to have a better understanding of what happens in parts of the world that sometimes are very remote and not under the spotlight. Sometimes the issue was unexpected and hilarious, sometimes also frustrating. So one of the questions that we had to ask ourselves is: where is Fiji? And well, yes, you would say in the Pacific, but actually a map could just look like this, which is not ideal for visualization purposes. But you know, you are all GIS people here, so you know the answer to this, which is, well, reprojection, isn't it? So next slide please. Yes, reprojection, right? Yeah, except reprojection doesn't always work as intended. So these are the big guys that know how to do these things. On the left we see Google, and you can see that there is a little data gap there, and even in Bing Maps there is a data gap. So sometimes the solution seems straightforward but it isn't. But we take this to the next level and this is the issue that we had to face. So we had this mirroring issue that was a bug. But in the end, and that's my last slide, I have to report that we managed to do what we were supposed to do and that Fiji is all in one place. I thank you and I leave the floor to Lyubo for his final thoughts. Thank you, Giuseppe, and sorry, Michael, for going a little bit over the line. Just on behalf of the whole team, thank you for joining the talk, and we're happy to take any questions from you guys. Over to you, Michael. Thank you. Very nice job, guys. Thank you very much for the talk. You definitely generated some interest and some questions. I think we have about three minutes or so to go through those questions. So I'll start handing them off to you quickly. First question: someone else has the same task of collecting and vetting broad swaths of data across many countries. Do you have any published work with more detail about how you went about collecting data or your findings regarding the comparison between different data sources? Excellent question. We do have the final documentation, which basically describes the final selection of products. Internal benchmarking was done within the team members, let's say evaluating different data sources — which is the better one? And there is never a straight answer to that, even within one team and for one particular country.
So we tailor this based on the IFAD needs and IFAD requirements, in some cases with discussion with other colleagues from the respective professional field. Great. The second question is, how much time does it typically take to update an existing data layer or add a new data layer for the end-to-end processing? This was actually the trigger for the application: the approximate duration for compiling data for one country was two weeks, and we managed to narrow it down to approximately two hours of work. And adding more and more data layers, of course, brings more and more processing time, but usually this is done in a matter of hours. And the last question: what underlying tools do you use for the zonal statistics on the hex grid? I'll pass this to Giuseppe as he is the statistics guru, because we had quite a lot of discussion and challenges there. Giuseppe, over to you. Yes, actually, there were several options for the zonal statistics, but in the end, to make it easier for our users, we tried to make it a very straightforward QGIS workflow. So the zonal statistics that are used are QGIS zonal statistics in the current version. Great. Well, thank you guys very much. Very interesting and important initiative. You have the team's email there if you want to try and get that documentation or engage with the team directly. So thank you very much. I'm going to start switching rooms, saying goodbye to Lyubo and Giuseppe and getting Timor up and running. Thank you very much again. Thank you, Michael. Bye bye...
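The parallel per-country processing that Giuseppe describes could look roughly like the sketch below. It is not the GeoScan plugin itself: GDAL's Python bindings and multiprocessing stand in for the real QGIS-based workflow, and the paths, boundary layer, attribute name and naming convention are assumptions.

```python
# Rough sketch of clipping a stack of global rasters to one country in parallel,
# in the spirit of GeoScan's parallel data processing step.
from multiprocessing import Pool
from pathlib import Path
from osgeo import gdal

gdal.UseExceptions()

COUNTRY_ISO = "BFA"                       # ISO3 code used in the output names
BOUNDARY = "boundaries.gpkg"              # hypothetical admin-0 layer
GLOBAL_RASTERS = sorted(Path("global_layers").glob("*.tif"))

def clip_to_country(raster_path: Path) -> str:
    """Clip one global raster to the country outline and rename it following a
    '<ISO3>_<theme>.tif'-style convention (illustrative, not IFAD's real one)."""
    out_path = Path("out") / f"{COUNTRY_ISO}_{raster_path.stem}.tif"
    gdal.Warp(
        str(out_path),
        str(raster_path),
        cutlineDSName=BOUNDARY,
        cutlineWhere=f"iso3 = '{COUNTRY_ISO}'",   # assumed attribute name
        cropToCutline=True,
        dstNodata=-9999,
    )
    return str(out_path)

if __name__ == "__main__":
    Path("out").mkdir(exist_ok=True)
    # Running the layers in parallel is what cut the per-country turnaround
    # from weeks of manual work down to a couple of hours in the real project.
    with Pool(processes=4) as pool:
        for produced in pool.map(clip_to_country, GLOBAL_RASTERS):
            print("wrote", produced)
```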
|
For international development agencies, timely and accurate geospatial data is an essential tool for evidence-based decision-making. Yet gathering, analyzing, and presenting geospatial data is a complex and time-consuming activity that requires a specialized skill-set. This presentation shares the experience of an Innovation Challenge project implemented at the International Fund for Agricultural Development (IFAD) - a specialized UN agency and financial institution. The main goal was to minimize the time and knowledge required to gather and process a vast array of relevant geospatial layers, providing a standardized approach applicable to every country of operation in IFAD’s activities. This objective was achieved via the implementation of automation procedures using open source tools, which resulted in a reduction of the required processing time by a factor of 40 (from 2 weeks to 2 hours). GeoScan is based entirely on an open-source technological stack and uses the latest, verified data sources, providing various levels of users with different information products: automated PDF atlases, ready-to-use GIS data, metadata and web services, web applications, and an interactive user dashboard. The project included the following activities: Data needs evaluation, relevant to the international development sector and aligned with IFAD’s strategy in the agricultural environment in rural areas. Literature and data review to match the identified data needs. Data selection, validation, and detailed documentation on the selected geospatial layers. Data standardization and ontology with automated processing for data structuring, visualization, and statistics calculation. Preparation and generation of automated country reports and structuring the data in GIS data packages. Integration with the enterprise GIS infrastructure in IFAD. Development of interactive web GIS application and dashboard. The project made extensive use of QGIS, GDAL, PostgreSQL/PostGIS, GeoNode, GeoServer, and OpenLayers. The GeoScan application aims at standardizing the data collection and analysis workflow, providing a range of end products that have multiple uses in the international development environment. After an extensive data review, over 180 datasets from 28 data providers have been downloaded for local processing. The data providers include various organizations such as NASA, European Space Agency, European Commission Joint Research Center, FAO, as well as OpenStreetMap, Google Earth Engine, and many others, depending on the various data theme needs. Following ISO19115 core theme requirements and ISO country codes, we have created a standardized naming convention for all data, enforced through an automated process. Data processing includes renaming, restructuring, clipping to the region of interest, statistics calculation, standardized visualization. Automation has been implemented in Python, leveraging the QGIS environment as needed, but an experimental dedicated QGIS plugin has been developed as well. To produce summary documents, we have used QGIS Atlas to generate automated PDF reports for all 37 countries of interest, each including about 175 maps. For offline usage and further processing, all data has been structured with proper metadata description, styles, and QGIS project for direct utilization by GIS professionals. 
The data and reports have been uploaded online to make them available through IFAD’s Geonode platform, which provides them in multiple forms: web service, web application, pdf report, and actual GIS data. A selected set of indicators is available through a custom-developed web GIS dashboard for the region of West and Central Africa. Authors and Affiliations – Lyubomir Filipov, GIS Consultant at the International Fund for Agricultural Development. Giuseppe Baiamonte, GIS Consultant at the International Fund for Agricultural Development. Track – Use cases & applications Topic – Data visualization: spatial analysis, manipulation and visualization Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
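For the statistics step described above — zonal statistics by district and by hexagon grid — a minimal stand-in could use the rasterstats package, as sketched below. The real workflow relies on QGIS's own zonal statistics algorithm; the file names, field names and nodata value here are assumptions.

```python
# Per-district and per-hexagon statistics on a clipped raster, in the spirit of
# GeoScan's statistics calculation. Paths and field names are illustrative.
import geopandas as gpd
from rasterstats import zonal_stats

RASTER = "out/BFA_dry_matter_productivity.tif"   # hypothetical clipped layer

for zones_path in ("BFA_districts.gpkg", "BFA_hexgrid.gpkg"):
    zones = gpd.read_file(zones_path)
    stats = zonal_stats(zones, RASTER,
                        stats=["mean", "min", "max", "sum"], nodata=-9999)
    # Attach the statistics back to the zone geometries for reporting/export.
    for stat_name in ("mean", "min", "max", "sum"):
        zones[f"dmp_{stat_name}"] = [s[stat_name] for s in stats]
    zones.to_file(zones_path.replace(".gpkg", "_stats.gpkg"), driver="GPKG")
    print(zones_path, "->", len(zones), "zones summarised")
```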
|
10.5446/57270 (DOI)
|
Hi everybody. It's a pleasure for me to introduce Rajesh Shinde. He is a PhD student at the University of Bombay and he is a GSoC admin. Sorry, I have to remove the audio from Venueless, otherwise you'll listen to that too. So it's a pleasure for me to be here, because I was a student a couple of years ago, like ten years ago, and it's really beautiful to see a lot of young students working on and improving the OSGeo software. So Rajesh, it's all yours. Thank you so much for the introduction. Very good afternoon, good morning, good evening to all. And it's very motivating to see an ex-GSoC student as the session leader. I look forward to my presentation and hope that you all will like this. So this is a brief presentation by us, the Google Summer of Code administration team for 2021, regarding Google Summer of Code with the Open Source Geospatial Foundation, in this special Google Summer of Code event at FOSS4G 2021. So, a brief intro of ourselves. We are the Google Summer of Code OSGeo administration team for 2021. I am the presenter and my name is Rajesh Shinde. And the other administrator of our OSGeo GSoC team is Rahul Johan. So I'll quickly introduce myself. I humbly participated in Google Summer of Code 2017 as a student. And then I went on to become a mentor for the MapMint organization from 2018 to 2021. This year also I was a mentor. Then in 2020, I got an opportunity to participate and contribute as organization admin. In between, I also participated as a Google Code-in mentor. And currently I am also an OSGeo charter member, since 2018, and a project steering committee member for the ZOO-Project. Apart from me, Rahul's journey with GSoC also started back in 2017. He has done GSoC twice with OSGeo as a student. And then this year he participated as a mentor for the pgRouting community. His GSoC as a student was in the istSOS community. Then from 2020 onwards, we are both contributing as org admins. He was also an administrator and mentor for Google Code-in, and he has now joined the league as an OSGeo charter member, since 2021. Coming to Google Summer of Code: this talk, this session, is about Google Summer of Code and the initiative of OSGeo with Google Summer of Code. So Google Summer of Code, famously known as GSoC, is an online, time-bound collaboration between a student, that is an independent developer, an open source project, which is a participating organization, and Google, which generously funds the project. This is a small snippet which we have taken from the official Google Summer of Code marketing flyer. As you can see from the tagline, it says: help us change the world, one line of code at a time. So over the last 16 years, Google Summer of Code has been giving new students an opportunity to contribute in terms of code to open source organizations. And this has helped the participating organizations and also the students a lot. We'll come to that in the coming slides. Now, to get the full details, you can visit the website given on the official flyer, g.co/gsoc. Now the question arises, what is the role that OSGeo plays in Google Summer of Code? So as I briefly mentioned, OSGeo is an umbrella organization, and OSGeo has been participating since the second year of inception of Google Summer of Code, which is a very, very big achievement, because Google Summer of Code actually started in 2005. So 2021 was the 16th year of GSoC and OSGeo has been participating since 2006.
So this was the 15th year of OSGeo participating in GSoC as an umbrella organization. So I find myself very much obliged and very proud to be a part of the organization administrator team, and also regarding the journey which I have spent with Google Summer of Code working with OSGeo, because personally it has given me a lot of things, which you have seen in my introduction. So OSGeo is an umbrella organization, which means that several different projects, or several individual projects, participate under the umbrella name of OSGeo, and OSGeo then applies for Google Summer of Code as an individual organization. Now these projects include OSGeo projects, the community projects, and then there are guest projects like MapMint4ME which come as guests and apply under the OSGeo organization for Google Summer of Code. Now coming back to the participation process: with respect to students, the participation process usually starts around September or October of a year. So since GSoC 2021 has just ended, the commencement of next year's GSoC would be very soon and the announcement would be there. So as soon as we get to know the announcement by the Google open source team, we send the announcements to the SoC mailing list and the Discuss mailing list. So if you are a student looking forward to the opportunity for Google Summer of Code 2022, then you should definitely subscribe to the SoC and Discuss mailing lists and look out for our announcements. Then the next step is to identify the projects of your interest. So you can draft an introductory email — my name is this, I work here, I am a student in this year, I am interested in this project and I am very much looking forward to contributing to your project; it will be good for my career as well. And after drafting this introductory email, you can send it to the projects which you look forward to applying to and working with. Now, many times a question which I have got from a lot of students in my experience is: I don't know much programming, should I apply or not? Now, to answer this question, at last year's Mentor Summit we raised this question to many of the Google open source volunteers and team members as well. And the response we got from them was: many of the organizations are not hardcore programming organizations, or they are not hardcore developer-based organizations, but their focus is more towards creating something which is open source in nature and useful to the broader scientific community. So even if you are interested and you are ready to learn to code, then you should participate and send the introductory email to the projects. Definitely you will have to do some kind of coding and some programming, but you can learn it over time, and the mentors are there. These mentors are legendary in their own field and they will definitely help you. So if you are a student, just don't hold back from sending the emails; find the projects which you find interesting and go and connect with them. In terms of projects, the participation process is that the announcements would again be there on the Discuss mailing list and everywhere. So if you feel that you want to float some topics for that particular GSoC, you can get in touch with us, the OSGeo GSoC admins, via the GSoC admin email address at osgeo.org, and we will help you with the further process. So this timeline would mostly start after the announcement of Google Summer of Code for that particular year.
Now coming to a very important question that since we have been participating as an organization for the last 15 years, so how does GSoc really help? So there have been a lot of discussion going around and in the organization admin team also we have been discussing this that does it really help the students or when we became mentors. It is a lot of work for the mentors. So this year I particularly mentor for MapMint organization as well along with Gerard Fenoy and other mentors for the MapMint team. So particularly we had five students this year and mentoring five students at a time is a difficult task. So in that case the burden is too much. So to address this, this particular slide we have made. So GSoc, it really helps students to give them a chance to work in a global open source geospatial community. This might sound very trivial, but yes it is important for the student who is just trying to graduate with a degree or come out of a university where there are hundreds and hundreds of fellow competitors, but then there is a chance to stand out against all of them just because they have worked in a global open source geospatial community. So it helps the students also to contribute to an existing code base which would be useful to the users. And this I have seen, the third point I have seen from my personal experience. So when I sat in an interview, I did not join that recruiter base, recruiter, but when I sat in my interview, so my interview was cut short to only 10 minutes just because my Google Summer of Code contributions were seen. And this has been done over the years for some other students as well. I will present the official statistics also in the upcoming slides. But the core point is that employers trust open source contributions and they look forward in students who have produced time bound deliverables and who have worked in a global community who have experience of communicating with a lot of developers over the globe. And moreover, it also assists them financially. So the amount varies over the period, but yeah, it definitely assists them financially. Now with respect to projects, yes, it gives them a good chance to groom younger generation into the community along with a feature which the individual developer works on. So this feature, they get it in a time bound manner in a three months period. So it is a very win-win situation for students and projects that students are getting something new feature in their software and that could be a part of their next release. So and definitely there is an opportunity to focus on modular funded objectives. So what this means is a bigger goal can be divided into smaller, smaller objectives and then they can be supervised to be completed in a particular amount of time with getting funded from Google open source community. So this in all creates a huge advantage and also creates a greater impact for the community projects and the guest projects to join and participate Google Summer of Code under OSCE umbrella. The reason being they get a new set of developers in their community so that the work also gets divided and then the progress is very rapid, progress is very frequent, progress is very supervised because you just have to supervise the developers and then it is also funded by some third party which is Google. So yes, as I mentioned about the official statistics, so this statistics we have taken from Google open source block which came after the results were announced for GSoc 2021. 
So in these statistics you can see that there are a lot of big numbers. So 99% of students plan to continue working on open source, 94% of students will continue working with their GSoC organization. But for me the most important part is this one, which says that 36% of students said GSoC has already helped them get a job or internship, and this 36% might seem very small as compared to other numbers, but in total 36% is very big for a student community, to directly get a job or internship just on the basis of three months' experience. So this is a huge impact which we as participating organizations are trying to make, and thanks to OSGeo community members and projects for coming forward to do this. Now I would take this opportunity to introduce our OSGeo Google Summer of Code 2021 champions. So we had 12 individual developers from all over the world participating as Google Summer of Code students for OSGeo. For QGIS we had Francisco, then Ashish and Vinit for pgRouting, and I would like to mention that Ashish contributed to the pgRouting project for the second time this year. Then we had Aaron, Caitlin and Linda for the GRASS GIS project, and Linda again participated for the second time for the GRASS project under OSGeo. Then we had five projects for MapMint and MapMint for ME, which is the Android version of the MapMint project. So we had Aryan, Aniket, Fatehi, Sandeep and Saurav working for MapMint, and we had Han working for PostGIS, who worked on a sorting algorithm. Now this was not at all possible without the contribution of the amazing OSGeo GSoC mentors. So we had 18 mentors this time for QGIS, PostGIS, GRASS, MapMint and pgRouting, and as admins and on behalf of the entire OSGeo community I would like to thank all of them for the great job they did over the summer in supervising these students, because it is not an easy job to supervise someone who has just joined your community. As I have experienced it being a student and being a mentor: when I was a student I was just like, oh, I am not getting to understand this, help, help, help, and then when you become a mentor you see so many help questions and it is very difficult to decide how to help. So yeah, you have done a great job and thank you for all your contributions. Now special acknowledgments also go to the FOSS4G organizing team, because from this year this has been the first time when Google Summer of Code students are being given an opportunity to present their work in the special FOSS4G session. So big thanks to the FOSS4G organizing team, big thanks to the Google open source team for constantly organizing the Google Summer of Code over the years. Big thanks to the OSGeo GSoC administration team, not us, the prior ones. They have set the platform very well for us so that we could very seamlessly get on board and start working. OSGeo GSoC mentors over the 16 years, over the 15 years. OSGeo GSoC contributors, the alumni of Google Summer of Code, the ex-contributors, ex-developers who are still there and helping some of the students with their code base, and the entire OSGeo community. So thank you to all of you for being a part of it. Now coming to the events and plans which are there in the upcoming period, and some of the events have just concluded. So the upcoming event is the GSoC Mentor Summit. Now this year as well, because of the pandemic, the Mentor Summit would be online. It is scheduled on Friday, November 5th, and for the first time this year Google open source also introduced the first GSoC student summit. It was held online on August 27th.
So yeah, and coming back to the plans, so as a part of the OSGeo GSoC admin team we are planning to prepare a draft for the GSoC administrators team manual to ease the transition of admins. So a lot of steps are required from an admin perspective to announce the program, to get in touch with mentors, to get in touch with students. And in all this there is a possibility that something might be missed in between. So we are planning to prepare a draft of all the events over the timeline so that it becomes very easy for the upcoming organization admins to be a part of the admin team. Yes, so now coming to the point of: is there anything new in 2021? No, I'm not talking in terms of COVID, I'm talking in terms of GSoC. So yes, the first thing is that GSoC students are getting free speaker tickets to present their work in the special FOSS4G 2021 session, and all these presentations are lined up just after this presentation. So I know you all are waiting for me to wrap up, so I'll do that very soon. And with respect to projects, so yes, one thing which OSGeo has approved recently is to allow the mentor stipend, which is given by Google, to be used by projects for their requirements. And I think this is also a very great initiative, and thanks to the board for this approval, because mentors do spend a lot of time working on this, and it will be up to the project mentors to use this stipend based on the project requirements. So very soon the further steps will be taken in this regard for Google Summer of Code 2021 as well. So yes, with this I thank you all for being a part of this presentation, and thanks to the FOSS4G team again for giving us an opportunity to present this report. And if there are any questions, please reach us at gsocadmin@osgeo.org and you can connect with us offline also, either with Rahul or me. So I thank you all for your patience. Rajat, thanks a lot for your introduction to GSoC. It was really wonderful to see all the numbers and also know that starting from student you can grow, become a mentor and admin and all the...
|
OSGeo's Google Summer of Code Initiative has been an inspiring and motivating platform for new student developers to join the OSGeo projects, community projects, guest projects, and incubating projects. In 2021, OSGeo is participating for the 15th year in the Google Summer of Code, and it itself is a great achievement. With this talk, the OSGeo GSoC Administrators shall try to put forth the importance of GSoC with respect to the students and participating projects. The admins would focus on the development of projects with GSoC and encourage projects to be a part of the upcoming GSoC. Over the years, OSGeo's Google Summer of Code initiative has transformed into an initiative full of contributions towards geospatial software development. In the last 15 years, many OSGeo projects comprising incubating projects, community projects, and guest projects have progressed attributed to the contributions of student developers. Some of these students continued to participate as contributors for the projects and went on to take mentoring and organizing responsibilities. This is a true sense of FOSS4G in terms of individual and collective growth of the student developers and the OSGeo community. In this talk, the OSGeo GSoC Admins team would try to appreciate the efforts of all the mentors and students involved till now and present the state of the GSoC 2021. The Admins would also present possibilities for new projects to be part of the GSoC with OSGeo as an umbrella organization. Authors and Affiliations – Rajat Shinde (1), Rahul Chauhan (2), Indian Institute of Technology Bombay, India (1) Track – Community / OSGeo Topic – Community & participatory FOSS4G Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57271 (DOI)
|
Thank you for the next session in the Ushahidi room, and I welcome Till Adams from mundialis and Hinrich Paulsen, who will tell us about HERMOSA. Hinrich and Till are well known in the OSGeo community and they are the co-founders of terrestris and mundialis, and we are hearing now from Hinrich about the HERMOSA project. Good morning, good afternoon, good evening ladies and gentlemen. Welcome to the FOSS4G in Argentina. My name is Hinrich Paulsen and I will present this talk together with my business partner Till Adams, who won't be speaking today, and I will be talking to you about HERMOSA, the Holistic Ecosystem Restoration Monitoring, repOrting, Sharing and mArketplace application, which supports the UN Decade on Ecosystem Restoration utilizing geo and earth observation technologies. Just to give you a quick introduction who we are, we are from the company mundialis, we founded it together. It is accessible through the REST API that GRASS GIS has available and we use it for the processing of the earth observation data that we are using in the project. Now the problem is quite simple, the world is burning literally. Many of you know the examples from Australia, from the United States and other parts of the world, and the climate is changing, there is either too little water in places or there is too much. This is a picture that was taken recently here in July about 30 kilometers away from where I am speaking now, and the damages are incredible. Also because of the changing climate, the damages caused by hurricanes and strong winds are incredible, and humankind is engaged in unsustainable practices destroying ecosystems. Now about two years ago, on the first of March 2019, the United Nations declared 2021 to 2030 the Decade on Ecosystem Restoration. So one of the important things is that ecosystems are being restored in such a fashion that we try and mitigate the effects of climate change. And as the German contribution to it, by the German space agency, the DLR, mundialis was awarded a contract for a two-year project to help with earth observation data and geo-processing to quantify the effects of the restoration processes. Now HERMOSA, I already mentioned, is a web-based application and it is easily accessible through https://hermosa.earth. Anybody can register there, and the important thing is that we tried to identify the tasks that everybody is doing when it comes to ecosystem restoration. So the first component of this application is what we call the identify and connect module. Then there is the organize and implement module, where we utilize geographical information systems to support the restoration process. Then there is the third module called monitor and report, which utilizes earth observation data in various forms to provide the transparency and the verification of action on the ground. And last but not least there is our learn and share module, where you can talk about success stories and communicate about your projects. So the identify and connect module is where the different stakeholders, which can be the NGOs, planting organizations, local communities, universities, bankers, all sorts of stakeholders, identify each other and connect to set up a project. And for this reason we have a web-based mapping application where everybody is able to create their own projects after they have registered. The projects then show up on the map as indicated by the red arrows.
And there is a search functionality so that you can look for geographical regions, you can look for names, you can look for all sorts of things that you can enter into the search bar. Then there is this button to create projects yourself. And once you have done that you are able to enter the data of the project so that it can then be found by the other stakeholders who might be interested in collaborating with you. As soon as you have done that you can upload pictures, videos and other media to support the description of your project. And then as a third step you can start inviting people to join your project, because you obviously are also in the position to search the platform and look for collaborators that you are missing in your consortium. Now once the consortium has formed we have the second module, Organize and Implement, which will support the implementation process, and this is quite easily done because you can upload your own geodata to identify and indicate where the project is located. You can obviously manipulate, you can change the geodata that you have uploaded, through the use of GeoStyler in this case, and make it easily available for others. Now HERMOSA also comes with a bunch of datasets, global datasets mostly, but they can also be local. For example a digital elevation model is available for the whole world. We have the land use and land cover dataset in the application and we have soil types, just to give you a few examples of what is available in the system, and they can be used for the purposes that the project needs. Now once the project has been implemented on the ground, everybody, the stakeholders, the consortium, they want to know how good the project is doing. So for this we have the third module which we call Monitor and Report, and which uses the latest in Earth Observation data and Earth Observation processing to provide the transparency and the verification of what is happening on the ground. So one element that is in here is that you can create your own training data for Earth Observation data classification. So you might go into QGIS and, because you have knowledge on the ground, you create your own training data which you can then upload to the system, and once the data has been uploaded to the system you just identify the region that you would like to classify, and in the next step you would define two points in time. So you would classify, in this example, July of 2019 with this training data, and in a second step you would take July of 2020 to classify the data from this year, and after you have classified both data sets a result would look like shown in this slide, and then obviously, because you have the classification from two points in time, you can do what we call change detection, or what is known as change detection, and then you can detect the change and you can see what is going on on the ground. Now the example here was Sentinel-2; we also have Landsat already in the system, we're still working on integrating it fully and improving the system, but Landsat is there, and Sentinel-1 is also available, and as you can see in this example the classification of this radar data is quite difficult to interpret, so we have included a mechanism to make it more easily interpretable, so in this instance you will see where most of the change has taken place, to support the user in the interpretation of this data.
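As an illustration of the change-detection step just described — classify the same area at two dates, then compare the two results pixel by pixel — here is a minimal sketch in Python, assuming two single-band classification rasters on the same grid. The file names and the class encoding are made up for the example; the real platform does this on the server side.

```python
import numpy as np
import rasterio

# Assumed inputs: per-pixel class codes for the same area at two dates.
with rasterio.open("classification_2019_07.tif") as src_a, \
     rasterio.open("classification_2020_07.tif") as src_b:
    classes_a = src_a.read(1)
    classes_b = src_b.read(1)
    profile = src_a.profile

# A pixel "changed" where the class code differs between the two dates.
changed = classes_a != classes_b

# Optionally encode the transition (e.g. forest -> built-up) as from_class * 100 + to_class.
transition = np.where(changed, classes_a.astype(np.int32) * 100 + classes_b, 0)

profile.update(dtype="int32", count=1, nodata=0)
with rasterio.open("change_2019_2020.tif", "w", **profile) as dst:
    dst.write(transition, 1)

print(f"{changed.mean():.1%} of pixels changed class between the two dates")
```

The transition raster is what a map of "where most of the change has taken place" can be styled from; aggregating it per class pair gives a simple change matrix.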
The next step is that the high resolution data is sometimes not detailed enough, so we have an API in HERMOSA which grants access to very high resolution data. So this is commercial data with a resolution that goes down to 0.5 meters, and as you can see in the image I'm displaying here, if you zoom in to Sentinel-2 — and I'm displaying Sentinel-2 data with a spatial resolution of 10 meters — you see that it's quite blurred and that you can't really see much and can't identify really what you're seeing there in this image. So you can easily use the mechanism that we are providing in HERMOSA: you open the bounding box and you request very high resolution data from appropriate commercial providers, and the result looks like this. So from the comfort of your office, from your desktop, you can easily access also the remote areas which are under normal circumstances very difficult to get to. A lot of people in the field are using drones, but they come with a problem that you actually have to send a team there to fly the drone and to manage the data; most of the people are fighting with large data volumes and so on, and all of this can be circumvented here in the HERMOSA platform because we have this very comfortable API. Now the fourth module, which we call learn and share, we have implemented for the simple reason that most projects are quite similar in nature. So if you're trying to restore an area, you want to plant trees, then other people who also want to plant trees have very similar problems, so it makes a lot of sense to share the experience and the best practice that you have gained during your project, and you can easily share this with other projects and other people. So we have the possibility to create what we call blog posts, so you can write documents, you can upload media, you can write little how-tos, you can provide PDFs, all sorts of documents for the benefit of other people, and as you can see here in this image, pictures are there. You're not limited in the kind of topics that you can address: it can be restoration, it can be data management, it can be the creation of business plans, it can be just about anything that you can think of, and they are connected to the projects, so you will find appropriate content in the geographical regions that you are interested in. And the current status of the HERMOSA project is quite simple: it is ready to use, and as I mentioned earlier, and you will find it also in the slides, you can access the platform through hermosa.earth, you can register and you can use it, but we're obviously still improving functionality and usability over time. And HERMOSA is based on free and open source software and international standards, and we aim for this project to be open and inclusive, so if you are interested in collaborating on the technology and putting it to use in your region, in your area, you're most welcome to contact us, and for more information you can also visit hermosa.mundialis.de to gain more information and to get in contact with us. And with this I close, and thank you very much for your attention, and would love to answer your questions if you have any, thank you very much. Yes, thanks a lot to Hinrich for the great presentation and introduction to HERMOSA, it was very impressive to see this program and to see the global use which it offers. So Till, you are here now for questions, and we have a question in the chat, so the question is: what kind of commercial data is available within HERMOSA?
That's quite an easy question to answer: we implemented an API connection to UP42 data, so this is the data that Hinrich spoke about when we spoke about the high resolution data. And is there a plan to add more commercial data? Probably. The point is the technique is not the problem, so proprietary data providers normally offer an API where you can get the data from. The point is, at the moment we are in the stage of getting the platform into a kind of business model and getting it really in use, and putting in more effort and more interfaces to other proprietary data is of course also a matter of the success of the platform. So that's the point, and that's by the way really an interesting experience for us at mundialis, because normally our business is more project based and not running a platform like this, so that's kind of interesting, how this will perform. We're going to see. So it's like a new approach? Yeah, it's like kind of a new approach, a new idea. As Hinrich said in the talk, we had the luck that ESA supported part-wise, you must say part-wise, the development of the whole platform. Uh-huh.
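On the technical side, the talk mentioned that the Earth-observation processing runs through the REST API that GRASS GIS has available (the abstract names actinia alongside GeoServer and GRASS GIS on the server side). As a rough illustration of what driving such a GRASS-based processing service can look like, here is a minimal Python sketch; the base URL, credentials, endpoint path and response layout are illustrative assumptions, not HERMOSA's or actinia's exact API.

```python
import requests

# Hypothetical endpoint of a GRASS-GIS-based processing service (actinia-style).
BASE = "https://processing.example.org/api"
AUTH = ("demo_user", "demo_password")  # assumed credentials

# A simple process chain: set the region to a raster, then compute NDVI with r.mapcalc.
process_chain = {
    "list": [
        {"id": "set_region", "module": "g.region",
         "inputs": [{"param": "raster", "value": "S2_B04_20200701"}]},
        {"id": "ndvi", "module": "r.mapcalc",
         "inputs": [{"param": "expression",
                     "value": "ndvi = float(S2_B08_20200701 - S2_B04_20200701) /"
                              " (S2_B08_20200701 + S2_B04_20200701)"}]},
    ],
    "version": "1",
}

# Submit the job asynchronously and poll its status URL (response layout assumed).
resp = requests.post(f"{BASE}/locations/sentinel2/processing_async",
                     json=process_chain, auth=AUTH, timeout=30)
resp.raise_for_status()
status_url = resp.json()["urls"]["status"]
print(requests.get(status_url, auth=AUTH, timeout=30).json()["status"])
```

The GRASS modules in the chain (g.region, r.mapcalc) and their parameters are real; everything around them is a sketch of the request/response pattern such services typically use.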
|
HERMOSA: Supporting the UN decade on ecosystem restoration utilizing geo- and earth observation technologies The United Nations declared 2021 to 2030 as Decade on Ecosystem Restoration (https://www.decadeonrestoration.org/) in the hope of being able to avert the worst effects and limit the heating of the planet to 1.5 °C in comparison to pre-industrial times. The companies mundialis and terrestris from Bonn, Germany are developing a digital, internet based platform supporting urgently needed ecosystem restoration efforts by utilizing geo- and earth observation technologies. The project is financed by the European Space Agency (ESA). The platform is called HERMOSA, an acronym for Holistic Ecosystem Restoration Monitoring, repOrting, Sharing and mArketplace. From a technical point of view, the platform makes use of SHOGun, an Open Source WebGIS framework that uses react-geo and OpenLayers on the client side, and GeoServer, actinia and GRASS GIS on the server side, to name just a few. The platform helps registered users to analyze the efforts that organizations face on the ground when restoring ecosystems with a web-based geographical information system. Beside the WebGIS there are modules for on-demand and automatic analysis of Sentinel-1 and 2 data, but also the use of very high resolution (VHR) satellite images is possible. The analysis tools offer a change detection or a land cover classification for example. The main challenge we had to cope with was to deliver a user-friendly tool that allows users to easily perform complex analysis and to support them in interpreting the results. We'll have a look at some of the decisions that were made in that respect. The talk will on the one hand focus on the technical base of the platform but we will also show some of the functionality from the user's and from the developer's perspective. I will also dedicate some time to the challenges arising when releasing such a platform under an Open Source software license. Authors and Affiliations – Adams, Till (1) Paulsen, Hinrich (2) terrestris GmbH & Co KG (1) mundialis GmbH & Co KG (2) Germany Track – Use cases & applications Topic – FOSS4G implementations in strategic application domains: land management, crisis/disaster response, smart cities, population mapping, climate change, ocean and marine monitoring, etc. Level – 2 - Basic. General basic knowledge is required. Language of the Presentation – English
|
10.5446/57272 (DOI)
|
Hi, hi everyone. So, hi, so Mark's presentation will be titled How Open Source Saved the UK Economy 12 Billion Pounds. So, welcome, and the screen is yours. Okay, great. Let me, I just need to change over to the different screen. Okay, yes. So, my name is Mark Craddock. I was the technical director for the UN Global Platform, and this was a big data platform for collaborating across the UN, UN statistics organizations, so all the national statistics offices globally, to learn and understand how to create methodologies to use big data sets to create official statistics. So there was a global vision for, you know, collaboration to harness the power of data for better lives across the UN. And there's two key principles to that from the UN: one is to leave no one behind. So we have to ensure whatever we did, any country, any level of capability, skills or funding or technology was able to kind of use the platform. And we also endeavour to reach the furthest behind first. So if you've got less capabilities or less knowledge or et cetera, we, you know, we target you first to try to bring everybody up to the same level. So, as part of the global platform, it was to support the sustainable development goals. You can find lots of information about that. So if you've just joined and you just popped in to see what I'm going to talk about, I can just tell you in about 30 seconds. So we used open source to save the UK economy 12 billion, about 12 billion, probably more. And we used a various range of open source tools and components. So, you know, GeoMesa, Landsat data, Jupyter Notebooks, Apache NiFi, Kafka, HBase, Spark and GeoServer. And we also used a couple of open methodologies, one called Wardley Mapping and the Platform Design Toolkit, and they're both Creative Commons. So if you want to go off and look at another one and come back to the video later, that's great. But if you want to stick around and find out how we did this, then let's get into it. So when we think about building a platform to support the UN and their partners to use big data, we start with a strategy. Now to build that strategy, we used Wardley Mapping. So that's Creative Commons. And we also used the Platform Design Toolkit, which is Creative Commons. So the Wardley Mapping allows you to look at strategy from a professional point of view. And this is the same kind of technique used by several people in Amazon and Microsoft, Netflix and other large organizations, plus many others. And the Platform Design Toolkit allows you to look at a platform business model: you know, how are you going to build this organization? How are you going to build this platform? How are you going to operate it? And how do you leverage different elements of platforms, you know, for success? And they're both open, and there's links to them in the slides and you can go and find them and take a look. So how did we save 12 billion? So back in 2008, there was the financial crisis caused by the banks. And with hindsight, what the statistics offices noticed and understood is that they could see that trade nearly stopped, you know, cargo ships coming out of China, you know, air freight and freight trade was reducing significantly. And that is now used as an indicator of kind of pending doom. So, because back in 2008, for the UK Office for National Statistics, it took them 12 weeks to officially say, you know, the country's in a recession, we need to do something about it.
And they, you know, they notify the Cabinet Office, the Bank of England, the Treasury and other government departments. And then they can implement action. But because it took 12 weeks to officially say, you know, we're doomed, that cost the UK economy 1 billion every week. Now using the open source tools and components that we talked about before, plus the methodologies, we were able to analyze freight and flight data. And then that was kind of analyzed over a weekend by the Data Science Campus in the UK. And there's some links in the resources page at the end. And they were able to give the Treasury and the Bank of England some indicators and say, yes, the economy is doomed. Yeah, there is no, there are no cargo ships coming out of China. There are no flights flying. You know, this COVID has caused a major disaster. We need to do something now. And that enabled the Treasury, the Cabinet Office, the Bank of England to implement, you know, various measures that saved the economy 12 billion, at least 12 billion. And this, this platform was also used by other governments. So I know between the team that we used to have, we reckon it's probably over 100 billion globally that allowed official statistics offices to analyze the data on flights and ships and understand that there was a pending doom and then inform their various government bodies, departments, banks, et cetera. And then they could implement various measures. Now, if we get onto the strategy and kind of Wardley Mapping, when we, when we look at, when we, you know, when we build the platform, when we look at how are we going to do this, how are we going to get the, what, you know, what's our, what's our purpose? You know, what is the purpose of this platform? And then we need to understand the landscape. So what are the various components? What state are they in? Where do we get them from? Should we use open source? Should we use proprietary? How do we think these components are going to evolve over time? And if we, if we look at the kind of the geospatial landscape, we can probably say with some certainty that it's moving to the cloud. It's going, if it hasn't already, it's going to be in the cloud. So the, the UN Global Platform was built on cloud services. And there's, there's other, there's other practices around, like the kind of doctrine. So what are, what are our kind of principles of how we operate and how do we, how do we build the strategy? How do we observe the landscape and how do we make changes? And how do we, how do we put this in place? Yeah. And then this, this is kind of the, called the Wardley strategy cycle. And you iterate around this many times and you may change your purpose. You may, you may, you know, the landscape will change, you know, Amazon will bring it, maybe COGs get released, maybe Amazon adopts COGs as a standard or something. But you can see that when looking at the strategy components and the technical components and which proprietary and open source components you should use, you, you, we use this kind of strategy cycle to understand it and we iterated many, many, many times. And one of the things about Wardley Mapping, there's a, there's a section here about the how businesses operate. And on the, on the, in the middle there, in the middle column, you've got the, this is how traditional organizations operate.
So, you know, those structures that the departmental, the cultures suffers from inertia, the corporate focus is on profit, open sources about cost reduction, you know, learning this through analysts, analysts, you know, we, we bring up Gartner and say, you know, what should we do? Big data is, is used, not, they're not driven by data. And if we look at the, the open source, the next generation organization, say, I know, I guess Amazon perhaps and others, they see open source as a weapon and they, they, they have a better strategic understanding of open source and how it impacts their value chain and then they can make better decisions about strategy and when to use open source, when not to use it and how to encourage it and support it in various areas to support their overall goal of their, their organization. So if we, if we think about like open source and how, how it's used in competition and strategic plays, we, we have kind of like four areas. We have thinkers, the players, the chancellors and the believers. So the bottom left hand corner, we have the chances. So they, they won't impact runs under our industry. They don't understand it. They don't really know what's going on. They'll just kind of wait and see. We have the kind of thinkers. So it, it isn't impacting our industry yet. So we'll just kind of watch it and see what's going on and do nothing. And there's, there's no strategic play in either of these. And then on the right hand side, you've got the kind of believers. So I guess, you know, most of the people that attend FOSF 40 will be believers. They'll be open by default, but they won't do that through any strategic decisions or strategic play in their, in their landscape and in their environment. They'll just do it because they're open by default and they believe it. And then in the top right, we have our players and that they use open and open source as a way of competing against others. Yeah. So if we look at our, our chances, you know, they often believe that open-text and audit won't impact them. If we look at our believers, you know, they tend not to think clearly about impact on value change. Don't think about value changes. You think, you know, we're just going to be open. It's the best thing to do. It's morally right to be open as best to share stuff, but they don't think about that in the context of their, the landscape that they operate in and of their, of their, you know, their strategy. And if we look at the thinkers, you know, they do consider the impacts and they look at the competitors chains, but they don't really do much. You know, they're kind of, they're kind of sitting and watching me. And then we have the players. So the players, they use open technology as a means to compete. And they basically use it as a weapon against their, their, their competitors. And if you create a kind of warning map and you look at your, your landscaping, you look at the competitors and also your, your potential, your partners, you can understand how those are using the open source. And they, do they even know why they're using open source? And that's what we kind of did as part of the platform. And that's how helped us select the right components for the platform and, you know, the open source and the proprietary ones. So if we think about maybe Amazon, so they're, they're, they're a player, they're in the top right corner. They use open sources, a weapon against their competitors. They, they, they use warning mapping. They're very strategic and they know what they're doing. 
And if we look at Microsoft, maybe they were chances, maybe they're, they're thinkers and maybe they're, they're, they're starting to use it in more of a strategic way. Yeah. It's, it's not there. But if we, if we think about where this comes from, so like through the kind of warning mapping and Simon Wardley, you invented it, then if we look on Twitter, we can see Simon Wardley sat with the CEO of Microsoft. So, you know, do Microsoft understand yet why they're using open source? Are they still, are they, you know, are they chances, are they thinkers or are they players? Yeah. And you can make your own decisions from that. Yeah. Right. So another thing that comes out of the kind of warning mapping and strategy is this kind of a game, gameplay of innovate, leverage and commoditize. So this is quite a complicated diagram, but basically when you build a platform, you let, you let people play on your platform and, and you, you use the metadata from that platform. So, you know, this, this organization's got really big bill. Why, why, why they're using so much infrastructure. Oh, they got so many customers. What they do in all that looks interesting. And you, the good thing about the innovate and leverage and commoditize is you let your customers do all the innovation, then you leverage that and then you build it into your platform. So if we think of Amazon, you know, they'll, they'll let anybody go and build some software on top of their infrastructure. And you know, it's not surprising that Amazon has got, you know, a reputation of eating their customers because, you know, that, you know, that could be their gameplay. That's, that's good. Could be what they're doing. They could be letting people innovate on the platform. They could be, you know, monitoring that, looking at the metadata, you know, what, what are they spending the money on? How many customers have they got? And then if they look like it's going to go into the commodity space, then they may build that into the platform. And we have seen Amazon many, many times. And if you, if you look on some links toward the map at the end, and also there's a, I mean, there's an Amazon book, the latest Amazon book, you know, even talks about this. Yeah. This is what they do. So if we get into the platform business model, so this is not platformers and technology. This is the operating model. This is how you structure your organization, how you, how you, what people and skills you need. And from the platform business model, so all this is in the platform design toolkit. You know, there's, there's quite a lot of information there. And basically the, the platform business model, you know, there's some kind of key points to it, but if you're going to build a platform, you need to prime means of connections. You need A and B to connect. You need to build a date insight of some form. You need some, you know, your customers and suppliers to get together to, to, you know, exchange some value. Yeah. You need some way of orchestrating the ecosystem. You utilize the network effects. So there's a lot of information about network effects. So there's about 16 network effects. And that's in the platform design toolkit. So you can leverage some of those network effects to, to build and accelerate your platform. You need to kind of aggregate that kind of supply and demand. And you need to make it frictionless. You need to make it as simple, simplest user journey as possible. 
You know, if you think about Uber, you know, connecting the drivers and the people want to get a taxi and put it on an app and making it kind of easy as possible. You know, when you think about the platform business model, that's the kind of things you should be thinking about. But there's loads of information about this. And I've got some other presentations about it online. And I haven't got enough time to go into it. But quickly, you know, when you look at the platform business model, you know, it's going to facilitate value changes. So I want to get a taxi. You know, I need a taxi. I'm a driver. I need to get people in the cab. And it does not particularly own the means. So the platform generally don't own the means of production. So, you know, Uber don't own any taxis, Airbnb don't own any hotels, et cetera. And from the kind of UN point of view in the, the way that we were building the platform, the UN. You know, if you think about the UN, they don't actually do much. Yeah. They're very good at connecting people. And in fact, the UN is a platform in itself. Yeah. But it's very good at connecting people and organizations and governments and again, talk about things, but the UN doesn't do very much. And it doesn't have much money. And in fact, it's probably one of the world's largest broke organizations. So it needs the governments to do stuff. So we think about climate change. You know, the UN can get people together in Glasgow for the COP26. But it's down to the governments to actually do something. UN is just getting people there to talk and get them to agree on things. And if we look at another element of the platform, you know, when, if you think about the kind of that ecosystem, it's kind of viable when you reuse components and services to reduce the cost, yeah, a variety of innovations. So you let that kind of ecosystem help you build that platform out. Yeah. You can also, with a platform business model, it's the potential to generate more in different forms of value at the lower cost. Yeah. So when you start connecting, say Uber, for example, originally it was a taxi organization. Now it's doing food deliveries. And it allows you to generate these different forms of value on the platform. So it's a multi-sided platform. So there's more than one way of getting value at the platform. So now Uber does food sales. And I think in Japan, people get Uber's and they just have, they're lunching them and they don't actually drive them anywhere. So and you need to design your platform for disobedience so people can do other things for your platform. And then you'll generate more value and more information. And the other good thing about platforms is, you know, the value of your network goes up with more numbers of nodes. So it's called the network effect. So the more people got using the platform, the bigger, the more value you get at the platform and the more insight you can get from your users and your suppliers and your partners and that ecosystem. And that's how you learn and develop your platform. This is where you build your kind of learning engine. You need to build something to observe that and understand how your customers and partners and the nodes of the, you know, the network are actually interacting with each other. And it's one of the best ways, you know, to learn and create membership. So the more people you get into your platform, the more members you get and you kind of get these network effects in place. 
And you know, the platform business model, I suppose of all the other ones, is one of the best ones for rapid evolution, because you leverage your customers and your partners and the insight they give you about your platform and you use that to make your platform better. Yeah, you know, or don't do stuff that they don't want. And you know, it's one of the best, right. So the rate of innovation of platforms or your business is not constrained by the physical size of the platform, but the size of your ecosystem. Yeah. So the more people you get in your ecosystem, the bigger your rate of innovation, because your innovation isn't coming from you. It's coming from your users and your partners and your drivers and the people that use taxis and the people that book taxis to eat their lunch in and then the people that start booking taxis to get food delivered. And you identify that and then you build that into the platform and you, you know, you get a better platform and the platform grows over time, gets better and more interesting, then you get more users in it, then you get more value over time. And it's, you know, the thing, and basically, you know, the bigger the platform gets, the more innovative, efficient and customer focused it becomes. And these are the things about platform business models and all that's in the Platform Design Toolkit. Yeah. So, I guess, from a, so how did we save 12 billion from more of a, you know, the actual platform side of it? So to analyze that, those kind of flight and ship data, you know, we used those open source components and we've got feeds of AIS, which is ship data, and ADS-B, which is flight data. So ADS-B came from the ADS-B Exchange guys and they're very open and they're very easy to work with. And we, you know, we had 100 billion records of flight data in the platform and we were getting about 40 million records of ships every day into the platform. And that was the stack we used to analyze it. And this is quite interesting. It's a report from the data science side. So DSTL is the data science arm of really the MoD in the UK, so the Ministry of Defence, and they benchmark a lot of big data technologies, open and proprietary, for mapping basically. And there's, there's a nice report there on the different, different options and how they perform at different types of searches and maps. So there was a stream of data coming from, from the ADS-B guys. So they, you know, they partially hosted in, in AWS and they were, they were streaming the data. And there actually was, you know, 600 million records in the data in the platform. So you've got to analyze this data and, and you've got to analyze this data in a stream. So these are the, these are the open source components we use, you know, NiFi, Kafka, HBase, Spark, GeoMesa — I don't think GeoMesa is fully open source, but... or "GeoMesa", I think they say in the US. But this was the stack. And this, this allowed us to analyze, you know, over a hundred billion records of flight and 40 million records of ship data every day. And these are the kind of visualizations we get. So this shows you the kind of flights and ships. So you can see the shipping, the greenness of the ship. And you can see the kind of shipping lanes, you know, from Asia, down around South Africa. And then through the various different canals. And you can see the kind of flights and the hubs where the most of the flights are from. And this is an example of the, you can, you can analyze flight data.
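To make the streaming stack just listed a bit more concrete, the sketch below shows the general shape of such a job: Spark Structured Streaming reading AIS position reports from a Kafka topic and counting, per day, the distinct vessels seen inside a bounding box around a port. The broker address, topic name, JSON fields and bounding box are assumptions for illustration, not the actual UN Global Platform code, which also used NiFi, HBase and GeoMesa for ingestion, storage and spatial indexing.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("ais-port-activity").getOrCreate()

# Assumed message layout: one JSON document per AIS position report.
ais_schema = StructType([
    StructField("mmsi", StringType()),
    StructField("lat", DoubleType()),
    StructField("lon", DoubleType()),
    StructField("ship_type", StringType()),
    StructField("ts", TimestampType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "kafka:9092")   # assumed broker
       .option("subscribe", "ais-positions")               # assumed topic
       .load())

positions = (raw
             .select(F.from_json(F.col("value").cast("string"), ais_schema).alias("m"))
             .select("m.*"))

# Rough bounding box around the port of Shanghai (illustrative only).
near_port = positions.where(
    F.col("lat").between(30.5, 31.7) & F.col("lon").between(121.0, 122.5)
)

# Distinct vessels per day and ship type, with a watermark for late messages.
daily_vessels = (near_port
                 .withWatermark("ts", "1 day")
                 .groupBy(F.window("ts", "1 day"), "ship_type")
                 .agg(F.approx_count_distinct("mmsi").alias("vessels")))

query = (daily_vessels.writeStream
         .outputMode("update")
         .format("console")
         .start())
query.awaitTermination()
```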
So these are flights. I mean, this was an old report I did a long time ago of flights coming out of Shanghai. But when the, you know, when COVID kicked in and the Cabinet Office rang up the Office for National Statistics, you know, what, how's it, what, what's the impact of this? You know, they could pretty much instantly start running queries and looking at this, looking at live flight data and say, well, actually there are no flights coming out of China, or the flights are going to these countries, so they're going to get impacted by COVID. And then that's going to have an impact on this type of trade, for example. And then, you know, we could look at, you know, cargo ships. So these are cargo ships going out of Shanghai, for example. And you could see there was a massive reduction in shipping and cargo ships leaving China. And you know, we, we could, you know, from the official statistics side, they could understand that that's going to have a pretty disastrous impact on the economy. So they were able to produce, you know, those official statistics. So that's it in 20 minutes. So there's some resources, some links, so links to Simon on Twitter, Simone on Twitter as well — they're the creators of Wardley Mapping and the Platform Design Toolkit. Some various other links, a link to the global platform at the top and some links to the Data Science Campus on how they created indicators using kind of geospatial data. So I think, I think it's time for questions now. Is that right? Yeah, yeah, thank you. Thank you. It was a great presentation. I'm sure everybody was able to follow well throughout. So there are some questions, so I'm going to read them to you. So first of all, can we have the papers or the details of the strategic, strategic analysis you've made? So if it's possible, can you share them? Is it possible to share the two venues? There's a YouTube video of that, of me, there's several YouTube videos, one of the, talking about the strategy at Map Camp, one at the Amazon user group. So I can probably put some links into the presentation, but that's all kind of online and publicly available. Okay, okay, so if you would provide those links through the venue, this chat, that would be great. Which discipline, meta-model or research technique did you use to make your strategic model? It looks very interesting. Yeah, so that's the Wardley Mapping. Yeah. Okay, so, Wardley Mapping, and how do you evaluate your socio-technological ecosystem? Socio-technical ecosystem, I'm sorry. Oh, interesting. So you can use that for, you can do that with Wardley Mapping. So if you look at, if you look, follow Simon on Twitter, he has done some various tweets about the socio-technical ecosystem, yeah. So I refer you to Simon, yeah. Right. Or sorry, the other thing is to go to Map Camp, which is mid-October this month, yeah, and there's some, there's a track on that actually, I think. Right, and the last question, how troublesome was the migration to open source, what were the obstacles that were there? Yeah, so this was all greenfield, there was no migration. I would say the migration side really would be technical skills from the people in the statistical offices, understanding how to use Python or big data. So it was more from a technical skill capability point of view, not from a technology point of view, because the technology was all greenfield. You know, as part of that, we looked at what technology was available as a commodity, so we didn't have to build it ourselves.
So we used quite a lot of cloud services, and then we used open source, and then being the UN, we can leverage the UN brand to work with various different partners to, you know, get support from them. Great, thank you. So it's all be cool, I think, if you would be able to share some resources through the chat for the audience, and if you have anything to add, if you have any comments, we may have two or three minutes for you before moving on to the next presentation. Yeah, from my point of view, it's all open, if you've got any questions, just come and ask me on Twitter or get over me on LinkedIn or something, and then I'm happy to do 30 minutes, 60 minutes with you, and go into a bit more detail or in quite a lot of depth if required. Okay, so thanks a lot for joining the session. Thanks a lot for having this presentation of Force4G. I hope it's going well for you, and I'm hoping that it will be a great conference for you from now on as well. So thanks a lot for your participation. Thanks. Okay, so we are going to have a five minute break before continuing with on the next presentation. The next presentation we will have Gakumin Kato, Majura Shu, and Epa Utabajimana from UN, UN Open GIS Initiative. So use cases of open mobile GIS solutions in the context, UN peace operations. So please stand by. We are going to be having them online in five minutes. Thank you.
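Coming back to the indicators described in this talk: the step from raw flight and ship counts to an early-warning signal is essentially a comparison against a pre-crisis baseline. A small illustrative sketch with pandas — the file and column names are made up, and this is not the ONS Data Science Campus methodology, just the general idea:

```python
import pandas as pd

# Assumed input: one row per cargo-ship departure with a timestamp column "departed_at".
departures = pd.read_csv("shanghai_cargo_departures.csv", parse_dates=["departed_at"])

# Weekly departure counts.
weekly = departures.set_index("departed_at").resample("W").size().rename("departures")

# Index each 2020 week against the average 2019 week; values far below 100
# are the kind of early-warning signal described in the talk.
baseline = weekly.loc["2019"].mean()
indicator = (100 * weekly.loc["2020"] / baseline).rename("activity_vs_2019_pct")
print(indicator.round(1).tail(8))
```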
|
The UN Global Platform was developed using open source and proprietary software to enable the production of economic indicators, using GeoMesa, Kafka, GeoServer, Python and some analytics. An indicator of pending COVID doom was provided to the UK Treasury, Cabinet Office and Bank of England. This enabled savings of at least £12Bn for the UK economy. Post the 2008 financial crisis it was identified that AIS shipping location data was a good indicator of pending doom. The UN Global Platform developed a location analytics platform for analysing AIS and ADS-B data in real-time. When the call came from the UK Government to ask what impact COVID-19 would have on the economy, the Platform was used to develop economic indicators of pending doom. Authors and Affiliations – Mark Craddock Co-Founder & CTO Global Certification and Training Ltd Previously, Director UN Global Platform Track – Use cases & applications Topic – Data collection, data sharing, data science, open data, big data, data exploitation platforms Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57274 (DOI)
|
So, our first session today is by Linda Kladivova. Hi, Linda. Hello. Hello, guys. So, I'm Linda Kladivova and probably I should first tell something about me. I'm a PhD student studying at the Czech Technical University in Prague in the branch of Geomatics. And I'm particularly focused on big data and processing in databases, because besides my other interests like the GRASS GIS GUI, I also work at the Czech Office for Surveying, Mapping and Cadastre on databases for displaying cadastral data. But today we will not talk about databases, but we'll talk about a different thing which I've kind of fallen for, and it's the GRASS GIS GUI, or let's say a new generation of the GRASS GIS GUI. So, the table of contents is quite straightforward. We will talk basically about GRASS GIS before changes and the GRASS GIS GUI after changes. And to begin with, I have a short story about a new potential user and an unfriendly start-up screen. So, let's have a look at it. So, here we have scene one, which is maybe quite familiar to many people who are going to start a new software; at least from my own experience, I'm always looking forward to starting a new software to try something new. And then we have scene number two, where a new potential GRASS user starts GRASS GIS and suddenly sees the start-up screen. And as you can see, there are three strange places in this start-up screen, namely: select GRASS GIS database directory, then select GRASS location and select GRASS mapset. And a new potential GRASS user doesn't know anything about these data structure elements. And he needs to fill something in. So, a new potential GRASS user gets confused because he has no idea what database, location and mapset mean. And then we have actually two options. The first option is a good one, and it's an option where a new potential user starts searching for some information in the documentation and he will finally start to understand what to do. And then we have the second option, a much sadder option, when a potential GRASS user just remains potential. So, here I would like to say something about GRASS before changes. How did it actually look? Maybe those of you who know GRASS were used to meeting the start-up screen each time GRASS was started. And if I found it correctly, the start-up screen was part of GRASS GIS already from version 6. And then inside GRASS GIS, there was the data catalog, the data catalog having minimum management functions. And also, you can notice that it wasn't very important because it's the fourth tab basically. So, we had a kind of paradox, because on the one side we have a quite confusing start-up screen, very unfriendly especially for first-time users. And on the other side, you had the data catalog with quite huge potential but, so far, miserable functionality. So, how did it actually change? Last summer, I worked on the new start-up mechanism project. It was a Google Summer of Code project. And with the help of all the other GRASS developers, we finally managed to change it. To change it in a way that hopefully is much better for first-time users, but I think as well for normal users, professional users. So, in this place, I would like to introduce you to the new generation of the GRASS GUI. Probably the most challenging thing we faced as developers was the existence of the start-up screen, or rather the removal of the start-up screen. And in the new version, GRASS 8, there's no start-up screen anymore.
Instead of the start-up screen, all data management functions are part of the data catalog. So, we can say that the data catalog is now a center point of GRASS GIS. And it's true that there is still the confusing, or kind of confusing, data structure hierarchy. But now, at first glance, you can see the structure in the data catalog. So, it's visually better represented. And I think it's also something valuable for professional users, not only for first-time users, but especially for first-time users. So, here, as I said, the data catalog nicely captures the data. And as the title of this slide says, basically, the data catalog takes over the role of the start-up screen. And it offers even more. For example, there's the set of basic functions that were also accessible through the start-up screen for creating, renaming, and deleting databases, locations, mapsets. But we also have some new functions. For example, adding multiple GRASS databases, or new management icons for quicker work, basically. And then there is, for instance, an interesting thing called Mapset Access Info, which tells you the state of the mapset, basically. So, you can see that the mapset is in use or owned by a different user or that the mapset is current, for example. As you can see, the data catalog in new versions is much richer than it used to be. In this place, I would like to emphasize the difference between the data tab and the display tab. Earlier, it was called the layers tab. You can see this here. Maybe some of you are confused now because you were used to having the layers, or let's say display, tab in the first place in the row. And now it's actually in the second place. But the display tab is still here and it takes care of individual map layers and their design, their symbology. Nothing has changed about it. Only the order of tabs has changed. Well, we move on a bit, and as I mentioned somewhere in the beginning, I mentioned the collocation first-time user or newcomer. And we devoted ourselves to this topic of how to enhance the first-time user experience in GRASS GIS. And besides the removal of the startup screen, we also implemented a set of infobars. So you can see here the example of infobars used for the default location. I will talk about it now. So when you start GRASS GIS for the first time, you will be redirected straight to the default location. You can imagine the default location as an example of a project in GRASS GIS. And this default location contains the vector world map and it uses the coordinate system EPSG 4326. So you are advised by the infobar to change this coordinate system because likely you will need a different coordinate system for your analysis, according to your data. So then we have the second option of how GRASS GIS can start, and it's probably the most common option. It's the option when GRASS GIS starts in the last-used mapset. And there's one condition: if it's possible. If it's not possible, we have special cases when the last-used mapset is not found, or is locked by another session, or owned by a different user. And then GRASS GIS starts in an empty temporary location. And as you will see, again, you're advised how to move on in this case. So that was something, I would say, about the new startup mechanism. And then I would like to show you the so-called dark mode, how it looks in GRASS GIS. So personally, I think it looks quite nice. If you think the same, you can try it out. It's very easy to try it.
You just need to change the settings of your operating system. For example, I use Ubuntu, and to switch the mode I use the GNOME Tweaks utility. Well, now we are heading to the last topic, the single-window GUI. It is still in the process of implementation, but it should soon be accessible through the development version, and it will probably land in GRASS GIS version 8.2. It is something I have been implementing through Google Summer of Code this year. I have implemented just the first steps, so it still needs many improvements, but you can at least see some brand new screenshots of how it could look. It is built on the notion of dockable panes: all the tabs as you know them will take the form of dockable panes. Here you can see some examples — for instance, you will have the option of minimizing the panes; here the Data tab is minimized, and all panes can be minimized. You can also split the central map display notebook, as you can see in this example. And that is basically everything. So I would like to thank you — I and the GRASS GIS hungry cow, which I borrowed from my Google Summer of Code mentor, Vaclav Petras. We both thank you very much for your attention, and if you have any questions, please don't hesitate to ask. Thank you very much for your talk, Linda. GRASS looks amazing right now. And we have a question for you: if I add new mapsets or locations via the command line, will the GUI pick them up automatically? Yes, it will add them automatically. All right, and so far no more questions. Maria is saying that she is going to miss the startup screen. Oh, once more, sorry — can I see some of the questions? Come again? Ah, okay, I can. You can see the chat in Venueless as well, but there are no more questions. If it's fine with you, we can end here and start setting up the next talk. Okay, so thank you very much for your attention again. Thank you very much, Linda. Bye bye.
|
In the past, few people would have described GRASS GIS as intuitive. Therefore, it was even more surprising when I got this response from beginner students! Come and listen to a presentation talking about the new generation of GRASS GUI. You will be surprised by the intuitive start, convenient data organization, GRASS in dark theme mode, and many other pleasant functions which me and the development team have prepared for you. Stay tuned! This talk will highlight the major improvements of the GRASS GUI in version 8 which is much more user-friendly to existing users as well as to completely new ones. Authors and Affiliations – Linda Kladivova (1) Vaclav Petras (2) Anna Petrasova (2) The GRASS GIS Development Team (3) (1) Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic (2) Center for Geospatial Analytics, North Carolina State University, USA (3) Global Track – Software Topic – Software status / state of the art Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57275 (DOI)
|
Now we are going to hear Martin De Wulf from Bluesquare. He is the CTO at Bluesquare, a Belgian company specialized in public health projects in emerging countries, and he will talk about Iaso, an all-in-one platform for data collection and geographical registry. So let's welcome Martin. Hello Martin, how are you? Fine, fine, thank you. It's pronounced Iaso, by the way — everybody has trouble with the name. Ah, sorry, sorry. So, do you want to share your screen, or introduce yourself first? Maybe I left out something important. No, no, everything was right — the CTO of Bluesquare; I'll tell a little bit more during the presentation. Okay, that's very nice. Now I see your screen. Perfect, great. Okay, I leave you to it. So I'm going to switch at some point from my slides to a live demo, because I like to live dangerously. This presentation is about Iaso — from the font you might have a doubt whether the name is Iaso or Naso, but it's Iaso. It's a platform for data collection and geographical registry: a new web plus Android platform that is geared toward enhancing geographical data by benefiting from routine health data collection activities, and I'll show you exactly what I mean by that. About me: I'm a software engineer by trade, working at Bluesquare, which is a public health software company based in Brussels, Belgium, and I'm now the CTO of that company. My technical skills are in the technologies listed there, Python and so on. Before that, I had a long history in startups, and a PhD in computer science from the University of Brussels. So that's about me. Maybe now a few words about where we are coming from. First, the name of the software: it comes from a Greek goddess — it's not an acronym — the goddess of health, hence the name. It was built at Bluesquare and it comes from two projects. One was about sleeping sickness. I won't go into the details, but the main thing that came out of it is that there is a big routine activity of testing people for sleeping sickness, sending teams to villages that are not yet registered in official records. The idea is that you send a team to test people, and at the same time you collect GPS points for the villages where they live — we are talking about countries where the mapping is not complete at all. That first part was a data collection system made only for sleeping sickness. Then we were also working with open source tools derived from ODK, Open Data Kit, and this gave us the technology to set up generic solutions for surveys and forms. Basically, we merged both ideas. The rest of this talk is about what that means: merging a system where you take a GPS point and add information to an existing health pyramid — a tree of org units, that's the name we use for it — with a generic form system. Let me then go over the concepts that we use in the system. We have basically two important concepts. First, we have forms — forms that you can fill in online but that you specify through Excel files. You can specify questions that have a type: for example, you can ask for an image, for GPS points, or obviously for text or numbers. You specify the label of the question, its technical name, and how it is going to be displayed in the mobile application. This is ODK — I'm not showing anything that we invented there.
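The Excel-based form definition described here follows the ODK XLSForm convention: a spreadsheet with a `survey` sheet whose rows are questions and whose columns give the question type, internal name and displayed label. As a rough, hedged illustration — the question types shown are the generic XLSForm ones, not necessarily the exact set Iaso supports, and the content is made up — such a sheet could be generated with pandas:

```python
import pandas as pd

# Minimal "survey" sheet in the XLSForm convention: type / name / label columns.
survey = pd.DataFrame(
    [
        {"type": "text",     "name": "site_name",  "label": "Name of the community health site"},
        {"type": "integer",  "name": "n_workers",  "label": "Number of community health workers"},
        {"type": "date",     "name": "visit_date", "label": "Date of the visit"},
        {"type": "geopoint", "name": "location",   "label": "GPS location of the site"},
        {"type": "image",    "name": "photo",      "label": "Photo of the site"},
    ]
)

with pd.ExcelWriter("health_site_form.xlsx") as writer:
    survey.to_excel(writer, sheet_name="survey", index=False)
```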
You have those forms, and they can then be interpreted and presented in a mobile application or on the web so that you can fill them in. What we added on top of that existing technology is the idea that you have a hierarchy of org units. That's the picture I made here, where you have for example countries at the top, then provinces, down to households — and this is configurable; it could be about schools, for example, and not about health. So we have different types of hierarchies living in the system. Each hierarchy is made of what we call organizational units, and each one has these essential features: a name, a parent, and a GPS point or a territory — or neither, sometimes, obviously. We also have external IDs, which allow us to link to other systems, and, very importantly, a notion of groups — for example, a group of the facilities funded by a given NGO, this kind of thing. So these are the two main concepts. And what do we do with that? We do two things, actually. First, we support a lot of routine data collection for various health systems. We have all those apps on the Play Store — it's Android only. The data collection is mainly about health facilities, but there are also some projects about schools. The mobile application works completely offline, it is centered around the notion of the territory hierarchy, and one small but important feature is that it is possible to suggest the addition of org units from within the application. This is very generic, so I will just show the application so that it's a bit clearer what we are talking about. It's an Android application; here I have it in a simulator. This is the one for collecting data about community health sites in DRC — it has now been deployed nationally. It's a very simple app, designed to have basically two buttons initially: one that allows you to start — it's a French-speaking app, so this is the start button — and then an upload button to upload the data. What happens when you start encoding? You first have to pick where you are in the country — very classical, I would say: you say in which province you are, then in which health zone, and then in which health center. And then you are at the stage where you can add an org unit. In this application it is constrained: you can only create a community health site, not provinces — you cannot create just anything. But you can create a new unit, and it will appear in the hierarchy, so now it's in there. And for this unit, you can fill in a form. We have two types of forms. Some forms are specified, as I told you, as Excel files, and you can start filling them in immediately — so you have a date field and so on; it's a form, so nothing very surprising. But what's nice is that in this kind of form you can use the features of the mobile phone: importantly, you can ask for GPS points, and you can take pictures — here you get a fake picture from the simulator. So you have this very simple form application that lets you fill in those forms. I won't go to the end of the form, but you can fill in all of this and then send it to the system.
What we then offer is simply that this data collection effort makes all those facilities appear in the system. For each one — I'll just pick one — you are able to see the data. So this is a community health site that appeared through data collection, with its picture and all the questions that were filled in. It is linked to a health center as a new community health site, and what you can do for such a community health site is validate it: somebody who knows the country can double-check that the data is correct and that the location is correct too. There is a form linked to it, obviously, so you can, for example, decide that this is the official location of the site — you can verify the data, verify the forms, and when you think it's okay, you can change the status of this community health site and say that it is validated and should appear for everybody in the system. So that was the data collection part — I'm about halfway through my talk, and so far it's quite classical. The second important part of the system is that we allow you to load multiple hierarchies into it. You usually have a reference one — we're mainly in health, but it can be anything. What happens a lot in projects is that you have the official data source, and then multiple copies derived from it over time, with additional data collected on those other hierarchies. In Iaso, we simply allow those hierarchies to exist in the same system — I'll show you what that means in a moment — and you can create links between them. For each org unit present in the system, let's say a health facility, you can find the equivalent in another data source, declare that those two are linked, and then start to compare the data you have in both systems. This is the kind of work we have been doing in countries like Niger, where we have been merging multiple data sources: for example, the HMIS, the national health system, contains some information about health facilities; another dataset contains information about communes, which are small administrative regions; and then you have information about villages — and you have to take all of this and merge it together. So we have a system that allows all of that to live together. And because everything is linked, if you produce a result — a geographical referential for the country — you can trace back to the original data and understand which data sets it came from. So we allow comparison and merging of multiple data sources. As I showed, there are various status and comment features allowing validation of the referential in a distributed way, in the sense that many people can take part: it's a centralized system with a web dashboard and different authorizations, so people can, for example, validate information only for their own region. We also implemented a lot of traceability and possible rollbacks on the data.
Let me just show you what this all means practically — I picked a good example to go faster, obviously. It means, for example, that in the same system you can look at a given health zone — this one is called Yasa-Bonga — and you can see the official shape, which is the blue one here, but you can also see the shapes existing in the other data sources, which are the ones in red. You can also see the health facilities that are supposed to belong to that health zone, so you can easily spot that some of them fall outside of it. It makes it quite easy for end users who are not so used to GIS systems to spot problems, so we created this interface that allows you to compare the various data sources. Here you have the original one, and the colors give you the information about which source the data is coming from; you can browse all of this, it's very visible, and you can also decide to reuse some of the shapes to replace the one you have, or to use points from another source to replace yours and improve the system. Any change that you make is kept: for example, I made a small change a few minutes before the talk, so you can see that one hour ago I changed the geometry of this zone and I also changed the aliases — the additional names you can record for each org unit. For example, this region is called Zone de Santé de Yasa-Bonga, and sometimes it's written Bonga-Yasa — there are alternative names, and we support all of that. We also let you browse the links to the other data sources, matching the slides I was showing just before, and you can see, for example, that a different naming is used in that data source — the creation date is maybe not that interesting, but you can also see the differences in shapes and so on. It's really designed to support this comparison and merging of multiple data sources. Normally, if there were a form attached here, you would find it here as well and could use that information to enrich your system. I think I've covered quite a lot. I showed what you have for shapes, but obviously it also works for points. For example, here we have a health center in Kinshasa, in Congo, and you can see that there is an official point here, but that through various data collections we have additional points that were collected during many different activities. You can see all of them and decide which one is the right one — this is work that is supposed to be done by people who know the field and are able to make that decision. They can also reuse information available through OpenStreetMap, through the base maps at the bottom — we have multiple possible layers — and we also support comments if you need to flag something that you find a bit strange. Okay, so that was the demo. The big picture is that Iaso is a system able to handle hierarchies of org units with geographical information. We can send that information to a mobile application, where it can be enriched with additional org units and with forms that you fill out against the org units. We can load data from additional sources — CSV files, GeoPackage, and DHIS2, which is one of the main systems we work with:
an open-source software platform for health data. We also provide an API, so you can work with the data, create links, and write algorithms that match records between the various data sources — and we do a lot of that using mainly Jupyter notebooks and general data science tooling. I've covered what's on this slide; what I didn't have time to mention is that we have a lot of additional features, such as multi-search — searching over multiple data sources at the same time. It is multi-tenant, so you can have multiple accounts for different organizations on the same server. It is open source — I forgot to mention that so far — it's available on GitHub and you can install and test it. And we have lots of feature flags for the mobile apps: for example authenticated users, or automatic upload as soon as a network connection is available, because the mobile app works totally offline but you can force it to upload data every time a record is created. In terms of tech, we are Python and ODK based, and maybe what's important to know is that we are going in the direction of supporting micro-planning activities with this software, which means planning the activities of teams in the field based on the org units and the forms to fill — and so also collecting, for example, geographical data along the way. So what is Iaso? It's a routine data collection tool supported by generic mobile apps; it allows the enhancement of geographical data thanks to mobile device GPS — that is what is happening now, as new devices appear in those countries and we can start to use them; it allows comparison and merging of multiple data sources into a single data set; it's a validation tool for officials; and it has interfaces both for users from non-programming backgrounds, which is what I showed, and for data scientists through APIs and various simple screens. One last point mentioned here: in the short term we want to do an integration with OpenStreetMap, both for importing data and for exporting. You could compare Iaso, at some point, with the philosophy of OpenStreetMap in the sense that it has the same idea of crowdsourcing data, but the main difference is that we do validation by officials. The end goal is to provide tools to the officials of countries to enhance, and decide on, the reference geographical data for their country. And that's it. Thank you, Martin, that's a very nice presentation — the app looks very useful. Let's check the questions; please post your questions in the question tool. No questions so far, so I have one: I wanted to know, if I didn't understand wrong, whether you can only upload point features? No, no — you can upload shapes; we have a GeoPackage import mainly, but very often the geographical features will come from DHIS2, which is the software used to host the health data of many countries in the middle of Africa. Oh okay, that's great. And the information that you can connect to those features — is it open-ended, or do you only have some structured fields? It's open: to the geographical information of those records, the shapes or points, you can link as many forms as you wish. One of those forms is supposed to be the standard one, with the official information, but you can also have multiple variants of that data. For example, if you want to know what the number
of beds in health facilities was two years ago, you can have a form that gives you that information, and then you have the current value in another form, the current one. Okay, great, that sounds very useful. Are you currently using it? Yeah, in DRC, and we're working with it for our PBF — performance-based financing for health systems — projects in at least eight to nine countries currently, so there's quite a lot of usage now. Do you know if the local authorities are using this information for management? For management of these things, sorry? Yes, yes — the system feeds back into DHIS2, which means that, not only for Niger and DRC, we have been asked to feed the information collected and curated through the system back into the national health system. Okay, right. We have one question: can the user interactively modify the stored information? Yes, yes — you can modify shapes, points, anything, and we try to keep the traceability of all changes too, so we also know who made a modification and we can roll it back. Great. Okay, thank you very much, Martin, for your talk, it was very interesting. We say bye to Martin. We will be back in five minutes with Rosa Aguilar and her talk about an open geospatial interactive tool to support collaborative spatial planning.
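The talk mentions an API, used mostly from Jupyter notebooks, to pull organisation units and build links between data sources. The sketch below only illustrates that matching idea: the base URL, endpoint, parameters and field names are hypothetical placeholders (check the Iaso documentation on GitHub for the real API), and the fuzzy name matching is a deliberately naive stand-in for the real linking algorithms.

```python
import requests
from difflib import SequenceMatcher

BASE = "https://iaso.example.org/api"          # hypothetical server URL
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credentials

def fetch_org_units(source_id):
    # Hypothetical endpoint and parameters -- adapt to the real Iaso API.
    r = requests.get(f"{BASE}/orgunits/", params={"source": source_id}, headers=HEADERS)
    r.raise_for_status()
    return r.json()["orgUnits"]

def best_match(name, candidates):
    # Naive fuzzy matching on names; real pipelines would also use geometry and parents.
    scored = [(SequenceMatcher(None, name.lower(), c["name"].lower()).ratio(), c)
              for c in candidates]
    return max(scored, key=lambda t: t[0])

reference = fetch_org_units(source_id=1)
other = fetch_org_units(source_id=2)

for ou in reference:
    score, candidate = best_match(ou["name"], other)
    if score > 0.85:
        print(f"link candidate: {ou['name']} <-> {candidate['name']} (score {score:.2f})")
```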
|
Iaso is a platform created to support geo-rich data collection efforts, mainly in public health in emerging countries. The key feature that it supports is that any survey is linked to an organizational unit that is part of a canonical hierarchy. Each one of these org. units can have a location and a territory. The mobile data collection tool can be used to enrich this hierarchy with additional GPS coordinates, names corrections, etc ... which can then be validated by officials of the organizations in question through the web dashboard (this is consequently similar in some aspects to OpenStreetMap, but with validation before integration in the reference dataset). This leads to continuous improvements of the geographic references available through the routine activities already planned (e.g. locating and registering health facilities while investigating malaria cases). The tool has been used in multiple data collection efforts, notably in health services in D.R. Congo, Niger, Cameroon, Mali and Nigeria and is more and more used to compare multiple versions of official organisational hierarchies when a canonical one needs to be rebuilt. We are for example working on such efforts to rebuild a school map for DRC with the NGO Cordaid. To help for this type of project, we provide location selection interfaces, multiple levels of audits and an API open to data scientists for analysis and mass edits. This presentation will demo the main features of the platform, and give some context about its creation. Iaso has been created by the company Bluesquare (https://bluesquarehub.com/, based in Belgium), specialised in software and services for public health, and has become open source under the MIT License in November 2020. It is still under heavy development, but is already the basis of at least a dozen projects. On the roadmap, we have features for a patient registry, monitoring tools for data collectors, and microplanning activities (producing routes for monitoring teams, or vaccination teams). Iaso is made of a white labeled Android application using Java/Kotlin and reusing large parts of the ODK projects and a web platform programmed using Python/GeoDjango on top of PostGIS. Frontend is mainly React/Leaflet. One of the aims is the ease of integration with other platforms. We already have csv and geopackage imports and exports and target easy integration with OSM. You can find the platform and its documentation here https://github.com/BLSQ/iaso The mobile application is available here https://play.google.com/store/apps/details?id=org.bluesquarehub.iaso.demo Authors and Affiliations – Martin De Wulf, Bluesquare. Track – Software Topic – FOSS4G for Sustainable Development Goals (SDG) Level – 2 - Basic. General basic knowledge is required. Language of the Presentation – English
|
10.5446/57276 (DOI)
|
OK. What is it? So apologies — sorry, we can't hear you. OK. So, apologies for the technical difficulty with the video; René-Luc is going to proceed and give the presentation live. We have a little bit of extra time, but we'll have to stop promptly at 7:25. Oops — that's the wrong slide deck, sorry. Okay. Thank you for attending this presentation. My name is René-Luc and I'm the manager of the company 3Liz. We have been eight people since March 2020. We are QGIS and PostGIS lovers; we are QGIS core contributors, mainly on QGIS Server, because we are QGIS Server maintainers, and we have been doing open source for 15 years and will never stop. We provide server hosting, training and other services. For those who don't know QGIS: it started as a simple PostGIS viewer. It is free and open source, it runs on many operating systems, it supports many file and database formats, and it provides many features like printing and processing that can be reused with QGIS Server — because QGIS is not only a desktop application. It is also an OGC data server, QGIS Server; a processing console tool that lets you run QGIS Processing algorithms from the command line; and a mobile GIS solution with QField. If you need to publish a web map application and you are a QGIS user and lover, you want a full-featured application with access control, editing and printing; you don't want to redo for the web what you have already done on the desktop; and you don't want to, or can't, code for the web — then Lizmap is made for you, because Lizmap loves QGIS Desktop. The QGIS desktop project is the web map configuration: symbology, print layouts, attribute tables, editing forms, expressions. As with QField, you prepare once and deploy everywhere — here, on the web. Lizmap provides a plugin for QGIS to configure only the web-map-specific options: scales, the tools you want to provide with your application — measure, printing, edition — and access control. Lizmap also provides a web admin panel for access control, to manage users and authentication, and it is open source under the Mozilla Public License. Lizmap started as a simple QGIS project viewer based on QGIS Server and OGC standards, but it has become a full-featured web GIS application by integrating QGIS features like printing, relations and expressions, so you can build not only a web map but really a web map application. Publishing with Lizmap is quite simple. First, create and set up your QGIS project: layers, symbology and server properties. Then use the Lizmap plugin to set up the web map options: extent, scales, and the tools to be available with the map, like editing, measure, printing and others. Finally, send the QGIS project, the Lizmap configuration file and the data that is not already stored in a database to the server — and there you have your web map application based on a QGIS project. To illustrate the main functionality of Lizmap, I will present some use cases. The first example is a little city in the French Alps publishing thematic maps for its citizens, with a focus on simplicity. Here is the landing page with the web maps — it has colorful images to illustrate each map — and here is a map with only the necessary data: the public gardens, with nicely formatted information for each place.
The second example is about the capability to customize the user interface. One case is the Calvados, which promotes Norman landscapes on its landing page; the other is the Gard, which promotes ancient monuments, with the arena of Nîmes here in the background of the landing page. In order to enhance the value of geographic data, it is important to format the associated information properly. Here an urban planning agency uses tooltips and expressions to highlight the key figures of local business parks: in this example the map displays land use and the distribution of jobs. To illustrate the Lizmap editing capabilities, we have created a fauna observation map, which you can test in the demo. The Lizmap form is based on the QGIS form field edit widgets, and Lizmap is able to reuse the QGIS drag-and-drop form design. The geometry can be drawn by clicking on the map or with the GPS, and Lizmap provides access control for the editing tools. Here is the map: you can click on the edit button and then on the add button to add a fauna observation. Lizmap displays a form based on the QGIS drag-and-drop form, with groups like general information, illustration, and other fields. You can see the date-time widget, some value selectors, and the file widget for pictures — if you are on Android or iOS, the browser lets you take a photo — and you have tools at the top for managing the point's vertex coordinates and the GPS. Once you have drawn the geometry and saved the form, you can click on the data and get the popup. At the top of the popup we add a toolbar to update or delete the object. When you update a feature, you can keep or change the pictures and update all the values. Within the Lizmap editing capabilities, it is also possible to use the QGIS expressions defined in the form. These expressions are useful to define default values, constraints on fields, or whether a group of fields in a drag-and-drop form is displayed — Lizmap reuses all of this. In this example, just for testing, if you type 'other' in the name field, the other group of fields is displayed, and if you type 'photo' in the description, the photo tab is displayed. And that is not all: for editing, Lizmap has extra capabilities. The user can control the GPS precision when drawing geometries — for points, you can choose the number of positions averaged to define the point itself; for lines or polygons, you can define the step between two vertices as a precision, in meters or in time. You can activate and configure snapping, and we provide a geometry toolbar to update geometries, to move or rotate them. But Lizmap is not only for maps: it can also be used to build graphics. Here is an example of flood data exploitation in Narbonne, near the Mediterranean. All the graphics you see at the right of the map, and the data, are configured in Lizmap — no JavaScript needed. Here, the text and figures are based on HTML and QGIS expressions. That last example was about flood data and flood risks. In Narbonne, they also used Lizmap to help users find local shops and local products. It is based on the form filter feature of Lizmap: here you can find a place to buy some organic wine or cheese. This application has been enhanced with JavaScript to provide a more user-friendly interface. The same form filter feature has been used here to help find places to get digital help: you can find a place with an Internet connection to do your tax declaration, or to print an administrative document that needs to be completed.
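Lizmap reuses QGIS expressions defined in the project, for example to show or hide a group of fields or to set default values. As a small, hedged illustration of what such an expression looks like and how it evaluates against a feature — using standard PyQGIS classes, with field names made up for the example — consider:

```python
from qgis.core import (QgsExpression, QgsExpressionContext, QgsExpressionContextUtils,
                       QgsFeature, QgsField, QgsFields)
from qgis.PyQt.QtCore import QVariant

# A feature with a made-up attribute, as it could come from an editing form.
fields = QgsFields()
fields.append(QgsField("description", QVariant.String))
feature = QgsFeature(fields)
feature.setAttribute("description", "photo")

# An expression of the kind used to drive conditional visibility of a field group.
expression = QgsExpression('"description" = \'photo\'')

context = QgsExpressionContext()
context.appendScope(QgsExpressionContextUtils.globalScope())
context.setFeature(feature)

print(expression.evaluate(context))  # True -> the "photo" group/tab would be shown
```

The same expression engine is what both QGIS Desktop forms and Lizmap evaluate, which is why a form configured once on the desktop behaves the same way on the web.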
We maintain the QGIS plugin for the French cadastre, and we have built a Lizmap module to get the same features as the QGIS plugin in Lizmap. This module helps to search for a parcel based on its address or owner, and with it the user can obtain the administrative documents for the parcel. We have also built a module to help municipalities manage addresses: it provides an export tool to the French address standard, to help build the national address database. In this example we are in Deauville, where the addresses have not yet been validated by the municipality. Lizmap is a growing community — the latest contributors are Japanese. Lizmap is used by private companies, public organizations and research centers. The main contributions are localization and documentation; some contributors share JavaScript scripts to extend the Lizmap capabilities, and others do bug triaging. Here are the Lizmap Transifex localization pages: you can see that French, Galician and Portuguese are fully translated. Lizmap is free to download and can be freely used. Here are some examples that we do not host ourselves: you can see GeoRice in South-East Asia, SAERI in the South Atlantic, and Portugal in Europe; a map made by OPENGIS.ch showing live QField user maps; and a map built by SAERI for a Falkland Islands natural assessment, with a lot of information. As I already mentioned, Lizmap is extensible. You can build server-side modules in PHP, you can add capabilities with JavaScript scripts — like Google Street View — by embedding JavaScript, and the theme can be updated with just CSS. One module is MapBuilder: you can build your own map based on the layers published in Lizmap, and you can add content to the map with iframes, video or audio. Here are the GitHub pages for the Lizmap scripts: you can see GPX support, keyboard shortcuts, Google Street View, and the MapBuilder module. The current Lizmap version is 3.4; the next release will be 3.5, and we hope to publish it in less than one month. In this new version, most of the work has been done under the hood, with refactoring, tests and consolidation of the source code, to provide a really solid piece of software. We have added only three new features: a reverse-geometry button for linestrings and polygons; dynamic generation of the popup from the QGIS form — before, you had to use the QGIS tooltip to show something based on the QGIS form, and now you can activate an automatic, dynamic generation of this popup; and access control based on polygons — you already had access control based on attributes, and with this new release you can associate polygons with user groups to filter data or control the editing capabilities. To move towards Lizmap version 4, we plan to write a roadmap. Lizmap 4 will be based on Jelix 1.7 — Jelix is the PHP framework used to build Lizmap. We will replace all the OpenLayers 2 parts with OpenLayers 6 in Lizmap, we will use ECMAScript 6, and we will use standard web components. We want to reduce the dependencies in Lizmap and not pull in big frameworks like React or Vue.js. To do so, we plan to update Lizmap piece by piece. For example, we will start by using OpenLayers 6 for the background layers, displaying OpenStreetMap and the other background layers in the coordinate reference system of the map as defined in QGIS, so the web map will be exactly the same as the one in QGIS Desktop. We will use Jelix 1.7 to provide meaningful URLs. Then we will move the tools from OpenLayers 2 to OpenLayers 6, like editing and printing.
We will probably make some other changes along the way, and we will be at Lizmap 4 when we can completely swap out the OpenLayers 2 dependency. Here is a link to the demo website, the documentation, and the Docker Compose setup to use and test it. Thanks for your attention — it's time for questions. Very nice job, René-Luc, and sorry about the video mishap, but you did a super job. Let me check if there are any questions. There are no questions yet. If you have a question for René-Luc, please put it into the questions box in the main user interface. We will wait one or two minutes and then pass over to our next speaker.
|
In 2021, Lizmap is 10 years old. Lizmap is an Open Source application to create web map applications, based on a QGIS project. It's composed of a QGIS plugin and a Web Client. Lizmap has been designed to take advantage of QGIS Server to facilitate the creation of Web maps. We will present the state of the project, the last changes using on a QGIS Server plugin and futur perspectives. At 3Liz, we are QGIS and PostGIS lovers. We are contributors of QGIS, mainly QGIS Server. We promote Open Source GIS solutions, mainly OSGeo, to our customers. In 2011, we decided to develop, as an Open Source Software, the Lizmap solution. The design of Lizmap aimed to publish Web Mapping applications with QGIS. The objective of Lizmap is to design and configure web mapping applications with QGIS desktop and only with it. No coding skills are needed. Lizmap takes full advantages of QGIS Server: symbology, labels, table relationships, print layouts, forms, etc. Lizmap consists of 2 tools: * Lizmap plugin allows to configure the options and tools of the web mapping application based on the QGIS project * Lizmap Web Client, running on a server with QGIS Server, delivers the user interface of the web mapping application from the QGIS project and the Lizmap configuration. Lizmap offers the possibility to publish simple web mapping applications for data consultation, but also to build advanced applications allowing map printing, data editing, search, dataviz, etc. Lizmap is also extensible. It is possible to add your own JavaScript and to use Lizmap modules (to add some extra features on top of Lizmap). Finally, Lizmap benefits from a growing community (localizations, documentation, JavaScripts, bug triaging, etc) and it is used all over the world (Indian ocean environment survey, Georice in South-East Asia, SAERI in South Atlantic, etc). Authors and Affiliations – René-Luc DHONT (3liz), Michaël DOUCHIN (3liz) Track – Software Topic – Software status / state of the art Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57277 (DOI)
|
So, hello everyone, we are back with Willi Gautier — I think that is the right pronunciation. The next talk is maritime big data analysis with ARLAS, by Willi Gautier. Willi is a data scientist from the data science department of Gisaïa — he can say more about it than I can. So, Willi, the stage is yours; we have about 20 minutes, okay? Okay. Are you sharing the screen? Okay. Thank you, and hello everyone. I would like to thank the FOSS4G organizers and all the speakers for these nice presentations. I'm Willi Gautier, data scientist at Gisaïa. We are a small company specialized in geospatial, based in Toulouse in the South of France, and we develop ARLAS, an open source solution for big geospatial data analysis. As you know, the maritime industry has become a major part of globalization, and about 90% of the global transportation of goods is carried out by more than 80,000 merchant vessels. Political and economic actors are therefore facing challenges regarding shipping and passenger transport, and today we are going to see together how ARLAS can help with maritime big data analysis by exploiting the vessels' Automatic Identification System. The Automatic Identification System, AIS, is a safety system which provides information about a vessel's state, but we will see that this data contains some imperfections. Then we will see how we face these challenges to clean the data and to infer trajectories with algorithms that we provide as an open source library, ARLAS Proc. Finally, we will see how complex vessel behaviors can be detected with machine learning models, and how ARLAS can help in the model creation process. First, the Automatic Identification System. It is a safety system that records and broadcasts all vessel locations. These messages are received by other vessels and help them avoid collisions, particularly when visibility is low, and they also contain information about the vessel's dynamics and characteristics. All vessels above a certain size have to be equipped, and this results in more than half a billion AIS messages emitted worldwide every day. These messages are collected by ground antennas and satellites and can be a great source of information about maritime traffic; however, it requires adapted tools to handle such an amount of data. The Danish Maritime Authority provides AIS data received around the Danish coast, which we will use to illustrate this maritime data analysis. We want to extract maritime intelligence from this cloud of points. However, as often with real-world data, AIS records contain imperfections. On this topic, I really recommend a nice paper by Anita Graser that explains the different kinds of problems found in movement data and provides a methodology to identify them. Today we will focus on some of them: missing information, like gaps or missing fields; imprecision and accuracy problems, such as outliers in the locations; and consistency problems, such as the sampling heterogeneity of the records. Finally, raw AIS data doesn't give any information about a vessel's real origin, nor a reliable destination, nor its travel time. The volume of the data requires adapted tools to handle it, and to see more clearly among these numerous vessel locations we use ARLAS Exploration, an open source geo-analytics software that allows us to visualize and explore the data.
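The data-quality issues listed here (gaps, heterogeneous sampling) are easy to quantify with ordinary tooling before any processing. A minimal pandas sketch, assuming a CSV export of per-vessel AIS messages with identifier, timestamp and position columns — the exact column names in the real Danish Maritime Authority files may differ:

```python
import pandas as pd

# Assumed column names -- rename to match the actual AIS export you use.
ais = pd.read_csv("aisdk_sample.csv", parse_dates=["timestamp"])
ais = ais.sort_values(["mmsi", "timestamp"])

# Time between consecutive messages of the same vessel, in seconds.
ais["dt_s"] = ais.groupby("mmsi")["timestamp"].diff().dt.total_seconds()

print(ais["dt_s"].describe())  # sampling heterogeneity at a glance

# Gaps longer than one hour, counted per vessel.
long_gaps = (ais["dt_s"] > 3600).groupby(ais["mmsi"]).sum()
print(long_gaps.sort_values(ascending=False).head())
```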
The result is an interactive map where we can navigate within the data and explore its different components. We can see the spatial distribution, the distribution of other fields such as the ship type, and we can apply filters and selections to explore the data. It helps to better understand the AIS data and its challenges. Here we can observe the spatial distribution and distinguish the main vessel fluxes; if we zoom in a bit, we can observe the actual boat locations, here colored by vessel type, and we can quickly distinguish different behaviors. But there are some imperfections within the data, such as missing or erroneous information. By exploring the AIS records, we observe that some attributes, like the ship type or name, are not available in all messages, which can result in data gaps if we filter on these values. Here, for example, the same vessel sometimes has its type filled in as tanker and sometimes as undefined; if we select all the tankers, we will miss many of its records. Moreover, the time between consecutive records is not always regular and there are gaps: some vessels emit every 30 or 10 seconds, but it can be really different, and this can result in a misinterpretation of the vessel densities. Sometimes the data is even oversampled for an analysis that does not need a location every 10 seconds. We also sometimes observe large jumps in the GPS locations that should not be considered possible vessel motion. When we look at the spatial distribution of the vessels around Denmark, we can observe out-of-range locations, which can sometimes even be inland; and if we look at one particular vessel, its records are quite continuous, but one of them is located far away from the others and looks like a wrong location. In order to solve these problems and to extract real maritime intelligence from the raw data points, we process the records to clean the observations and to infer trajectories. We apply algorithms that we provide as an open source library, ARLAS Proc. The goal of these algorithms is to transform the punctual vessel records into valuable trajectories with their origin and destination, their stops, and also their travel time, distance and other indicators. The ARLAS Proc/ML library contains different modular algorithms, such as outlier removal or resampling, that we can combine to create a processing pipeline. This framework is developed in Scala and is based on the Apache Spark framework, so it is fully distributed. Each block that you can see transforms an input into an output, and we call it a transformer. These transformers are applied here to vessels, but they can be applied to any moving object data. You simply have to import the dependency and use the different functions; you can find the project on GitHub. For example, in order to deal with outliers within the geolocations, we apply an outlier detection filter inspired by the Hampel filter: it computes the median of the GPS-derived speed over sliding windows and identifies unrealistic moves. These wrong locations are then corrected — for example, the suspicious location we saw before is filtered out and removed from the vessel track. We have also seen that the time between two observations of the same object can be very heterogeneous, so we introduced the notion of a fragment to represent the information per time interval for a given object. A fragment corresponds to the vessel behavior between two observations: it has a geometry representing the travel of the vessel, its dynamics such as speed, and even the vessel information.
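The outlier step described above computes a median of the GPS-derived speed over a sliding window and flags unrealistic moves — the Hampel-filter idea. Below is a small pandas sketch of that idea only (not the ARLAS Proc implementation, which is written in Scala on Spark and fully distributed); the window size, threshold and toy coordinates are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def hampel_flags(speed: pd.Series, window: int = 7, n_sigmas: float = 3.0) -> pd.Series:
    """Flag values that deviate strongly from the local rolling median (Hampel-style)."""
    med = speed.rolling(window, center=True, min_periods=1).median()
    mad = (speed - med).abs().rolling(window, center=True, min_periods=1).median()
    return (speed - med).abs() > n_sigmas * 1.4826 * mad  # 1.4826 scales MAD to a std dev

# One vessel's time-sorted track; index 3 is a spurious GPS fix far from the rest.
track = pd.DataFrame({
    "timestamp": pd.date_range("2021-01-01", periods=8, freq="min"),
    "lat": [55.000, 55.001, 55.002, 56.500, 55.004, 55.005, 55.006, 55.007],
    "lon": [10.000, 10.001, 10.002, 10.003, 10.004, 10.005, 10.006, 10.007],
})
# Rough speed proxy in degrees per second (real code would work in meters).
step = np.hypot(track["lat"].diff(), track["lon"].diff())
speed = (step / track["timestamp"].diff().dt.total_seconds()).fillna(0.0)

# Both legs of the jump (into and out of the bad fix) get flagged by a speed-based test.
track["suspicious"] = hampel_flags(speed)
print(track[track["suspicious"]])
```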
The basic fragments are straight lines between two raw observations, but we can also concatenate them to create a single, longer fragment. If we look at the distribution of the time between observations, we use this fragment concatenation mechanism to harmonize the fragments and regroup them into blocks of three minutes. Now, instead of having very irregular observations, we create new fragments of three minutes. This aggregation can be done without losing spatial information: the concatenated geometry contains all the small line strings, but we can also simplify the geometries. Here you can see, as red dots, the edges of the three-minute fragments; the white dots are the raw location records, and the geometry stored for the trajectory fits the real geometry while being much smaller. This sampling harmonization compresses the information a lot and also makes the data distribution less sensitive to sampling variations between different objects. If we want to identify trajectories, the first step is to detect whether the boat is still or moving. To do so, we use a Hidden Markov Model based on the vessel speed to identify the vessel's mobility state. HMM models are robust to noise in the vessel speed, which avoids switching between still and moving because of wrong measurements. Here we can see that the model is quite robust: even if there are some wrong measurements, they are not detected as a stop, for example. Thanks to this moving state, we can group all the fragments corresponding to a stop together — here in white we see a tanker stopped at sea before a trip that starts afterwards. We then define a course as a travel between two significant stops, and we can use the same process to group all the fragments of a course into a single course fragment. We can then visualize and distinguish the separate travels of the vessel and explore their distance, duration or mean speed. We also know their origin and destination, here represented by green and red points. Once we have the origin and destination, we can enrich the courses with the names of these locations: we use a service to get the country and the port name, which allows us to select the fluxes that matter. If we select the port of Kiel in Germany, for example, we can instantaneously select all the travels that leave the port and analyze their trajectories and destinations. Here we have all these travels aggregated, but we can also visualize the real tracks of the selected vessels. To recap, ARLAS Proc/ML is an open source framework, very complementary to ARLAS Exploration, and this processing of the data allows a much more complete understanding of the vessel fluxes. We have seen that activities can be observed directly in ARLAS by looking at the tracks, but vessel activities can also be detected automatically from vessel movements thanks to machine learning. Machine learning models can be used to recognize patterns within moving object data and to recognize vessel behaviors. Supervised classification techniques can be used to train a model on identified patterns, and the model can then recognize these patterns on new data. But it requires a training set containing data annotated with the patterns, and the quality of the model predictions depends on the quantity and diversity of this annotated data. This training set creation can be really time consuming, and it requires tools to visualize the data and to recognize the patterns.
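The still/moving segmentation described above uses a Hidden Markov Model on the vessel speed, which is more robust to noisy measurements than a plain speed threshold. A rough sketch of that idea with `hmmlearn` — a generic two-state Gaussian HMM on synthetic speeds, not the actual ARLAS Proc model or its parameters:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Synthetic speeds (knots) for one vessel, in time order: a stop, a trip, another stop.
rng = np.random.default_rng(0)
speeds = np.concatenate([
    np.abs(rng.normal(0.1, 0.2, 60)),   # moored / drifting
    np.abs(rng.normal(9.0, 1.0, 120)),  # under way
    np.abs(rng.normal(0.1, 0.2, 40)),   # stopped again
]).reshape(-1, 1)

# Two hidden states ("still" and "moving"); the HMM smooths isolated bad measurements.
model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50, random_state=0)
model.fit(speeds)
states = model.predict(speeds)

# Label the state with the lower mean speed as "still".
still_state = int(np.argmin(model.means_.ravel()))
is_moving = states != still_state
print(f"moving fraction: {is_moving.mean():.2f}")
```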
We'll see together how ARLAS can help with this training set creation and how it can be integrated into the machine learning model creation process. Let's consider one pattern: the fishing activity. There are actually different kinds of fishing vessels with their own strategies — trawlers, seiners, long-liners, drifters — and knowing where and when these vessels are actually fishing can help to better understand activities and to regulate overfishing, for example. The fishing activity of a vessel can be derived from its movement, and when we select the fishing vessel tracks we actually observe the different strategies: we observe zigzags with a homogeneous speed. With ARLAS we can quickly visualize these tracks, and — let's assume we are a fishing expert — we can then select the parts of the trajectory where the vessel is actually fishing and the parts where it is only moving. When the pattern is selected, a tagging system in ARLAS allows us to annotate the selected data as fishing or non-fishing, for example. And when our training set is satisfying, we can download the enriched data as a CSV file, directly in the application or via an API. Now that the training set is available as a CSV file, we can use any classical tool to create the machine learning models: we can use scikit-learn or TensorFlow, and different languages — Python, Scala, R. We can try models and evaluate their performance according to several metrics, such as accuracy or recall, and the classification results can be compared according to these metrics in tools such as MLflow. Here, each line represents an experiment, and these tools allow us to link metrics with input parameters. However, it is also important to visualize the classification results on the real vessel tracks, to understand where the model performs well and where it lacks accuracy. The ARLAS Tagger API allows us to tag the results and to explore them directly in ARLAS: we can then directly visualize the prediction results over the real tracks and quickly see where the model performs well. And we can finally analyze the different fishing areas and periods, which can help decision makers. Here we have seen a maritime application, but all these tools can be adapted to any kind of moving object data. To recap: we have seen that ARLAS Exploration is an open source geo-analytics tool that allows us to explore a massive AIS data set in an interactive map application. We have seen that this exploration can be enriched by ARLAS Proc/ML, an open source framework based on Scala and Spark, to process trajectories and extract maritime intelligence. And finally, we can also use the ARLAS Exploration tagging system to create training sets and facilitate the creation of machine learning models, to dive even deeper into the detection of maritime activities. I would like to thank you very much for your attention; feel free to ask any questions, and don't hesitate to follow us or contact us for more information. Thank you. Thank you, Willi, great presentation. We have some questions here that I'll put on the screen. The first is: do you use Sedona in your ARLAS or Scala/Spark implementation? Actually, I'm not sure I understand the question, so I could not answer; I can ask one of our data engineers whether he knows this. You showed more machine learning tools at the end of the presentation, so I think people can ask you by email about the use of Sedona.
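Once the tagged fragments described in the talk are exported as CSV, any standard ML stack can be used, and runs can be tracked in MLflow as mentioned. A hedged sketch with scikit-learn and MLflow — the file name and column names are assumptions about what such an export might contain, not the real ARLAS output schema:

```python
import mlflow
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# Assumed columns: per-fragment features plus the manual "fishing" tag from ARLAS.
data = pd.read_csv("tagged_fragments.csv")
features = ["mean_speed", "speed_std", "heading_change", "duration_s"]
X_train, X_test, y_train, y_test = train_test_split(
    data[features], data["fishing"], test_size=0.25, random_state=0, stratify=data["fishing"]
)

with mlflow.start_run():
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)

    # Log parameters and the metrics mentioned in the talk (accuracy, recall).
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("accuracy", accuracy_score(y_test, pred))
    mlflow.log_metric("recall", recall_score(y_test, pred))
```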
So the next question is: can the tool be used to help detect the origin of oil leaks in the ocean, like what happened in Brazil in 2019? Actually, I think it could be possible. It can be applied to any moving object data, so if you have the evolution of the oil leak, maybe it could be visualized in ARLAS, and we could also process it as moving object data if the oil leak is treated as an object. We would not extract it directly from satellite images ourselves, but if you do that work, it could be followed with such tools, yes. Okay, some questions here: people ask how they can run this stack and explore the code. You can find it on GitHub, in the ARLAS project, and how to deploy the whole stack is explained there — it's only open source tools. The ARLAS Proc/ML project is also now available as open source, and you can find it on GitHub too; I can share the link if you want, but you should find it quite easily. Okay, the next question is: does one vessel have only one pair of origin and destination? Actually, no — it depends on the period you are studying. Over a long period, if the vessel moves between ports, we will identify all the different travels, so we will have different pairs of origins and destinations. We really separate them, and we are able to identify the travel time, the average speed and different indicators for each. Okay. The next question is divided in two because it's too long, but I think I can show the bottom part: is the heading and course data, included in the AIS, always available and reliable — for example the velocity vector, speed and heading? Yeah, it's part of the AIS data: there is the speed over ground of the vessel, there is the heading, and also the course over ground. These data are quite reliable, even if there can be some problems, and we can also recompute them from the vessel locations — recompute the speed and compare it, to check whether the sensors are really reliable. Okay, I see in the chat that some people are sharing the GitHub links and the tutorials, nice. So we have new questions: can we adapt the pipelines to other moving objects? Yes, absolutely — it is made for this. Here we just used AIS data as an example to illustrate the process, but actually any moving object is just an object ID, a location and a timestamp, and with only this information we can process it: we can recompute the dynamics, apply filtering, and process it. And what's nice is that, as it's based on the Spark framework, we can deploy it on a cluster to apply it to big amounts of data. Okay, we have four minutes and a few questions left. So: how representative is your data — what percentage of all the data is it? We can display all the data: we downloaded the AIS data from the Danish Maritime Authority — there is one file for each day — and we processed it, and then we can display the entire processed data set. I'm not sure I fully understand the question. Did you look at MobilityDB for possible options? I didn't use it, but maybe there are similar features in MobilityDB as well. Okay. How do you handle misconfigured timestamps in the AIS data? There are different... I'm not sure what your point is about misconfigured data.
Actually, most AIS data timestamps follow a pattern; if we find different timestamp patterns within the data, we can adapt to them. Also, when there are gaps in the data, we handle and identify them. And sometimes there is a kind of permutation between timestamps — a location that was emitted maybe two days earlier suddenly appears; it is quite far off, so we identify it as an outlier and clean it. Okay. And the last question, to finish the presentation: what is the meaning of ARLAS? ARLAS is actually a mountain in the Pyrenees, in the South of France, close to the Spanish border. Some of the founders of our company come from this area, and that is why they chose the name. Okay, thank you, Willi — the time is up, but we could cover the whole presentation. Thanks for the presentation and thanks for your questions. If you have any questions for Willi, send him an email — his address is on the screen. So thanks, and we'll see you soon at the FOSS4G Buenos Aires social gathering. Thank you. Thanks to you all. I will see you in the next one.
|
The maritime industry has become a major catalyst of globalisation. Political and economic actors meet various challenges regarding cargo shipping, fishing, and passenger transport. The Automatic Identification System (AIS) records and broadcasts the location of numerous vessels which supplies huge amounts of information that can be used to analyse fluxes and their behavior. However, the exploitation of these numerous messages requires tools adapted to Big Data. Acknowledgment of origin, destination, travel duration, and distance of each vessel can help transporters to manage their fleet and ports to analyse fluxes and track specific containers based on their previous locations. Thanks to the historical AIS messages provided by the Danish Maritime Authority and ARLAS PROC/ML, Gisaïa’s open-source and scalable processing platform based on Apache SPARK, we are able to apply our pipeline of processes and extract this information from the millions of AIS messages. We use a Hidden Markov Model (HMM) to identify when a vessel is still or moving and we create “courses”, embodying the travel of the vessel. Then we can derive the travel indicators. Authors and Affiliations – GAUTIER, Willi (1) Data science Department, Gisaïa, France, GAUDAN, Sylvain (2) Chief Technical Officer, Gisaïa, France FALQUIER, Sébastien (3), Data science Department, Gisaïa, France Track – Academic Topic – Academic Level – 2 - Basic. General basic knowledge is required. Language of the Presentation – English
|
10.5446/57278 (DOI)
|
Dr. Shay Strong is the Vice President of Analytics at ICEYE. She holds a PhD in astronomy from UT Austin. Before ICEYE, she was the Director of Data Science and Machine Learning at EagleView, orchestrating a distributed international team of data scientists, engineers and software developers to scalably extract features from geospatial aerial, satellite and drone imagery using machine learning and computer vision. So really excited to have you on today. Take it away. Thank you so much, Rob. It's great to be here and virtually talk to all of you today. I'm really excited to represent and share some of the work that we're doing at ICEYE. I recently joined ICEYE; I'm coming up on my first year anniversary, actually. The focus at ICEYE is predominantly natural catastrophe solution development using SAR imagery, but I'll dig into that a little more in a moment. My role at ICEYE is to build a multidisciplinary team that stretches from deep synthetic aperture radar experts all the way over to machine learning engineers and data scientists who can largely help with feature analysis and data extraction. So just diving in with a couple more details. If you're not familiar with ICEYE, that's totally OK. It's a Finnish-based company, and it was actually the first company to miniaturize a small SAR satellite. SAR, of course, is synthetic aperture radar; I know many of you are experts, but just in case you're not aware of it, it's a microwave-wavelength radar operating in the X-band, which is about 3 centimeters in wavelength, with a single polarization. As of 2021 we have 14 satellites in the constellation; earlier this year we reached a total of 14 launched. And I mentioned that these are smallsats: they are less than 100 kilograms, so they're incredibly agile in terms of quickly changing the perspective of where we're observing and collecting data. The other advantage is that, because they are relatively small, the cost to launch them is relatively low. So there's an ability to create a constellation of persistent SAR satellites that can help monitor the Earth for various change, both man-made and natural, at comparatively low cost, and that opens up a lot of room to explore the solution space and identify areas where this technology can be useful. Another advantage of SAR is that fundamentally it's an active sensing instrument, so it doesn't rely on solar illumination of the Earth to see things. We're not limited by day or night, nor are we limited by weather: X-band radar can penetrate through clouds and smoke. You can start to imagine the advantage of this specifically for natural catastrophe events, which often involve weather or, in the case of wildfires, haze and smoke. And the really cool thing about SAR is that if you can do coherent ground track repeats, imaging the Earth at the exact same orientation every single day, you can evaluate millimeter-level change in the Earth's subsidence, or in general the addition or subtraction of things on the Earth at that millimeter level. And that's fundamentally because you don't only have the intensity, the amplitude of the signal, you also have the phase.
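To make the millimeter-level claim concrete, here is the standard repeat-pass interferometry arithmetic that links a phase difference to a line-of-sight displacement. The wavelength is a generic X-band number, not an ICEYE system parameter.

```python
# Sketch: for repeat-pass interferometry, a line-of-sight displacement d produces an
# interferometric phase difference dphi = 4*pi*d / wavelength (two-way path), so
# d = wavelength * dphi / (4*pi). Generic X-band values, illustrative only.
import math

WAVELENGTH_M = 0.031  # ~3.1 cm, typical X-band

def los_displacement_m(delta_phase_rad: float) -> float:
    """Line-of-sight displacement implied by a phase difference (radians)."""
    return WAVELENGTH_M * delta_phase_rad / (4 * math.pi)

# One full fringe (2*pi) corresponds to about half a wavelength of motion:
print(f"{los_displacement_m(2 * math.pi) * 1000:.1f} mm per fringe")    # ~15.5 mm
print(f"{los_displacement_m(0.5) * 1000:.2f} mm for 0.5 rad of phase")  # ~1.2 mm
```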
And so from day to day, if you were imaging the same place on Earth at the same time with nothing else changing, that phase return, the wavelength that comes back, would always be in phase. As soon as something changes in elevation or is removed entirely, that phase changes, and of course that's where the change detection part comes in. And I'm really keen with my team to work on natural catastrophe applications. We think of natural catastrophes as adverse events on the Earth that are driven by natural processes, but in the scale of our climate change crisis these natural processes are absolutely exacerbated by human activity. In the background here I have an image; it's actually a multi-date stacked mosaic from ICEYE. The different colors represent different dates, predominantly red, blue and yellow, and any combination of those then represents the different changes. If you see a distinct color, like blue or red, that's information that only appeared on a specific day; if you see white, that's essentially everything summed together and there's nothing of interest there. This was a flood in Japan not too long ago, maybe about half a year ago. One of the cool things to see, if you look down here, is that these blue flooded regions were inundated by heavy typhoon-related rains, the river breaching into these agricultural fields and populated areas. And on the red day you can see regions where the water actually flowed over and breached the banks of the river. So we can start getting down to really understanding the cause and the time-series behavior of the activity that's happening, and we're not hampered by clouds or time of day. And then, maybe more on the man-made side, this is a very similar example where again the different colors represent different dates. This is around the Rotterdam region, so you can start to see some interesting port activity. There are various ships that come and go, the ships that showed up on the yellow day versus the green day versus the blue day, and there's a bunch of activity going on in terms of these shipping containers up here in the north. And down here, which is pretty cool, these are oil storage tanks with floating tops that prevent evaporation, and you can see how the lids on those floating tops show up at different depths over the course of the observations, indicating usage of the material inside. So there are a lot of really interesting human activity aspects that, when coupled with interest in natural catastrophe or disaster risk management, become really valuable pieces of information. I already mentioned a bit of this, and I suspect it's probably too elementary for a lot of this audience, but fundamentally SAR is really lovely in the sense that it liberates you from needing the sun to observe the Earth. But it comes with a whole bunch of other caveats: it's a challenge to interpret, and the tooling can be somewhat limiting at times and is often locked into academic or governmental tooling and capabilities.
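The "different color equals different date" visual described above is, in essence, an RGB composite of co-registered amplitude images from three acquisition dates. A minimal sketch of that idea follows; the array names and the percentile stretch are illustrative assumptions, not ICEYE's rendering pipeline.

```python
# Sketch: stack three co-registered amplitude images into an RGB cube. Where
# backscatter is stable the channels are equal and the pixel looks grey/white;
# where something appears on only one date, that date's colour dominates.
import numpy as np

def temporal_rgb(amp_date1: np.ndarray, amp_date2: np.ndarray, amp_date3: np.ndarray) -> np.ndarray:
    """Return an 8-bit RGB composite from three same-shaped amplitude images."""
    def stretch(a: np.ndarray) -> np.ndarray:
        lo, hi = np.percentile(a, (2, 98))            # simple contrast stretch
        return np.clip((a - lo) / (hi - lo + 1e-9), 0, 1)
    rgb = np.dstack([stretch(amp_date1), stretch(amp_date2), stretch(amp_date3)])
    return (rgb * 255).astype(np.uint8)
```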
But there's also this cool advantage with SAR in particular, and I'll show how we're thinking about this problem: how can we respond to natural catastrophes in a way that is beneficial? We don't just want to gather data and sell pixels; we're really interested in identifying the solutions and the opportunities to help governments and agencies respond to such disasters and evaluate the impact on the population. And there are essentially three different parts. There's an economic part of a natural catastrophe: how fast can you get the information, can you cover the region? Today, what we often see when we work with insurance agencies or government agencies, like FEMA in the US for instance, is that people are deployed to an event on foot after the event, and then there's a lot of back-projection of what exactly happened and where the biggest impact was. That information comes late or is prone to uncertainty or bias. Then there's the societal impact. One of the things that I love about geospatial analytics, period, is that fundamentally there's this very visceral, tangible, quantifiable verifiability: you can say something is true based on a location, you have a latitude and longitude of a point, you've said something about it, and in the end somebody can go to that location and validate that point for themselves. Unlike building models or analytical products in something like the financial sector, which is often a bit black-boxed to the majority of the world, there's a lot of opportunity to gain trust by providing information that is not hidden or inadvertently prone to bias or uncertainty. And then on the environmental side, SAR is great because we're no longer limited by cloudy days; we have the ability to penetrate through the atmosphere and image during the night, for instance. So we can be very responsive, and when we come back to natural catastrophe, we don't have to wait for the sun to come out to assess the impact of a situation. And looking beyond a specific natural catastrophe event, at the culmination of those events, we can start examining the longer-term impact relative to climate change, and how we can improve modeling and the general responsibility associated with that. I just wanted to show you some more dynamic pictures, which might be useful or might drive you crazy shortly with their repetitiveness. There's this opportunity to differentiate and understand natural change versus man-made forcing. On the far side of the screen we have the Fagradalsfjall volcano in Iceland that erupted earlier this year, and SAR was a beautiful example of being able to evaluate, every single day at very high resolution, what is going on in this environment. From a public safety and public impact perspective, knowing where the lava flows are occurring or where some of these volcanic fissures were opening is incredibly important.
And then in the middle, of course, I mentioned earlier the oil tanks, and in this animation you can actually see the activity. So in terms of the human-driven aspect of our changing environment and the repercussions thereof, that is something we can also closely monitor from this type of platform. Another area where we're focused on the analytics side is assessing deforestation, in particular Amazonian deforestation, but also expanding out to Nordic and European regions. There is, of course, sanctioned removal of trees, but there's also unsanctioned removal, and understanding the rate, and having the resolution to assess the critical area of the deforested regions, is really imperative. This latter component is one where we're very much focused, and we've had a lot of success leveraging deep learning, recurrent-neural-network-type applications, exploiting the time-series domain of the SAR in conjunction with the deep stack of high-resolution information. Another great example that is closely linked to climate change is glacier monitoring. This is another stack of imagery that we consolidated earlier this year over the Muldrow Glacier in Alaska. This glacier got a lot of news earlier in the year because of its extreme speed: it's moving 100 times faster than your average glacier. That is relatively rare, and from a geological perspective there's a lot of uncertainty about the driving mechanism, but there are also the repercussions: what does this do to the environment, are there people downstream who might ultimately be impacted? So there are a lot of interesting geophysical and geological aspects, and it also comes back to how this impacts people living close by. But these are great pretty pictures, and the dynamic time-series domain is a really cool feature of SAR, but fundamentally I've just shown you pixels. So of course, what can we do with that? And the other question is not only what can we do with it, but how easy is it to do something? I think that's the tricky part with SAR. All of these great features, the fact that we can persistently collect coherent information on a daily basis at very high resolution and look at millimeter-level elevation changes, sound really great, but the reality is that it's a very heavy stack to process. Not only do we have the SAR processing and the SAR image reconstruction (because keep in mind SAR is radar, it's not an image per se, so you actually have to take the radar and the interaction of the satellite as it's moving across the target and use that motion to reconstruct an image), but you're then left with a very heavy dataset with a tremendous dynamic range, and you're trying to preserve both the complex information and the amplitude information of the SAR signal. And then there's the added complication of co-registration. Even day in and day out, satellites can vary in terms of exactly where they are in their orbit. It's small, but these things aren't necessarily as fixed as you would think they are.
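A minimal sketch of the recurrent-network-over-time-series idea mentioned a moment ago for deforestation monitoring. This is a generic illustration of the model shape, not ICEYE's model; the choice of two input features per date is an assumption.

```python
# Sketch: classify each pixel's SAR time series (e.g. backscatter + coherence per date)
# as 'forest' vs 'cleared' with a small GRU. Illustrative architecture only.
import torch
import torch.nn as nn

class PixelTimeSeriesClassifier(nn.Module):
    """Recurrent classifier over per-pixel acquisition time series."""

    def __init__(self, n_features: int = 2, hidden: int = 32, n_classes: int = 2):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features)
        _, h = self.rnn(x)              # h: (1, batch, hidden)
        return self.head(h.squeeze(0))  # class logits per pixel

model = PixelTimeSeriesClassifier()
logits = model(torch.randn(8, 30, 2))   # 8 pixels, 30 acquisition dates, 2 features
print(logits.shape)                     # torch.Size([8, 2])
```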
And so even how you geolocate the information from the SAR sensor to the ground can vary dramatically from day to day. Many algorithms and a lot of development go into that, but it's again another heavy process where you're taking many pixels and trying to stack them in a very consistent way. And then coherence, like I mentioned previously, is essentially being able to maintain that phase information so that you can do that differential elevation analysis. A lot of our goals stem from applying machine learning and deep learning algorithms to some of these datasets, and I think this is inherently one of the biggest challenges with SAR: it's not a new technology by any means, it has been around since the early 1950s, but it hasn't been open. There's still a developing community around creating accessible tools, and if you look at computer vision, machine learning and deep learning applications, there are a lot of other proxy industries where people are more comfortable taking red, green, blue images than taking radar images and doing something with them. So there's a challenge both to create new capabilities and to bootstrap some of the more traditional computer vision applications onto these very heavy stacks with dramatically large dynamic ranges and complex signals. And every single SAR analysis is a three-dimensional problem. It's not just latitude-longitude: to ultimately get that pixel onto a point on the Earth, you have to convolve it with the underlying digital terrain model, and that can come with a slew of uncertainty and a slew of needs, but it creates additional complexity when dealing with this kind of imagery. Preserving the time domain is super valuable here as well: there's a lot of information, especially in the natural catastrophe space, that's happening pretty quickly, and being able to use it effectively and not throw it away is one of the biggest challenges. And that leads me to where we are as a team. There are a lot of interesting discussions about analytics-ready datasets, ARDs, and I think where we're oscillating to as a team is more around how we can create analytics-ready services. With SAR there are, as all these bullets suggest, so many potential different requirements depending on the question you're trying to answer. Are you interested in whether a building is flooded with a plus or minus 10 centimeter differential, or are you more interested in a broad town, or the impact on a much larger region? What you're asking really drives the need for a very specific type of digital terrain model, and it will also dictate the amount of co-registration and whether or not you're doing coherence analysis. So really, to create services or solutions, you have to have the efficient ability to create these analytics-ready services.
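The coherence mentioned above has a standard textbook estimator: the normalized cross-correlation of two co-registered complex (SLC) images over a small window. The sketch below implements that formula; it is the generic estimator, not ICEYE's production implementation, and the window size is an assumption.

```python
# Sketch: interferometric coherence |<s1 s2*>| / sqrt(<|s1|^2><|s2|^2>) with a
# win x win moving average. 1.0 = perfectly coherent, near 0 = decorrelated.
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(slc1: np.ndarray, slc2: np.ndarray, win: int = 5) -> np.ndarray:
    """Coherence map between two co-registered complex SAR images."""
    def boxcar(a):
        if np.iscomplexobj(a):
            return uniform_filter(a.real, win) + 1j * uniform_filter(a.imag, win)
        return uniform_filter(a, win)
    num = np.abs(boxcar(slc1 * np.conj(slc2)))
    den = np.sqrt(boxcar(np.abs(slc1) ** 2) * boxcar(np.abs(slc2) ** 2))
    return num / np.maximum(den, 1e-12)
```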
But ultimately the goal through all of this (that was a long, maybe convoluted way of saying it) is that we really want to preserve the complex physics, understand the domain, stay quantifiable, preserve that ability for humans to agree or disagree with what we're saying, and leverage the tooling effectively to make that happen. This is an example of one of our core products. We're very much within the natural catastrophe space, and beyond some of those geological applications we're looking at flood monitoring. It's a multidisciplinary solution. On one end: where the heck is it going to flood? We have a whole process around targeting and identifying locations that are susceptible or likely to be flooded in the next day or so. The beauty of owning the constellation is that we can take that information and task the constellation to collect imagery, and the hope is to catch the flood peak so that you really capture the maximum depth in a specific region. I would be lying if I said we always get it right; it's definitely a challenge, because this is a time-series event and it often takes many different samples of that information. Once we have collected the imagery, we use a combination of more traditional geospatial techniques on the SAR imagery, combined with machine learning in a segmentation approach, to get both the extent of the flood and the depth within that plus or minus 10 to 20 centimeter uncertainty. That is really critical at the building level. You can see in this image that the blue color, of course, is the flood. This was for Hurricane Ida recently in the US, the aftermath when there was a tremendous amount of flooding in the northeast of the US. You can see the individual buildings here, represented in terms of how much water they got (high, medium or low), and the blue color is the depth of that particular flooded region. Our goal is always to get this information out within 24 hours to the party of interest; in this case we worked closely with first responders, but other times it's insurance as well. Having that fidelity and that responsiveness is pretty critical. But we've had to create a lot of tools, simply because they're not always immediately ready. And so, going a little more into this question: there's an obvious tooling need in synthetic aperture radar imagery analysis, and I think there's tremendous potential, let me say it maybe controversially, for an opportunity to decentralize a lot of information. Like I mentioned before, these are cheaper systems to launch than your traditional optical sensors, and you have a lot more opportunity to capture information, so you're not necessarily competing for the same information that somebody with deeper pockets might have priority over. And the other interesting thing is that my background was originally more in the optical domain.
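The flood product described above combines traditional geospatial techniques with ML segmentation. As a point of reference only, a common non-ML baseline for the extent part is to threshold calibrated backscatter, since open water is smooth and backscatters very little. This sketch is that generic baseline, not ICEYE's method; the threshold value is an assumption.

```python
# Sketch: naive flood-extent baseline from calibrated SAR backscatter.
import numpy as np

def water_mask(amplitude: np.ndarray, threshold_db: float = -18.0) -> np.ndarray:
    """Boolean mask of likely open water (dark, specular surfaces)."""
    sigma0_db = 10.0 * np.log10(np.maximum(amplitude.astype(float) ** 2, 1e-10))
    return sigma0_db < threshold_db

def flooded_fraction(mask_pre: np.ndarray, mask_post: np.ndarray) -> float:
    """Share of pixels that are water after the event but were dry before."""
    newly_wet = mask_post & ~mask_pre
    return float(newly_wet.mean())
```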
Previously I was working on machine learning pipelines to extract things like the condition of structures and what is going on at specific locations. And we actually found that, even though we were creating what we thought were unbiased models ingesting high-resolution aerial or satellite imagery, our models would often perform best on data collects over very wealthy cities. That seemed to be largely driven by the fact that the better-resolution, better-processed imagery more often gets paid for by those deeper pockets. So there was inadvertently a socioeconomic bias in some of the modeling we were seeing with optical imagery. It's something that can be counterbalanced with a higher variability of better quality data, but I think there is fundamentally a limitation as to where the highest quality information lives, largely driven by the cost of acquisition. SAR is cheaper and can be more democratically leveraged for collection. That doesn't mean we're using it as effectively as we could be. We have started working with the European Space Agency Phi-Lab to open-source a lot of machine learning capabilities for SAR. We've started on the very naive side of just data handling and data cube creation, but we're trying to take that all the way to how you integrate these heavy data cubes into a machine learning pipeline leveraging PyTorch, for instance. And I did want to point out that there are a lot of great communities doing quite a bit of tooling, between EO College and a wealth of other GitHub repositories. But in general I think the tooling is still early, and there's a great opportunity to unify capabilities and improve accessibility so that we can get to the really interesting solution development. As part of that, like I mentioned before, SAR is unique in terms of the parameters you're dealing with: you have the time domain, you have a lot of material properties that are accessible that weren't in the optical, you have very particular components related to the acquisition geometry of these images, and you want to preserve the complex signal. All those things make it a challenge, but that's also the really exciting part of building more SAR capabilities. And I wanted to show you a little bit: ICEcube is our ICEYE machine learning data cube structure that we're working with. It's all open source on GitHub, we've been creating some notebooks on how to use it, and we're open-sourcing several ICEYE stacks to start playing with. This is the architecture that we use for our flood analysis and also for some of the deforestation workflows the team has been focused on. And just as my last aside, I wanted to share that we have 18,000-plus archive images that are publicly available on the website.
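A minimal sketch of the general pattern for feeding a temporal stack of co-registered SAR rasters into a PyTorch pipeline, as discussed above. This is explicitly not the ICEcube API; it assumes each acquisition is a co-registered GeoTIFF and uses generic rasterio and torch calls.

```python
# Sketch: a generic Dataset returning (T, C, H, W) tensors of patches cut from a
# temporal stack of GeoTIFFs. Paths, patch size and file layout are assumptions.
import numpy as np
import rasterio
import torch
from torch.utils.data import Dataset

class SarStackPatches(Dataset):
    """Patches from a time stack of co-registered single-scene GeoTIFFs."""

    def __init__(self, tif_paths, patch: int = 256):
        self.paths, self.patch = sorted(tif_paths), patch
        with rasterio.open(self.paths[0]) as src:
            self.height, self.width = src.height, src.width
        self.cols = self.width // patch
        self.rows = self.height // patch

    def __len__(self) -> int:
        return self.rows * self.cols

    def __getitem__(self, idx: int) -> torch.Tensor:
        row, col = divmod(idx, self.cols)
        window = rasterio.windows.Window(col * self.patch, row * self.patch,
                                         self.patch, self.patch)
        stack = []
        for path in self.paths:                      # one timestep per file
            with rasterio.open(path) as src:
                stack.append(src.read(window=window).astype(np.float32))
        return torch.from_numpy(np.stack(stack))     # shape (T, C, H, W)
```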
So if you're interested in playing around with some of the ICEYE images for your own applications, please have a look at that. And we have been partnering with ESA, as part of their Third Party Mission program, to make all modes of our imagery available for free to researchers, from the very high resolution 0.25 meter mode all the way to the 15 meter mode. It requires a proposal, but in general both new collections and archive data are free. So that's just another way to start getting your hands on some of these additional datasets. And I think that might be it. So thank you so much, I really appreciate the opportunity, and Rob, thanks for organizing such a great session. Oh, awesome. Thanks so much, Shay. Really great work; I've been a fan of your work for a while, and you didn't see me, but I was snapping through a lot of that. So, super cool. I really like the point about ARD versus analysis-ready services: you can't necessarily build up static datasets for all the derived products, so what are the types of services that can leverage the dynamic processing? I think it's a really important point; maybe that needs to be made at the ARD conference next month. So there are a number of questions. One is: what is the most surprising or unexpected use of ICEYE data by a user? Oh, well, we did get an interesting request asking whether we could calibrate off of a sculpture in the middle of a town in Italy, using the sculpture as a retroreflector to calibrate the imagery against. It was part of a campaign around incorporating both art and science. We haven't done it yet, but that was a weird but cool opportunity for sure. I would say a lot of the requests are common needs: quick response, infrastructure, dams breaking, volcanoes erupting. There's a cluster of very consistent requests in general, though. Totally. And one interesting one that I don't understand too much, but I'm sure you will: with 14 satellites so far, is the phase return still on the horizon, or is that something which might be done soon for specific uses? Yeah, absolutely. We're still working on getting these coherent baselines. There's this whole world of SAR of trying to not only put various satellites into a coherent repeat ground track but then keep them there, and we've been working on a lot of that. In terms of the phase returns, the team is working on a lot of new capability there, so hopefully it will be more broadly available soon, but it's still a bit early in the development. Awesome. And so, what's the weirdest radar artifact which you can describe? Weirdest... I'm going to be truthful and tell you that I'm pretty new to radar, so I'm sure people have seen a lot weirder things than I have. But some of the things that I find particularly interesting, that I'd never thought about from an optical perspective, are the range ambiguities, where essentially, because you have this active system (I'm going to do a shitty job explaining this), you have this active system that's sending out a pulse of electromagnetic energy at X-band wavelength to a site, and then it's collecting it.
Sometimes you send out a pulse and then collect it again when you're already collecting over another region, so you essentially wind up with these weird echoes that get geolocated. You'll have things like part of a city that is imaged suddenly appearing in the middle of an ocean, just because of the way the information bounced back from the city to the satellite and the time it took: by the time the satellite collected it, it was already collecting information over the ocean. You get a lot of these things, and it opens up a whole world for some really interesting machine learning correction as part of the processing steps. Whoa, so cool. What resolution and horizontal/vertical accuracy can you get with SAR? Well, I don't know what the theoretical limits are. I know our systems are at 0.25 meters as the best resolution we have at the moment. There's oddly a really interesting political dialogue here, where there are limits on what you're allowed to publish. So I think we probably have not actually explored the full resolution capability in that domain, but just from a political aspect there are enforced limits on resolution. So that didn't really answer your question, but that's where we are today. I think it did. Awesome. And the last point: there are a couple of folks who would like to use ICEYE for disaster insurance claims and are interested in pricing and all that information, so if you want to drop some contact information in the chat, I'm sure they would love to follow up with you. Yeah, absolutely, I'll definitely share some contacts. Cool. Well, thank you again, Shay. I really appreciate it, and have a great rest of your day. My pleasure. Thank you. Take care. Thank you.
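The "city in the middle of the ocean" effect described a moment earlier is the classic range ambiguity of a pulsed radar: an echo can only be attributed unambiguously within R_u = c / (2 * PRF), and a strong return from beyond that folds back by multiples of R_u. The PRF below is a generic placeholder, not an ICEYE system parameter.

```python
# Sketch: range-ambiguity arithmetic with an assumed pulse repetition frequency.
C = 299_792_458.0  # speed of light, m/s

def unambiguous_range_km(prf_hz: float) -> float:
    return C / (2.0 * prf_hz) / 1000.0

def apparent_range_km(true_range_km: float, prf_hz: float) -> float:
    """Where an echo from true_range_km appears after folding into [0, R_u)."""
    return true_range_km % unambiguous_range_km(prf_hz)

print(round(unambiguous_range_km(5000), 1))   # ~30.0 km window for a 5 kHz PRF
print(round(apparent_range_km(75, 5000), 1))  # a 75 km echo shows up at ~15 km
```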
|
Climate change has immediate and observable impacts on Earth. The increase in natural catastrophes and our lack of community and global readiness is apparent. Commercial, affordable SAR constellations will result in the decentralization of Earth observation, improving natural catastrophe responsiveness, resilience, and broader community engagement. SAR sensors, with the ability to observe the Earth in ways entirely inaccessible by optical and infrared sensors, present a unique capacity to quantify catastrophic impact and plan for future improvement. But SAR imagery alone is not enough: the high resolution, coherence, and frequency of revisit are only as good as the tooling shared to facilitate insights and actions. Many tools are currently available, but can also be a challenge to scale with modern cloud resources as they are often developed by scientists for scientists or locked in classified government environments. What types of open source tools are required for sustainable solution development with SAR and how can we improve existing open source contributions as we exponentially collect imagery? How can we liberate knowledge often owned only by SAR experts? In this talk, we will discuss natural catastrophe solution development at ICEYE with high temporal and spatial resolution SAR and how we overcome the challenges through partnerships such as with ESA Phi-Lab and the ESA Third Party Mission (TPM) program. Authors and Affiliations – Strong, Shay ICEYE, Finland Track – Use cases & applications Topic – Data collection, data sharing, data science, open data, big data, data exploitation platforms Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57279 (DOI)
|
Welcome to my talk, News from actinia. I will present the news of the last two years in actinia, because last year unfortunately the FOSS4G couldn't take place. My name is Carmen and I work for mundialis, which is located in Bonn. It was founded in 2015 and focuses on the processing of large EO data. I started there already in 2015, so I have been part of the history for quite a long time. What we will see in this talk: a quick summary of the previous talk, which I gave at FOSS4G 2019 in Bucharest, for those of you who don't know actinia yet, because it just became an OSGeo Community Project in 2019. Then I will show you some new features, also features in related repositories, and finally a short overview of actinia in different projects. First the greatest news: we managed to release actinia 1.0 just in May this year. It was possible because we did an actinia code sprint for some days and did quite a lot of work there, mostly structural work rather than many new features. So let's see what actinia is and what the core concepts are. To understand this we need to understand the core concepts of GRASS GIS. I won't explain what GRASS GIS is, because I guess you know it, but two concepts are fundamental. One is the GRASS database, which is the storage for EO data. Within it there are different locations; each location has a different EPSG code, so the GRASS database is divided by it. Within each location we can have different mapsets, which we can see as the projects we work in, and within these mapsets are the actual EO data, like raster data, vector data and space-time datasets. Another fundamental concept are the GRASS GIS modules, of which there are quite a lot. The first letter always indicates the family, for example vector, raster or imagery processing modules, and there are more than 500 modules, really a lot, and this is only in the GRASS GIS core; there are also many addons available. So what needs to be done in actinia to make GRASS GIS available as a REST API? One thing is to manage the locations, mapsets and geodata as resources. Another is to enable the use of the GRASS GIS modules. And we need user management to map different mapsets to different users, so everyone can have their own maps but also shared maps, plus limitations in pixels, because processing can become quite heavy. This was done by having two GRASS databases. One is permanent, global and read-only; each of the actinia GRASS GIS containers you see to the right in the middle mounts this database and has access to it. Then there is the second one, the user database, which depends on the user: whenever a user starts a process, that user's database is mounted so that it doesn't get in conflict with the other users, and because it's mounted, each container can access the data while sharing the same data pool. For the containers to communicate we use the Redis database. For example, if a process is running in one actinia container and the user wants to know its current status, they send a request to actinia without knowing which container will respond, and any container can look up the status of the process in the Redis database. This whole thing is packed into Docker and can be deployed into a cloud environment.
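To illustrate what "locations, mapsets and geodata as resources" looks like from a client's point of view, here is a minimal sketch. The base URL, credentials, location name and the API version prefix (it differs between actinia releases) are placeholders for a hypothetical deployment, not a specific instance.

```python
# Sketch: listing actinia resources over the REST API with HTTP basic auth.
import requests

BASE = "https://actinia.example.org/api/v3"   # hypothetical deployment
AUTH = ("demouser", "secret")

locations = requests.get(f"{BASE}/locations", auth=AUTH).json()
print(locations)   # JSON listing of all locations

mapsets = requests.get(f"{BASE}/locations/nc_spm_08/mapsets", auth=AUTH).json()
rasters = requests.get(
    f"{BASE}/locations/nc_spm_08/mapsets/PERMANENT/raster_layers", auth=AUTH
).json()
print(mapsets, rasters)
```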
We have experience with Docker Swarm, OpenShift and Kubernetes, and actinia runs there successfully. We also use Terraform to start a VM and run actinia there. How does it look in practice? We can get the locations, and as a response we get a JSON file with all the locations listed. The same goes for the mapsets, so we get a list of all available mapsets. Then we have raster layer and even render endpoints, so we can for example request which raster layers are contained in a certain mapset, and we can even render maps as a result, which is quite handy if we just want a quick preview of what was processed. The last two slides connected to the first GRASS GIS concept, and this slide brings together the GRASS GIS modules and the actinia concept. There are two kinds of processing: ephemeral and persistent. With ephemeral processing you need to specify an output format, for example GeoTIFF for raster data, and the results can be downloaded after processing. The other way is persistent processing, where a mapset needs to be specified and the result is not exported from GRASS GIS but stays in the GRASS database and can be used for further processing. Okay, now let's see which features were added in the last two years. First we look at two features which together provide one functionality: the storage of interim results and job resumption. The storage of interim results enables saving after each process step. A process chain for actinia is built up of different steps, which are most likely different GRASS GIS modules, and after each successful step the mapset is saved so it can be used later. The second feature, which makes this really useful, is job resumption: when a job fails it can be started again and the calculation resumes where it left off. This is especially handy for large jobs which run for multiple days; if after maybe three days some error happens, we don't need to restart from scratch but can use what was calculated before and continue. This feature also lets us see the logs for each iteration, so if it runs into an error after three days again, the two results can be compared and we can see whether it is really a bug we need to fix. Two other big features are related to space-time raster datasets, and even though they look different they belong together: one is for ephemeral processing and the other for persistent processing. For ephemeral processing we can now export space-time raster datasets, which was not possible before; we can use this by setting the type to strds, which means space-time raster dataset, and we then get multiple GeoTIFFs for download. The second feature is for persistent processing, and this was not included before because during processing actinia runs in a temporary GRASS database which needs to be merged back once the job is finished. Because the structure of a space-time raster dataset is a little more complicated than for vector or raster data (the reference to the raster data contains the mapset name), it was not implemented before, and with this feature it is possible now.
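A sketch of an ephemeral processing request, as described above: a process chain is POSTed, actinia runs it in a temporary mapset and exports the result (here as a GeoTIFF) for download. URL, credentials, location name and the API version prefix are placeholders; the chain follows the documented actinia process chain format, but check it against your actinia release.

```python
# Sketch: ephemeral processing with export, then polling the job status.
import time
import requests

BASE = "https://actinia.example.org/api/v3"
AUTH = ("demouser", "secret")

process_chain = {
    "version": "1",
    "list": [
        {
            "id": "compute_slope",
            "module": "r.slope.aspect",
            "inputs": [{"param": "elevation", "value": "elevation"}],
            "outputs": [
                {"param": "slope", "value": "slope",
                 "export": {"type": "raster", "format": "GTiff"}}
            ],
        }
    ],
}

resp = requests.post(f"{BASE}/locations/nc_spm_08/processing_async_export",
                     json=process_chain, auth=AUTH).json()
status_url = resp["urls"]["status"]

while True:                                   # poll the resource until done
    status = requests.get(status_url, auth=AUTH).json()
    if status["status"] in ("finished", "error", "terminated"):
        break
    time.sleep(5)
print(status["status"], status.get("urls", {}).get("resources"))
```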
It still has some limitations because of a GRASS GIS concept: it's not allowed to have a space-time raster dataset with raster data from different mapsets, or you need to be in the mapset to calculate it, but we're still working on this. Another cool feature is the monitoring of mapset sizes. There are different endpoints available, and during calculation it is now possible to see the size in bytes of the mapset, because it can become quite large during processing. There are also endpoints to render these sizes, to compute differences on them, or to get only the maximum size as a result, which is very useful if we have a test process and need to estimate the storage which has to be mounted for actinia so it can run successfully. Then we have a little feature which is also very handy: the version output was enhanced. Before, you could only see the version of actinia and the plugins; now you can also see the versions of the plugins, the Python version and, most useful for us, the GRASS GIS version, and not only the version but also the revision, meaning the commit hash, so for each installed actinia we really know which GRASS GIS is used there. Another cool feature, which was actually one of the reasons we released the 1.0 version because it's a breaking change, enables the upload of local GeoTIFFs. It was a breaking change because the endpoint already existed but was used to create a new raster layer in a mapset; now it is used to upload local data, which was more useful for us, so we changed this endpoint. I just gave a quick overview of the big features we implemented; there are also smaller features, enhancements, documentation improvements and linting improvements, but one I want to highlight is the Helm chart. We developed a Helm chart, it's available on GitHub, and it can be used for Kubernetes and most likely also for OpenShift, but we haven't tested it on OpenShift yet. Then we also have sad news: we reached the end of life of actinia-gdi, the one available on GitHub. We still use it in some projects with project-specific adjustments, but the GitHub actinia-gdi has now become the actinia-module-plugin and the actinia-metadata-plugin. So now you have an overview of what happened in actinia core, and next I will give a short summary of what happened in some related repositories. One is the openEO API: we developed the openeo-grassgis-driver, which is, so to say, the translator from the openEO API interface to actinia. Here you can see the openEO Web Editor loading collections and processes. What we did is enable the usage of GRASS GIS modules, because openEO already specifies a number of predefined processes that can be used in every backend, but we also wanted to use the whole functionality of GRASS GIS, so we specified additional processes and we use the interface description of GRASS GIS for this. You can see the inputs and outputs and their types, and so the GRASS GIS modules can be used.
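A small sketch of the enhanced version output described above. The endpoint path follows the same hypothetical deployment as the previous examples, and the exact JSON keys vary between actinia releases, so treat the printed fields as illustrative.

```python
# Sketch: querying the actinia version endpoint, which now reports plugin versions,
# the Python version and the GRASS GIS version/revision (commit hash).
import requests

BASE = "https://actinia.example.org/api/v3"
info = requests.get(f"{BASE}/version").json()

for key in ("version", "plugins", "python_version", "grass_version"):
    print(key, "->", info.get(key))
```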
How we achieved this was to develop the self-description in the actinia-module-plugin, which before was actinia-gdi. This allows us to get all GRASS GIS and actinia modules via an HTTP endpoint, and because, as I said, there are more than 500, there are filters available so you are not overwhelmed by all the modules and can pick what fits. To the right you see a JSON output for one GRASS GIS module or process, and this conforms to the openEO API with one exception: at the bottom we have an array of returns, and in openEO only one output is allowed, but because GRASS GIS modules can have multiple outputs we decided to break with this here, and the openeo-grassgis-driver will then create multiple processes, one for each output. We also added template management in the actinia-module-plugin. To the left you can see a normal process chain which can be sent to actinia for processing; it consists of two modules, g.region and r.slope.aspect, and this process chain can only be used for the elevation map in a certain mapset. To make it reusable we implemented the concept of process chain templates, and to the right you can see the same process chain but with the elevation map as a template variable, and around it an ID and a description of the template itself. This template can be stored in actinia: with HTTP requests we implemented a whole create, read, update and delete management of these process chain templates. With POST they can be created, with GET they can be retrieved again, and they can be updated and deleted as well. One more cool feature is that they are not only readable via the HTTP GET endpoint for templates, they now also appear in the modules endpoint which I explained before. I said the modules endpoint returns GRASS GIS modules and actinia modules, and you might have wondered what actinia modules are; now you know: they are basically process chain templates. In the modules endpoint it looks like the JSON below: for the parameters you only have the placeholder elevation map which was defined in the template, and the description is parsed, from its first appearance in the template, out of the GRASS GIS interface description. And the best thing is that we can really use it for processing. Before, we saw the process chain JSON, which was quite long; now we have this JSON which calls our actinia module and only needs to set the elevation map as a parameter. It might not appear huge, but you can imagine that for projects where we do a lot of processing the process chains can get quite large. I used this one because it still fits on the screen, but we have process chains which are really huge, and this makes it very visible what needs to be changed. At the bottom I just show that the JSON can be posted to a processing endpoint and the result can then be retrieved as a map, and we can see it with ephemeral processing. Okay, now you have an overview of the new features in actinia and also of the features in related repositories. I still have one topic I want to show you, which is the usage of actinia in projects. I picked three projects because they are really different. The first one is FTTH, which means Fiber To The Home, by Deutsche Telekom, where we develop customized processes. One example is the calculation of potential trenches, to see where trenches for the fiber might be dug. In this example there are multiple components involved: FME is started, and it then informs Steep.
Steep will start a VM and inform actinia-gdi. In this project actinia-gdi is still alive, because it has multiple tasks, for example adding the status of the process into the database, and actinia-gdi then reaches actinia-core, which was started by Steep on the VM, to do the real processing. Another example is LOOSE. It's a technology project, and the components shown here, as with the other two projects, are just a small selection of the parts most related to actinia; of course there are many more components involved. With LOOSE we also do processing with GRASS GIS and actinia; here we also have a STAC catalog of data, and the processes are started in multiple ways. One example is to start them via the openEO Web Editor, which you already saw, which starts a job with the openeo-grassgis-driver, which translates it into a process chain for actinia, which then does the processing. One last example is CERMOSSA. It's a project together with terrestris, and it's different as well, because we have a web client built with react-geo, which is a combination of React and OpenLayers, and SHOGun. The job is started by the user from the web client, with a dedicated interface for certain processes; the job then runs in actinia, and when the job is finished a layer is published, which is then shown back in the react-geo web client. And last but not least, the outlook. Coming up on Saturday there is an OSGeo code sprint where we take part with actinia; if you're interested, please join us. Things we want to tackle in the future are the STAC integration and also the actinia authentication, removing it from the core to make it available via other software, for example Keycloak. To the right you can see other related projects; they are linked and you can find them all on GitHub. And that's it for now. Thank you very much, do you have any questions? All right. So thank you, Carmen, for the presentation. I think you're muted. Yes. You're welcome. Thanks, and we have basically one question from the chat, from somebody who is new to actinia: do you implement the OGC APIs, or is the REST API unique to actinia? Yes, we thought about it and it's still somewhere in the background, but we focused on the openEO API first, because it's a standard as well. The actinia API itself is unique; it's not a standard, but currently openEO makes sense for us, so we developed the openEO translation. Okay, thank you. We're getting more questions; here is the next one: have you played around with Kubernetes jobs or cron jobs to run actinia jobs? Not yet, but we're thinking of developing something to distribute the jobs a bit smarter than is done currently, so this would definitely be a way to go, yes. Okay, thanks. One question from me: how do you plan to support STAC? Is it going to be an output, are you using it as an input, or are you going to support the STAC API? What is the plan? Two approaches. One is that we want to include existing STAC catalogs, kind of harvesting them, so they are referenced, actinia can be queried and they are returned. On the other hand they can be used for processing: actinia will retrieve the data and put it in a form that GRASS GIS can calculate with, and then it's passed to GRASS GIS for the calculations. Excellent, thank you. We don't have any more questions from the chat, so I would like to thank you again for joining. Thank you for organizing and managing.
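Returning to the process chain templates shown earlier in the talk (g.region plus r.slope.aspect with the elevation map as a template variable), here is a sketch of the idea. The wrapper keys and the placeholder syntax are written from memory of the actinia-module-plugin documentation and should be checked there; treat this as an illustration of the shape, not an authoritative payload.

```python
# Sketch: a plain process chain, the same chain as a reusable template, and a call
# to the resulting "actinia module" with only the placeholder left to fill in.
plain_chain = {
    "version": "1",
    "list": [
        {"id": "set_region", "module": "g.region",
         "inputs": [{"param": "raster", "value": "elevation@PERMANENT"}]},
        {"id": "slope", "module": "r.slope.aspect",
         "inputs": [{"param": "elevation", "value": "elevation@PERMANENT"}],
         "outputs": [{"param": "slope", "value": "slope"}]},
    ],
}

template = {
    "id": "slope_from_elevation",
    "description": "Compute slope for any elevation raster",
    "template": {
        "version": "1",
        "list": [
            {"id": "set_region", "module": "g.region",
             "inputs": [{"param": "raster", "value": "{{ elevation }}"}]},
            {"id": "slope", "module": "r.slope.aspect",
             "inputs": [{"param": "elevation", "value": "{{ elevation }}"}],
             "outputs": [{"param": "slope", "value": "slope"}]},
        ],
    },
}

# Once stored (HTTP POST), the template appears in the modules endpoint and can be
# called like any other module:
call_template = {
    "version": "1",
    "list": [
        {"id": "run", "module": "slope_from_elevation",
         "inputs": [{"param": "elevation", "value": "elevation@PERMANENT"}]},
    ],
}
```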
Thanks. So we are going to be back in about five minutes with the next presentation. Stay tuned.
|
„Hello, my name is actinia. Some of you might know me already. I became an OSGeo Community Project in 2019 and my first appearance on a FOSS4G conference was in 2018 where I was presented in a talk. For those of you who do not know me yet - I have been developed to exploit GRASS GIS functionality via an HTTPS REST API with which GRASS locations, mapsets and spatio-temporal data are available as resources to allow their management and visualization. I was designed to follow the purpose of bringing algorithms to cloud geodata, the daily growing big geodata pools in mind. I can be installed in a cloud environment, helping to prepare, analyse and provide a large amount of geoinformation. But also for those who do know me already - do you know the details about what happened the last 2 years? A lot! Usage of interim results, helm chart, enhanced exporter, monitoring of mapset size, QA enhancements and a split of my plugins including module self-description of more than 500 modules are some key words to just name a few. With the ongoing development of the openeo-grassgis-driver, you can talk to me either in my native language or via openEO API. I would also like to tell you some interesting facts about me interacting in different projects. So come on over!“ Authors and Affiliations – Tawalika, Carmen (1), Neteler, Markus (1), Weinmann, Anika (1), Riembauer, Guido (1) (1) mundialis GmbH & Co. KG, Bonn, Germany (https://www.mundialis.de/) Track – Software Topic – Software status / state of the art Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57280 (DOI)
|
I'm hearing several sounds, sorry for this. Yeah, welcome everyone to this morning's session, where we'll look into real-world application and usage of OSGeo software. And without further ado, our first speaker is Giovanni Allegri. He's from the beautiful Tuscany, Italy. He works at GeoSolutions Group as an architect and product manager, and he's known under his username geohappy. He's in the GeoNode project steering committee (I like the username) and a long-time contributor. GeoNode, by the way, is a, or I should say the, open source geospatial content management system. Giovanni will take us through a gallery of projects and use cases to showcase the versatility and effectiveness of GeoNode. So Giovanni, the floor is yours. Okay, thanks, Jos. Hello everybody. The introduction was done by Jos, so: I'm the product manager for GeoNode at GeoSolutions. In this presentation I would like to share some cases where GeoNode has been customized, adapted and bent to specific requirements. You know GeoNode; well, this is our company, which is a core maintainer of GeoNode, MapStore, GeoServer and so on. We are working hard, we have quite a big team on GeoNode, and our experience with GeoNode has been wide in the five, six years we've been working with it. Like anybody, we started with the GeoNode that you find in the releases inside the main repository, the so-called vanilla GeoNode, which is the core of GeoNode and has everything that is needed: the Django application that provides all the functionalities and services for GeoNode. I will go quickly through the features of GeoNode; I think many of you already know it, so I won't spend much time on this. It's a content management system for geospatial datasets, including documents and media assets, that lets you compose applications on top of this data. The core is sharing, collecting and using these data in many different ways. It's open source, yes, with an open source license, and it's a well-established platform with a mature set of libraries, where GeoNode uses a lot of open source geospatial software. It's built on top of Django, so it's a Python application. And this is a platform targeting many use cases, for users, administrators, but also developers. Developers are welcome in GeoNode because it can be easily extended: from the beginning, GeoNode was designed to be as open as possible to additions, extensions and customizations. This is provided by the Django platform itself, a framework with a lot of facilities to extend and integrate additional functionalities, and GeoNode inherits all these capabilities from Django. Currently GeoNode is a suite of services; these are just the main ones. It's a Django application which provides by itself a client built on MapStore, a webmapping framework built on top of React. Behind the scenes we have GeoServer, which provides the OGC services alongside Django, we have pycsw, and we have PostGIS, PostgreSQL and the GDAL libraries for the processing behind the scenes. In general, when we deploy it we have other services too, but these are the main components. Now a very quick overview of the workflow: we have files, remote services and remote files, which is a work in progress but already in good shape.
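To make the "suite of services" concrete, here is a sketch of consuming a GeoNode instance from a third-party application: the REST API for the catalogued resources and the OGC services published by the bundled GeoServer. The base URL is a placeholder, and endpoint and key names vary between GeoNode 3.x and the master branch, so check the API documentation of your instance.

```python
# Sketch: reading resources via GeoNode's REST API v2 and the WMS capabilities
# exposed through the bundled GeoServer. Illustrative, version-dependent names.
import requests

GEONODE = "https://geonode.example.org"

resources = requests.get(f"{GEONODE}/api/v2/resources", params={"page_size": 10}).json()
for res in resources.get("resources", []):
    print(res.get("resource_type"), res.get("title"))

caps = requests.get(f"{GEONODE}/geoserver/ows",
                    params={"service": "WMS", "request": "GetCapabilities"})
print(caps.status_code, caps.headers.get("Content-Type"))
```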
So any kind of resource, be it a document or a spatial dataset, can be imported or uploaded into GeoNode, either as a local resource, where you physically copy the data into GeoNode and it is stored on the file system or in the PostGIS database, or, in the case of remote services, where the data refers to the remote resource. With local data, the datasets and the documents are ready to be used to build applications on top of them, or, it's not mentioned here but of course, the data inside GeoNode are ready to be used directly as OGC services and CSW services: the catalog already provides the datasets and all the metadata. But you can go on and build other applications on top of these base resources: maps, GeoStories, dashboards and whatever other GeoApp. GeoStories and dashboards are GeoApps; this is a generic concept in GeoNode, so any other module can implement a specific GeoApp. GeoStories and dashboards, for example, are GeoApp implementations provided by the so-called MapStore client application, which is the one that also provides the new default front-end client for GeoNode, and in addition provides these implementations for two kinds of GeoApps. GeoStories and dashboards are already available in the 3.3.x branch, the development branch of the 3 series of GeoNode. In the end you can use this data, both the resources and the applications built on top of them, as OGC services, through a REST API, or you can embed them, because all these resources provide a way to be viewed inside third-party HTML pages, so GeoNode has embedded viewers. As you see, beyond being a management platform, where you can manage permissions, users and people, publishing workflows and approval workflows, in the end, for end users or developers, it's a platform that can provide all the kinds of services that you might need from a third-party app. This is the core of GeoNode, but on top of this core, on top of these capabilities, we often need to build custom applications, and we have a large experience with this. GeoNode by itself cannot address all use cases, of course. So the general approach is to wrap GeoNode inside the so-called GeoNode project. A GeoNode project is a container, a Django project which contains GeoNode as an app. We have the geonode-project repository, which is a template GeoNode project. With a command coming from Django you can materialize this project and so generate a project, like, if you come from React, the generators such as create-react-app. So it's a generator that creates a project with the right domain, the name of the project and so on. It's a custom Django app, and with this app you can follow all the patterns provided by Django: you can use the overriding of templates, you can make extensions to the models, you can even include your own apps that either interact or don't interact with GeoNode. So you can create your Django project bundled with GeoNode itself. And we did it several times; to tell the truth, the GeoNode project is the way we always deploy GeoNode. We never deploy vanilla GeoNode, we always deploy a GeoNode project instance.
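A minimal sketch of the Django extension pattern just described: in a generated geonode-project you add your own app next to GeoNode's apps (in settings.py, mirroring the project's own pattern of extending INSTALLED_APPS) and wire its URLs into the project urls.py. App, project and endpoint names here are hypothetical; GeoNode's models remain importable from your app, though exact field names can differ between GeoNode versions.

```python
# Sketch of a tiny custom app living next to GeoNode inside a geonode-project.
# In settings.py of the generated project you would add:
#     INSTALLED_APPS += ("my_custom_app",)
# and in the project urls.py:
#     path("", include("my_custom_app.urls"))

# my_custom_app/views.py
from django.http import JsonResponse
from geonode.base.models import ResourceBase   # GeoNode's resource model

def featured_resources(request):
    """Custom endpoint returning titles of resources flagged as featured."""
    qs = ResourceBase.objects.filter(featured=True).values("title", "uuid")
    return JsonResponse({"featured": list(qs)})

# my_custom_app/urls.py
from django.urls import path

urlpatterns = [
    path("api/featured/", featured_resources, name="featured-resources"),
]
```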
Even the demo that you can see online, the stable demo, the development demo, they are GeoNode projects. So from the past, we have, you see, these are old projects, but I mean, it's a quick overview of some examples of the customizations. You will see here, you have a view that is not from GeoNode. We have buttons, actions, tabs, views that are not provided by GeoNode, they are provided by the custom application. And but they are integrated both functionally and visually integrated into GeoNode. So you see in the top and the bottom, you see this, it's a GeoNode. For UNESCO, we started the work that is being the first test to review the homepage for GeoNode and all layouts of GeoNode. So you see a grid system, a grid of cards in the homepage. And this is what you will see in the master branch of GeoNode now. So also in this case, this is a GeoNode 3, but with a custom homepage and with a custom react application that drives this homepage. But all the rest is other templates and the Chang of views that you know from GeoNode. In this case, we completely changed also the UI, so you cannot say that it's GeoNode if you don't know it. But behind the scenes, it's GeoNode. So only the back end is GeoNode and the front end is completely custom. And for this case, for GFDRR, this is a client built on top of Map Store client, but with the custom table of contents, custom explorers of data and the histogram charts. So as you see, I mean, with the infrastructure provided by the Django framework, you have the freedom to do whatever you want, I mean, extending, replacing, back end and front end. Okay, and just keep what you need in the end. This is another example where we have also changed the filtering on the side, the facets, filtering in a custom template. So we have added filtering tabs that are not available in a standard GeoNode. And with new menus on the top, some of these customizations are available from the Django admin. So you can change the logo, you can change the main color of the web of GeoNode directly from the Django administration. So without having to change the templates or the source code, of course, these are just the basic changes that you can do. If you want to do more, we will see later there are more advanced ways to do that. Next to platform is the current, I mean, it's a client for us, a GF solutions client. And this has given us the opportunity to start the GeoNode master branch. So what will become the four series of GeoNode? And it's a complete refactoring of the front end and of some important pieces and components of the back end. The storage system, the resource management system, the remote services system. So all these services have been completely refactored. The UI has been optimized a lot. It's a mix of single page application and Django templates. I will show you briefly. And we have reduced a lot the number of clicks to perform the usual actions on resources. You have quick previews of resources without having to dig into the hierarchy of resources. So bring your own GeoNode. So that's what I was saying. You have a GeoNode project. You have a GeoNode Django app. You can put any app that you want behind. And then this is a MapStore client. The MapStore client is exactly an app inside the Django project, which provides the client, the MapStore front end client, which is the default client for GeoNode now. It provides its own REST API. And this is used, for example, for the dashboards and GeoStories resources that I was showing before. And it interacts with GeoNode. 
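As a rough illustration of what "managing GeoNode through the REST API" can look like from the outside, here is a hedged Python sketch against the v2 API the talk refers to. The base URL is an assumption, and the /api/v2/resources endpoint and the field names are recalled from the v2 API rather than taken from the talk, so they may differ in your GeoNode version.

```python
import requests

BASE_URL = "https://geonode.example.org"  # assumption: a GeoNode master/4.x instance

# List the first page of resources; authentication is omitted for public resources
resp = requests.get(
    f"{BASE_URL}/api/v2/resources",
    params={"page_size": 10},
    timeout=30,
)
resp.raise_for_status()

# The v2 API returns the items under a "resources" key in current versions;
# .get() keeps the sketch tolerant if the payload shape differs on your instance.
for res in resp.json().get("resources", []):
    print(res.get("pk"), res.get("resource_type"), res.get("title"))
```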
So you see that if you want to change the back end of the front end, you can just take away MapStore and redo it as you prefer. I mean, you have everything to manage GeoNode through REST API and so on. So from the point of view of customizing GeoNode, what are the objectives for the next major versions? We want to improve the uploaded storage. And this is a work in progress. Resource cloning, meaning versioning of resources, is already done. We have abstract map, big engine. So GeoNode, just ever is an abstract element now. We have a harvester and extended permission systems and so on. But what I really want to say now is that you have a new SP client, custom templates, support and an extended REST API. The REST API v2 is almost complete and it really gives you the ability to interact however you want with GeoNode, both for the, for viewing resources, searching resources, but also managing GeoNode itself, if you have, of course, the credentials to do that. And most of these is already available to test the master branch. So yeah, it's in a good shape. But we have a brand new front end with the React, with everything, all the tool set coming from Map Store. The home page, SP Refactor, we have a single page with infinite scrolling, which works also as a workspace for users where you can manage your own resources. We have a single view of the resource. So without, it's an editor, which is the viewer, full screen viewer of any resource, just partial or document or whatever. And it's also an editor if you have the permissions. And the back office is, well, we are, for the moment, we are reusing the legacy pages, even though integrated, visually integrated with the new styling, the new front end style. And we are, I mean, during the next months and next years, we will probably transition all the back office to the new front end. One of the things that have been discussed with the community was the ability to maintain the ability for developers to use Django templates, because not everybody is able to work with React or wants to work with React. Oh, maybe they already have a lot of custom applications and it's not feasible to port these applications to a JavaScript client, a React client. So we did, we reworked the architecture of the front end so that now you can mix React clients, your own React components with standard old Django apps. So there's a template, a base template that is wrapping both the React client and the old Django templates here. So you have the best of the two worlds. Even it's a bit strange. I mean, someone doesn't like these kind of hybrid approaches, but this is the best of the two worlds in this moment. So we can support any scenario and any context. So if you don't want to use React client, you can go on with the standard Django templates. No problem. We have broken the standard, the Django templates into snippets, very granular snippets, so that you can change the single pieces of the templates and you can work on the styling of various in a granular way. So you can just override this custom or provide your custom theme and you can change any single piece of these snippets. You can also the configuration of the client has been made overriding. So you have many ways to provide this configuration from static JavaScript, template, settings, whatever, and everything is merged dynamically. So you can have many channels to configure your application or provide custom configurations to your application. And this is an example. 
I won't dig into the details, but I mean, the documentation will tell you all of this. So for the future, well, we will work on enforcing the privacy much more and providing the privacy API to custom applications in a better, simpler way. And also we have the ability to dynamically provide MapStore plugins inside the client, so without having to bundle custom plugins into the MapStore bundle. So you can create plugins. We have an example. I will show you here. I will go fast forward because I see that time is passing. This is a custom MapStore plugin for meteorological data. And this has been put into the application without having to rebundle the MapStore client. So it's just a simple plugin, an extension. It uses the MapStore extension concept to bundle your own plugin. And the other thing that I was saying, yes, other big changes, the dataset style editor, but I think it, I mean, I can skip these. I mean, it's more about new ways to style your resources in an efficient way. I don't know if I have more time. I mean, I have really gone fast over the presentation, but I don't know. Yeah. Okay. Okay. I will go to the end. So the future, this is not a work in progress right now. We want to extend the permission system for styles and data exports, because this is really important to provide security on top also of the styles and the data exports, because I mean, every context has its policies on these things. And this is not strong enough. I mean, you have a permission system, but it's quite hidden. So you don't have the means to control it easily. Then, complete support for non-default geospatial workspaces on GeoServer. So you can compartmentalize, you can already do it, but we are improving the compartmentalization of data on the back-end services. And then also we are planning to work on a new rules management on GeoServer, which can pull the rules from external services, so an authorization service that can provide the rules to the GeoServer services. So we want to improve also the integration inside multi-service infrastructures with custom policies. So you just have to provide the permissions for GeoFence and GeoServer as they understand them, but the way you build these rules is up to you. So you have the freedom to go even beyond what GeoNode does. Yeah, we have data sharding, we have back-end services. This is what I was saying before, users and group partitioning. We already have all these concepts, but in the most complex scenarios, in particular what someone calls the multi-tenancy scenarios, maybe GeoNode is not ready to be used in a multi-tenancy environment or to provide multi-tenancy services. You can get there with some effort, you can more or less have it, but it's probably not the right way to do that. So probably the best way is to deploy multiple GeoNode nodes and make them share single sign-on, shared authentication and authorization services, share GeoServer with data sharding and so on. And maybe you get better results. So by the way, many of these improvements are only ideas for the moment, because I mean of course they are ideas that are coming from our experience of deploying GeoNode, even in complex and enterprise-level scenarios. And so we have the pieces, even if they're not built in, ready to be used in GeoNode. 
Of course, these customizations, the way you deploy the customizations that you want to deploy, including secrets, including customizations to the services, I mean services like the Celery service, which is the broker and task manager for the asynchronous tasks, or the way you want to expose GeoServer through NGINX, you might want to change all of this. And this is doable because the GeoNode project provides all the Dockerfiles and the Docker images that you can tweak to your specific needs. And that's what we do. We leverage in our deployments custom Docker overrides and Docker images. This is the way to go and this is the way we are improving. Okay, thank you Giovanni. Yeah, for the sake of time keeping, and there are some questions still. Thank you very much. I'm very impressed each year with how GeoNode progresses. I see there are at least two questions. We have two minutes, so here, I put it on screen and I read it for listeners: is GeoServer always required for a GeoNode deployment? Can you also use MapServer instead of GeoServer? At the moment, yes, I mean, GeoServer is the only one. But the architecture is open to other back-end implementations, but they must be implemented. Okay, because you recently switched from, let's say, GeoNetwork to pycsw, for instance? I understood. Yeah, also GeoNetwork was removed. The support for GeoNetwork was removed from the master branch because there weren't enough maintenance resources for it. So by the way, in both cases the architecture still supports implementing custom back-ends for the server. Okay, that's good to hear. Let's have one last question. We've heard about GeoStories, but can they also be created via the GeoNode REST API? Yes, in particular not the GeoNode REST API but the MapStore client REST APIs. So the MapStore client, as I was showing before, provides the REST APIs for this particular kind of GeoApp. And by the way, it's a very thin REST API. So I would say yes, I mean, GeoNode as you have it, with the MapStore client and so on, provides the REST API to build GeoStories and dashboards. Okay, thank you very much. Yeah, we have to move on. Well, thanks again, Giovanni, and I hope more people get inspired now with using GeoNode. Thanks.
|
GeoSolutions has been involved in a number of projects, ranging from local administrations to global institutions, involving GeoNode deployments, customizations and enhancements. A gallery of projects and use cases will showcase the versatility and effectiveness of GeoNode, both as a standalone application and as a service component, for building secured geodata catalogs and web mapping services. GeoNode is a Web Spatial Content Management System based entirely on Open Source tools whose purpose is to promote the sharing of data and their management in a simple environment where even non-expert users of GIS technologies can view, edit, manage, and share spatial data, maps, prints, and attached documents. GeoNode was initiated in 2009 by the World Bank and OpenGeo but since 2011 has been entirely run by the developer community that the project has been able to attract. It claims some large organizations among its contributors, such as the United Nations, the World Bank and the European Commission, as well as many NGOs and private companies. GeoNode is based on a set of robust and widespread open source components, such as Django as the base framework, GeoServer for geospatial data management and OGC services, and MapStore as the mapping application. It can also communicate with PostgreSQL for vector data management. Lastly, ongoing and future developments will be presented, ranging from the upcoming integration with MapStore to the monitoring and analytics dashboard or the support for time series data. Authors and Affiliations – giovanni.allegri@geo-solutions.it alessio.fabiani@geo-solutions.it Track – Use cases & applications Topic – Software/Project development Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57281 (DOI)
|
Again, and we bring on stage here Marc Jansen, from actually two companies, right? mundialis and terrestris. You'll be by yourself, or will Till also be coming? Yeah, he's supposed to be coming as well, but in case he misses the schedule, then I can always do it alone. Of course, and well, it will be easy for you. At least we'll be playing a pre-recorded video, but I still will introduce you. You are going to present Optimized publishing of map and data services with GeoServer, GeoStyler and MapProxy. Marc is from Bonn in Germany. He's general manager at terrestris and mundialis, and he is a long-time architect and developer for a range of front-end technologies. You may know him as a core contributor to OSGeo projects like OpenLayers, GeoExt and GeoStyler, and he's a frequent speaker at conferences and conducts workshops. And yeah, the other speaker as well, because they will present the two of them in the video. So, well, Till Adams, also from Bonn, Germany, is the founder of both terrestris and mundialis, where he acts as a consultant and agile coach. He's co-founder of the FOSSGIS conference, that's a yearly conference of the German-speaking, so-called DACH countries, and he joined OSGeo early and presently is a member of the OSGeo board, and Till chaired the FOSS4G 2016 in Bonn, which was a great conference as well. So, and I think both gentlemen, or just Till, support the same football team with the acronym BVB. No, I'm a goat supporter, I'm FC Cologne. Okay, that's good. Diversity. So, without further ado, I will start the video and hope that is all. Wow, let's see, Till is here. Hi, sorry I was a little late. I had another talk. No, that's fine. You can lean back and we'll try to play the video file, take some popcorn. Here we go. Hello from Bonn. We are Till Adams and Marc Jansen, and we're going to present the talk about optimized publishing of map and data services with GeoServer, GeoStyler and MapProxy. The talk is not really for developers, I think it's better for users of the software, and we're going to show how you can do really fast and nice web map services with the three components. And in the end, we're going to show a little bit, an example. So, which road do we take? First of all, we talk a little bit about ourselves, but really short, and a little bit also about the talk. We're going to present the components I've mentioned before, and we're going to talk a little bit about optimization in style and performance. And for this, we really have cool components we want to present you. And at the end, we're going to try to sum up the whole stuff and give you at least one example, and also the URL to the example so you can see it on your own. So, about us, first of all, I'm Till Adams. I'm a shareholder of the company terrestris. I had the honor to chair the global FOSS4G conference in 2016 in Bonn. So, I know about the work that the organizers had before. Actually, I'm an OSGeo board member and I work mainly as a consultant and agile coach. Let me also present Marc Jansen, my colleague. We worked together for nearly 15, 17, 18 years, I don't know exactly. Marc took over the management of terrestris and also of mundialis, another company I founded with my partners a few years ago. He's also an OSGeo charter member. He's quite active in some very famous open source projects like OpenLayers and GeoExt. He's on the project steering committees and a core developer. 
So, that's the reason why he's going to talk about more of the technical stuff. And I just do the framework around that. We come from the company of Terrestris. We are an open source GIS service provider located in Bonn. And in Germany, we started actually in 2002. And from the beginning on, we focused on open source software, started with human map servers, stuff like that. And yeah, our main business is planning, development of projects. But we also do some consulting, support, trainees. And last but not least, we provide a, in the meantime, really popular free open street map based WMS, which is worldwide. And I'm going to show that later as our example. So, about this talk, we're going to talk about first why this talk. And I think most of you have heard before about GeoServer, MapProxy, maybe even some of you heard about GeoStyler. But I think I said it before, it's really important. This talk is really for you as users and not so much for the developers. And if you look back a few years, I think in the meantime, it's really easy to create a simple WMS with open source tools. If you just install GeoServer, for instance, you have a back end, you go in the back end, configure your WMS. But if you look at that WMS, you almost would see that styling is still a topic and performance in regards of map services and the web is always a topic. So, we're going to show how to set up a well designed and fast WMS service. And this is still also very often a topic. We as company, we get requests. Recently, we set up a really high performance city map, a background map for city of Stuttgart and other customers we have. So, what we really want to show you is a little bit of our experience. But of course, what we're going to present is not the one and only solution. In German, you say there are many ways to roam. But of course, there are also other solutions in the open source cosmos you can use. But what are we going to show where we really made experience with and we want to share the experience at least talk with you. We scan the title for bus words or for the components. You see in the logos of the components we're going to use. It's GeoServer, it's GeoStyler. Probably you didn't hear about that yet. But this is really a cool tool and MapRoxy in the end. Okay. And these are the more technical stuff. So, I hand over to Mark, my colleague. Yeah, thank you, Till for this nice introduction. So, as Till said, I'm going to present you the components that make up this stack that we will present. First off is GeoServer. So, you might have already heard about GeoServer. It's an awesome OGC compliant server for GeoData. It's written in Java mostly and is really, really a great tool to basically take any sort of GeoData that you have. And by using GeoServer, you can take this data and produce a lot of great services that you can then use to mix up your own applications. So, it's really, really widely used. There's some other tools that basically do the same thing, but have a different technology stack. So, human map server usually comes to mind or degree. So, I already said it. With GeoServer, you're basically free to use your input, your data in the way it is stored wherever it is. So, you can use vector files, for example, shape files, raster data, but of course, you can also connect to databases, my most beloved one, PostJS, for example, or also proprietary ones, MSSQL or Oracle. 
And many people do not know that, but it's also possible to connect GeoServer to another server that is already producing compliant services like WMS or WFS. So, it's possible to pipe through the output of one server over to GeoServer and then benefit from all the cool stuff that GeoServer has under the hood. So, GeoServer can read a lot of data. That is what we have just learned. And make sure to also see the great talks over at this conference where more core members of GeoServer development team will explain in detail what they do and how they do it and the changes of the last versions. But GeoServer not only consumes a lot of data, it also produces a lot of services. So, the most important one probably is the WMS, the smallest denominator that many people can agree on. So, it can produce WMS with WFS for data, web coverage services for raster data, for example, and WPS and a lot more. So, now we know that we can publish a map and a data service. So, are we already done or is it not optimized yet? So, well, it can produce maps. That's what I already said. But by default, it just uses the default styling. So, in order to make an optimized service, you will have to take some time to configure the layout and the looking of your map, of the cartography, basically. So, inside of GeoServer, you use SLD-style layer descriptors to do this. There are other options like Cartus CSS, for example, or other ways to provide your style. But internally, style layer descriptor SLD is always used, which is XML, which is sometimes a P-I-T-A. And this is where GeoStyler comes into play. So, GeoStyler, there is a dedicated talk by Jan Zulemann, colleague from Terrestris, later this day. It's not an application. It's a library. It has, well, basically, it's about two things. It can read and write a lot of styling formats. And it also provides some components that you can use to create new styles. So, in case you want to style some thematic data, for example, you can connect your data source to GeoStyler and then, you know, like, make a nice thematic cartographic style for your map. So, make sure to watch that talk by Jan for more details. So, GeoStyler can help you to create filters and classifications for your map. It has the possibility to define different styles for the different ranges. And it can calculate, so, if you're creating a class-based or a classification and those thresholds are, you know, like, are overlapping each other, then you can have the calculation how many features are in both of them, and so on and so forth. It can be used standalone or it can be used or integrated in basically any web map that you want to do or application. So, it can read a lot of stuff. It can read the QGIS style, for example, the style from OpenLayers, partly at least, MapBox styles, maps of a map files. There's a dedicated talk by Mr. Benjamin Toyscher. I think that was already on Monday, I guess, and it's easy to also create new formats if you want to. So, in case your favorite starting format isn't on this list, just write your own style. It's easy. This is how it can look like. I'm not going to go into too much detail. Again, there is this dedicated talk. It makes it easier to work with this XML and all the others, all the other styling formats. This is how they can see a classification and also a preview where all the classes are being applied on some map. 
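To connect those dots from the client side, here is a small, hedged Python sketch using OWSLib to talk to a GeoServer WMS like the ones just described. The endpoint URL and the layer name are placeholders rather than a real service from the talk.

```python
from owslib.wms import WebMapService

# Assumption: a GeoServer instance exposing its usual OGC endpoint under /geoserver/ows
wms = WebMapService("https://geoserver.example.org/geoserver/ows", version="1.1.1")

# Inspect what the capabilities document advertises
print(list(wms.contents))

# Request a styled map image for a hypothetical layer
img = wms.getmap(
    layers=["topp:states"],              # placeholder layer name
    srs="EPSG:4326",
    bbox=(-125.0, 24.0, -66.0, 50.0),    # minx, miny, maxx, maxy in lon/lat
    size=(768, 512),
    format="image/png",
    transparent=True,
)
with open("states.png", "wb") as out:
    out.write(img.read())
```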
And as we are talking about GeoServer and Geostyler, there is also a Geostyler plugin so that the styling, which has improved greatly in GeoServer, can be made even more comfortable by using this Geostyler plugin. Then you can use Geostyler directly in your GeoServer user interface. So now we're done, right? Aren't we? Because we have a good-looking map now. It's served and styled. Everything is fine. Oh, no, I'm set to say we're not ready because one core feature of all map services should be the performance. And one thing to increase the performance of, yeah, basically static data or ones that doesn't change too often is caching of maps or single tiles. So in case you expect a lot of requests or you have a lot of layers that you want to group together, or if you're already experiencing performance problems or your server is always on the very, very heavy work, then caching of maps might be a good idea. So there's basically two options. Well, that's a simplification already. There's a lot of options that you have to cache your WMS. We will have a very brief look at two of them. One is GeoWebCache integrated into GeoServer, but also possible to run standalone. And the other one is MapProxy. So GeoWebCache, I already said it, you can use it directly from GeoServer. It's a nice tool. It can read a lot of sources. And it will basically on the fly creates your cache while you are, well, someone is using your WMS server. MapProxy, on the other hand, is not integrated into GeoServer, but it can be deployed on any service and it can read a lot, a lot of sources. So it's not bound to the WMS that you have in GeoServer. You can combine different sources. And it also puts out another, yeah, puts out a lot of services, also well documented in a great API. So this is the basic principle connect MapProxy in the middle of this picture to any WMS or tile server. And every GIS client can just use the WMS interface and get the data, but it's faster now because that cache is done by MapProxy. So cool functions are you can create a, well, of your WMS, you can create color changed versions. So one very easy way is to use a gray version of some colorful background map, for example. You can use, well, you can use one cache for several projections. The storage is very greatly optimized. So for example, if you have a worldwide map with a lot of oceans in between, so there should be a lot of tiles where, which basically are blue. But in MapProxy, there will only be one blue tile stored on the disk. And then you'll be presented with that when whenever it's needed. The security layer is quite cool. So you can, for example, clip layers in MapProxy to a boundary, let's say the boundary of, yeah, Buenos Aires, why not? So this is an attempt of a comparison of those two great software projects. Well, you have to look at them both. And since they're both open source, it's no problem to do it. I said a lot of it already. So GWAP cache is built in while MapProxy isn't, but that isn't a big problem. But the learning curve might be harder, a tiny bit harder for people that want to try out MapProxy, but rest assured, the documentation is great. So one thing that MapProxy basically does better than GWAP cache is it is always a fully compliant WMS. And it can also stretch in between zoom levels. So your cache always has fixed zoom levels. 
And if one WMS request would result in a zoom level or a resolution that is between two cache tiles, then MapProxy is able to interpolate tiles in case it is doable without too much loss of information. So a bit oversimplified GWAP cache WMS, in order to work the direct integration, you have to have some specific parameter inside the WMS request, which is tile equals true. Inside of the documentation, there is some way of, you know, like turning this off. But there's a couple of things that you need to do in order to get GWAP cache really, really greatly working while MapProxy was proved easier for us. Yeah, so now it's time for our tilt to take over and show one example of how this can work all together. Okay, thank you, Mark, for the technical presentation of the components and a lot of insights into the components. And I'm going to show you now one example. This is, as mentioned before, our free to use worldwide open street map based WMS. To be honest, it's not only free, there's also premium version. But if you go on the website, you find the URL and you can directly include this WMS service into your applications. And if we switch over to our demo page, we see the map around here and the very simple web mapping clients. And you see it's really nice fast reacting WMS. The tiles are served by MapProxy. In the background, we have a geoserver, we have a lot of styles. Yeah, to be really, really honest, I must say that we didn't really do the styles with Geo Styler because we started the project with the WMS service a little earlier than we started the Geo Styler project. But it's still possible to do things you can see here also with Geo Styler. Yeah, and that's the Zoom Tube, Buenos Aires, the place where we all would like to be in the moment. That's at the end, you can see also a small one. If you go on that page, services, but I think OWS.drestris.de, we've linked it in the slides, you can try out the end, the service and also some other services around. And they're all based on the same technology and quite nice looking, pretty fast services here. I could say and now you're really done. You can really have all the tools and some ideas how to produce and optimize map services or data services out of your data. And yeah, at the end, we can say, okay, everything it's open. So cool software, you can do a lot of it. We've seen that various open source components are quite nice, combinable, although there's a map proxy community, a geoserver community, a Geo Styler community, we can stick it all together into one running environment. And of course, there's often more than one component to reach your goal. Of course, you can use the same software stack instead using Geo Server. You can use human maps or degree or even proprietary software. And I think the presented architecture has already proven its suitability for setting up a good looking, fast, robust map services. I think you've seen the example and I think that stands for itself. Here's Mark again. So thank you for listening to our talk and again our contact if you have any questions or whatever you have. Yeah, well, very nicely done video. Except the end of the video. Sorry, I probably just missed to delete a piece of the video. So there were overlapping voices. Oh, I thought I heard voices in my head. So we're all saying. Thanks very much. On time, let's see if we have some questions from the audience. Yeah, we have at least two questions. I bring them into the banner. So the first question is I read it. 
OGC API: will there be support for the approved Features API in the near future? I'm not sure. Well, maybe you can make something out of this. Yes, pretty well. I have an idea of what the problem is, and so we showed several components, and I think only two components can be considered when this question with regard to OGC API is being considered. So the first one is GeoServer, which we showed to be basically our map-providing or map back end, if you want. And there is already an extension for GeoServer to support the currently discussed and still in shape, well, not finalized API. So if you use GeoServer, there's a big chance that you can install this extension. I can also post the link in the chat later on, I've got it here. That's one thing. And the second thing would be whether MapProxy, our caching solution, or one of the solutions that are possible, whether it will support OGC API Features or Maps or Tiles. So right now, there is no such support, let's say for OGC API Tiles, which is basically the successor of WMTS and stuff like that. It's still being shaped up, and well, also the specification right now. There is no concrete effort that I know of to have it implemented in MapProxy, but this can change. So in case there are enough interested parties, just raise an open issue over there. And as always, and as the speaker before us has told, it's always partly a question of funding. Right now we don't have the immediate demand for it ourselves. So in case someone opens up his hat or lets the hat go around and enough people put in some funding, then we can of course make it happen as a community. Yeah, sure. I think there's even a longstanding open issue on the MapProxy repository for support for vector tiles. But that's vector tiles; it would be interesting also for the OGC API. Yeah, there's a lot of, excuse me, just for interrupting you twice. Sorry. It's hard with the, you know, the lagging and whatever. So there's a couple of open issues or PRs over at MapProxy. So also as always, when I'm presenting something like this, this is not just for consumers. It's also for makers. It's for people that want to participate. It's easy to start, and it's exceptionally easy to start with MapProxy because it's just Python. You know, everybody knows Python. So just dig yourself into that code and help us all by submitting some pull requests or reviewing something and testing it out in your environment; that would be much appreciated. Yeah, sure. And there's another question here. Is it interesting as a city to MapProxy the regional OWS? What should be the interests? Probably, I understand the question as whether it makes sense to cache the WMS of a city. And in general, I would agree on that, because it's not only performance that you win. But it's also that you get less load on your servers, because not every time a request goes to the OWS do you have to produce an image, because your MapProxy just goes into the cache and puts out the image from there. So that's definitely a plus point for that. Of course, it is not always clever to do that, because if the data changes every day, you have more to do with caching than you really have benefit from it. So that depends a little bit. But for any kind of background maps, aerial images that normally don't change so fast, I would definitely say yes, it should be interesting for you. If I may add a tiny note. 
So as soon as you start taking care of your performance of the service that you do, and we always do that, well, most of us do it once they have, you know, like the feature fully featured and done product. Once you start with that, you get sort of hooked on it. So you would be surprised at how many services can be very easily cached without too much do and without too much configuration. So once you've set it up, and even if the cache only helps, let's say 50% of your users, once they are, you know, like it's being filled up, well, those 50% of your users are now happier and they will, you know, like, don't get in the way of them. And most of the time caching is a very, very good solution. And there are, of course, exceptions, like very often changing data and so on and so forth. But maybe it's time to, you know, think it the other way around, like for example, in the web, in the web, HTTP web, the standard for every request is basically to, you know, like, don't let it expire every time that any every resource has to be re-requested all the time. But, you know, all of them have short periods where they are, you know, like web resources are suggesting to clients or browsers, hey, you can, you don't need to ask me any time for the new picture of till, for example, it hasn't changed in the last two minutes. So maybe it's time I could change on that thing. Yeah, caching, caching, caching. And thanks very much again. Yeah, we're approaching time for the next talk till and mark. Yeah, very, very interesting to hear and hope people also learned a lot from you. And, well, we'll be, we'll be talking soon, I think. And we go over to the next two speakers. Okay. Bye bye. Thank you. So here we have Tom, James Banting and Tom Christian from Spark Geo. And in the meantime, you could try to share your screen. I think James will be sharing his screen. Yeah, you can hear me. Okay. And let's see. Well, there, they will provide a talk called there and back again, lessons learned in transitioning from Geo server to met proxy. Again, met proxy. That's good to hear. So it's will be dual presentation and James and Tom, they are from Spark Geo, a very innovative company from British Columbia, Canada. And in short, Spark Geo helps customers to make sense of your spatial data and maps providing analytics insights and development support. And James.
|
At the beginning of this century, the very existence of geo-services based on a uniform API like WMS aroused admiration. Today, having more than 10 years of INSPIRE behind us, this question often no longer arises. With software projects like UMN MapServer, GeoServer, deegree or QGIS Server – to name just a few – there are notable solutions that can be used to transform geodata into standardized services. Once your data is published as WMS (or WFS, e.g.), one can rely on many additional tools, functions and interfaces. Thus a non-experienced user is confronted with many tools but also with the question of which tools can be used to achieve an optimal result for his or her personal task. The talk presents one Open Source toolset for the set-up of geodata services that consists of GeoServer/GeoWebCache, GeoStyler and MapProxy. In my talk I will present one rock-solid possible solution for the setting-up of high-performance geodata services. The presented solution has proved its usability and is the base for the worldwide open and widely used OpenStreetMap basemap service „ows.terrestris.de“. The talk focuses on the OSGeo project GeoServer but will also present the OSGeo Community Projects GeoStyler and MapProxy. The solution is vividly presented by means of a few examples and the talk is peppered with some hints on styling, performance tuning and caching of services. Authors and Affiliations – Adams, Till Jansen, Marc terrestris GmbH & Co KG Germany Track – Use cases & applications Topic – Data visualization: spatial analysis, manipulation and visualization Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57282 (DOI)
|
We are going to have Angelos and Astrid and Nicolas, who will speak to us on the state of the OSGeoLive project. So let me just... Hello. Hello everybody. So in a minute I will share my screen. Here we are. And then we can start. Okay. I hope you can see it now, and it's great to be here at FOSS4G 2021 in Buenos Aires. And we from the OSGeoLive team would like to make a project report and tell you what happened in the OSGeoLive project in the last year. OSGeoLive is your open source geospatial toolkit, and we are from the OSGeoLive team and send greetings to the rest of our team. So let's have a look. First, maybe not every one of you may know OSGeoLive, so we would like to give you an introduction, a short one. On the picture you can already see the OSGeoLive desktop. So it's easy to use. You have a lot of geospatial applications ready to use on OSGeoLive, and it's a GNU/Linux distribution which includes the best free and open source geospatial software all together, ready for you to go. Here you can see the desktop, it's an old map of Buenos Aires, so it fits absolutely well with the FOSS4G this year, and every year we provide data and a desktop from the place where the FOSS4G takes place. And we have around 50 open source geospatial applications on OSGeoLive. They are pre-configured software projects, they are installed, and we also provide sample data sets. For all the projects we have consistent overviews and quick start documentations that help you to find out about the software and give you the first information on how to start working with the project, and we provide translations to many languages already, and you will see later in the demo what we can offer and in which languages you can already use OSGeoLive. So if you want to use OSGeoLive, you can use it in different ways. You can burn a DVD with OSGeoLive, you can create a bootable USB drive, or you can run it in a virtual machine. We recommend that you run it in a virtual machine. And then you can start, and the goal for OSGeoLive is that you don't have all the challenges that the installation sometimes brings with it, and you can try all this great software already and also have this data that you can use immediately. We also care about quality criteria, so we provide established, stable and working software, and the projects are tested before we publish the next release. We take care that there's an active community, and we have a page where we provide metrics. On the screenshot here you can see the summaries of the metrics from OpenHub, and every metric is linked to the project page on OpenHub where you find more information about the community size, about the activity and more. Then we have production and marketing. We have a pipeline, so we have a regular cycle for how we publish OSGeoLive, and we have different groups of people active in this pipeline. So we have the developers who provide the applications and test everything and provide the new versions, then we have the OSGeoLive team that builds and tests the applications all together, then we have the documentation team and the translators and the reviewers. 
Then we have the conference teams so people who run conferences or workshops, they are happy to use OSGO live in their settings and can provide OSGO live on a USB drive, USB stick or use it in their workshops and then we have the website where you can get all the information about OSGO live. So for decision making this is a good point where you can get a great overview on OSGO projects. And here we want to have a look at the download statistics. So this is from the previous version which was version 13 and you can see in the, in two years we had over 30,000 downloads of OSGO live and this shows how the interest it is on the project. And if we have a look on the map and see in which regions OSGO live was downloaded, it also shows you where all on the world it is used. And you have to keep in mind that one download, it could be an ISO or a VMDK, it does not mean that it is only used from one person because with one ISO you could provide a whole workshop room or you could provide thousands of USB drives. So this number doesn't show everything but it shows that it is really a big interest in OSGO live. And we have version 14 now which was published in May this year and here we have already nearly 10,000 downloads already. Talking about OSGO live 14, we would like to remember Malena. This version has a special name, it's called OSGO live 14 Malena because we would like to dedicate this version to our friend Malena who had passed away short before we released this version and we were very shocked and really miss her already. And now I'm passing to Angelos and he will help us with version 14. Thank you Astrid. So now I'm going to talk a bit about what is new in this version 14. Technically speaking we are inheriting stuff from Ubuntu so we have been re-based our distribution to Ubuntu 20.04 LTS. This was a major change because Ubuntu suites from LXDE to LXQT so we had to reconfigure our desktop, figure out how to change the menus so we had a lot of work to do for this new version. The pipeline of packaging is the same which means that we are inheriting packages from Debian GIS which are going upstream to Debian and then those are coming to Ubuntu and to OSGO live. But we are also working on synchronizing the OSGO live PPAs, the repository with new packages from Debian GIS and Ubuntu GIS which means that we are trying to get the latest and most stable packages to the distribution so we were able to upgrade most of the projects in OSGO live like QJS, JDAO, Prod, Prod GIS, Geo Server, Map Server and many other projects. We had new projects that were added. We added PyJet API, Geo Styler, Registry from European Commission. So there are new projects that are joining OSGO live and we are trying to keep it up with the new projects that are joining the OSGO community program so we are trying to include all the new community projects from OSGO to OSGO live but of course we are waiting for other projects to submit their application to be included in OSGO live. Then we have additional Python modules added and we also have our projects that are added so we have a specific interest for data science so we have Python and R and Jupyter notebooks included and those are maintained actively. And we have two versions of the OSGO live. One is the ISO version which is the live version. 
You can run it directly from the ISO or from a live USB and then we have the VMDK, the virtual machine version which has even more software this year because we reached the ISO limits and that means that all the new projects are landing into the VM version only at this moment. You can see a full change log of what is in version 14 in the link provided here. Next slide please. So what else is new? We had a major update of documentation. We have a new command line tutorial thanks to Anok and Astrid for compiling that. We did major improvements in the OpenStreetMap tutorial and we are cooperating now with the Humanitarian OpenStreetMap team thanks to Anok and Astrid. Brian has contributed many, many great Jupyter notebooks and he is maintaining them so that data science people are happy with it and it can be used in classrooms and all around the world. And we added OpenStreetMap data for Buenos Aires since we are virtually even in Buenos Aires right now. Next slide please. More stuff. We added new languages so now we are supporting the languages that you see here, the English, Dutch, Spanish and many, many other languages. We reached the translation levels to 100% for Hungarian but also for Spanish and we would like to special thank Martha Vergara for doing most of the Spanish translations so the Spanish language is again at 100%. So please if you are interested please join us and add your translations. It's very important for students to have the OSGO live tutorials and all overviews and quick starts in their own language so please join us. Let's go to the next slide please. As I already mentioned we had several challenges for version 14. We reached to the new desktop environment and we had many upstream Ubuntu changes that block our development for a while so we needed an extra time to cope with that. Our packaging efforts were a bit slower than usual due to the COVID pandemic and we still need lots of testing so please if you are interested join us and test the distribution before we release it because after we release it is more difficult to change stuff. Next slide please. And also we are now in the cloud era so OSGO live is perfect for the cloud. Most Debian GIS and Ubuntu GIS packages are used in Docker everywhere and also in virtual machines on the cloud and OSGO live has been reported to work and to work fine in many cloud environments. Recently we discovered that it is actively used in ISAs, Diaz infrastructure and specifically Creo Diaz which works over OpenStack so we are very happy that ISA is doing work using OSGO live currently. Next slide please. Yeah and now I am passing back to Astrid. And if we talk more about OSGO live in action we would like to mention Marco Minghi who gave us feedback on our project. He is a scientific project officer at the European Commission and he can see what he says. Open source geospatial software is a key building block of many data infrastructures managed and operated by the European Commission, powering high level European initiatives such as Inspire and Copernicus and the role of OSGO live to teach and promote the use of open source geospatial software has no equivalent. Big thanks to OSGO and the OSGO live team. So this is both a big honor for us and hope that others also profit from OSGO and enjoy to use it. And I can report from the FOSCIS conference which takes place every year in the German language area that we use it there successfully in our workshops and also at FOSFIGI here this year it was already used into workshops. 
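As a taste of what the bundled Jupyter and Python stack mentioned above allows, here is a small, hedged sketch of a notebook cell. The data path and file name are assumptions about where the sample data sits on the disk, so adjust them to your OSGeoLive version.

```python
# Illustrative notebook cell for the bundled Python stack (geopandas, matplotlib).
# The path below is an assumption about the OSGeoLive sample-data location.
import geopandas as gpd
import matplotlib.pyplot as plt

countries = gpd.read_file("/home/user/data/natural_earth2/ne_10m_admin_0_countries.shp")

# Reproject to an equal-area CRS before computing areas in square kilometres
countries["area_km2"] = countries.to_crs(epsg=6933).area / 1e6

ax = countries.plot(column="area_km2", cmap="viridis", legend=True, figsize=(10, 5))
ax.set_title("Country areas from the sample data (illustrative)")
plt.savefig("country_areas.png", dpi=120)
```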
So let's have a look at our roadmap. So we are preparing for OSGeoLive 15, which will be released next year and has to be ready for the next FOSS4G 2022 in Florence, Italy. And we plan to use the Ubuntu version 22.04, and we want to include new OSGeo community projects, so you heard the shout-out already from Angelos, and we plan to include a glossary from the lexicon committee and are working together with them at the moment. We want to write more documentation because there are projects that are hidden. They are installed already on OSGeoLive but have no documentation, so they can get more visibility. For example there's PDAL, which is there but has no documentation yet. Then we want to improve the usability for HOT users. You maybe heard about the OSGeo-HOT memorandum of understanding that was signed, and one goal is to bring HOT and OSGeoLive closer together. We want to improve the translations, and if you would like to support us, please apply with your project, and you are really welcome to get involved. We are a nice team and you can learn a lot and share ideas. And now we want to have a look at the projects, and I pass to Nicolas, who will give you an introduction. Thank you Astrid. So we divide the whole project into 10 major categories. The first one we want to show you is the desktop GIS category. It has the largest desktop applications, the most famous ones like QGIS or GRASS GIS, but there are also less common or less known projects like gvSIG or uDig or OpenJUMP GIS or even SAGA GIS, also embedded within OSGeoLive, so you can find new applications, get access to new algorithms within the application, and access documentation, an overview and a tutorial as well. Next slide please. We also have browser-facing GIS tools like OpenLayers and Leaflet, which are very common now, but you also have Cesium and GeoStyler, which are very new, and server applications and SDI applications like Mapbender, GeoNetwork, GeoMoose and GeoNode to help you build a small SDI within the application. So this documentation is also within the disk; maybe Angelos would like to show you where you can find this documentation and this presentation in the documentation while I'm still speaking. So if you click on the help or open the browser, you get to the web page, and on the contents page you can find the 10 big categories. So we saw the browser-facing GIS, but we also ship web services, the most common ones, MapServer or GeoServer, but we have a large panel of projects that can provide web services, so you can try some with your application and try your processing chain. Next, we also ship data stores like PostGIS; you can do some routing within it because we added the pgRouting project. You can also share rasters with the rasdaman server and use other spatial databases. We also provide some tools for navigation and mapping like Marble, the iD editor and JOSM from the OpenStreetMap community, and other tools. We also have some special tools which are specific to a domain, like the Orfeo ToolBox, which is an image processing tool, but we also ship, as we said, data science tools like Jupyter and R with the geospatial libraries, and we also have some domain-specific GIS like ZyGrib, I'm not sure about the pronunciation, for the weather forecast maps, and we also ship data that you can use within the tutorials, so you have some data for the North Carolina tutorial, and we also have all the geospatial libraries that are used by the projects like GDAL, GEOS, PROJ or JTS. 
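Since the libraries are listed by name, a quick, hedged way to confirm they are all present on a running OSGeoLive session is to import their Python bindings; the attribute names below are the usual ones for these packages but may shift between versions.

```python
# Minimal sanity check of the bundled geospatial libraries from Python
from osgeo import gdal          # GDAL/OGR bindings
import pyproj                   # PROJ bindings
import shapely                  # uses GEOS under the hood

print("GDAL:", gdal.__version__)
print("PROJ:", pyproj.proj_version_str)
print("GEOS:", ".".join(str(v) for v in shapely.geos_version))
```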
You can find more information about the standards that are common to all those tools and allow them to communicate. Thank you. Angelos, I think. We can access all the tools within the Geospatial menu, and you can see, you can find all the tools organized by area. So yeah, I would like to add a few things here, like you can see here that we have all the OSGeo projects marked with a color icon, and then we have the community projects that are joining now with the black and white ones, and then you can see that the projects that are only available in the virtual machine version are marked here as well. For every project here you can find an overview, and translations are available. This is why we are asking for more translations, so that we can have the text in many languages, and also you can find for every project a quick start, so you can actually learn by doing a simple tutorial, a simple exercise within the OSGeoLive DVD. The data that we are using for the quick starts are already bundled in the VM, so it's really easy for somebody to learn how to use those projects even with a small tutorial. You can find some other quick starts on the landing page, which is how to install it, how to create a bootable USB flash drive, how to run it in another environment, and also there's a presentation available here, which is the presentation we are now giving, where you can do it directly from the disk. I think that's about it. If you need to find out the passwords or to see where the data are, there's a link on the desktop. We have the workshops available that are using OSGeoLive, so the workshops that used OSGeoLive in this FOSS4G are listed there, and well, that's about it. We don't have enough time to go through a quick start or something similar, but we can have some questions and answers now. Maybe I can mention some more if we have time. No, we don't. Sure, go ahead. Okay, so we can credit all the people that are involved. We have all these developers that maintain the projects and the documentation team, translators. We have the PSC, which you can see here. We have sponsors that support us with hardware or infrastructure or other things, and maybe it could be interesting for you to get involved, and there you could contact us on the mailing list. You can meet us on IRC, and there are several ways to get involved. You could work on the project, on testing, the website, documentation, and you're really welcome to join. Translation is done on Transifex, and we will have the FOSS4G Community Sprint this Saturday. We will be there with the OSGeoLive team, so you're really welcome to join us and get to know the OSGeoLive project and the team. That's it from our side. Great, thanks guys. Very interesting presentation and a very valuable tool for the community to basically download all these and use these tools right out of the box, for an SDI in a box if you will. I'm looking at the questions. Here we're seeing: is there a date for the launch of QGIS version 4? Maybe related to that, how do you guys deal with versions of the various projects and how do you balance this all? Yeah, that's the tricky part. We are following some, let's say, Debian rules, which means that whenever something is stable and it's uploaded into Debian unstable, we are testing it and we see if it's ready to be shipped. Sometimes we are trying a newer version if available, but we are trying to remain on the stable side of things. This is why, for example, QGIS, which was mentioned, is not on the latest version on the disk, it's one or two versions behind. 
For QGIS 4, I would not expect this to land on the next OSGeoLive even if it is released by the QGIS team. It might take another version to show up. But maybe we can mention one thing. We publish OSGeoLive with all these projects, but in the end you are free to install more programs or install newer versions. So it's not fixed, it's flexible. Excellent. Nice to see the extensibility. Another comment I think: I hope the maplibre.js and other vector tile related OSS projects will be included in the next release. You guys have any comments around vector tile projects? We need to have more volunteers to show up and maintain those projects or make the application for those projects to enter. So what we need is for a maintainer to step up and say, okay, I will, for example, apply to get a new project in OSGeoLive. Then one has to provide the installation script, the quick start and the overview page. So if you have all those three, and if it's open source, we will try to include it in the next version. So we do have a vector tile server included, actually, in version 13. But we definitely need more. So if people are interested, they can help us, because it's so much software, we cannot maintain it on our own all the time. And maybe you don't have to be a developer to provide a project to OSGeoLive. You could be a user of the software and do the work, do the documentation, do the testing and do your part to bring a project onto OSGeoLive. Great. Another question, how does translation work? Is it possible to update the website before a release? I will take that, I will be speaking about translation. So the translation is mostly done on the Transifex platform, like a lot of OSGeo projects. I sent the link in the chat, maybe you have seen it already. So you are free to join us, and we have around 20 languages available. We only ship languages that are mature enough to be shipped; mostly it's when all the overviews are translated that we ship them, so we need enough material in one language to include it. The second question, is it possible to update the website? The website reflects the documentation within the ISOs, so we don't like to change it too much. We update it maybe once or twice a year. So it's kind of stable to match the ISO, and we only fix the typos in between. Because we ship a new version with maybe a new version of QGIS, and it won't be working with a later version, so it has to be stable. But we also have two versions of the website: there is also a GitHub Pages version, which is more recently updated. I hope that answers the question. Great. And just to recap, getting involved is as easy as signing up and adding your project or volunteering to do some documentation or testing. So it sounds like there's a very low barrier in terms of getting involved in this open community and an inclusive project. Awesome. Yeah, that's what we do. Thanks. That's awesome. Great. Thanks guys. Great presentation. Thank you very much. See you soon, to all. Bye bye. Enjoy the conference. Thank you. Sorry.
|
OSGeoLive is a self-contained bootable DVD, USB thumb drive or Virtual Machine based on Lubuntu, that allows you to try a wide variety of open source geospatial software without installing anything. It is composed entirely of free software, allowing it to be freely distributed, duplicated and passed around. It provides pre-configured applications for a range of geospatial use cases, including storage, publishing, viewing, analysis and manipulation of data. It also contains sample datasets and documentation. OSGeoLive is an OSGeo project used in several workshops at FOSS4Gs around the world. https://live.osgeo.org The OSGeoLive project has consistently and sustainably been attracting contributions from ~ 50 projects for over a decade. Why has it been successful? What has attracted hundreds of diverse people to contribute to this project? How are technology changes affecting OSGeoLive, and by extension, the greater OSGeo ecosystem? Where is OSGeoLive heading and what are the challenges and opportunities for the future? How is the project steering committee operating? In this presentation we will cover current roadmap, opportunities and challenges, and why people are using OSGeoLive. https://live.osgeo.org Authors and Affiliations – Tzotsos, Angelos (1) Emde, Astrid (1) (1) Open Source Geospatial Foundation Track – Software Topic – Software status / state of the art Level – 1 - Beginners. No specific prior knowledge is required. Language of the Presentation – English
|
10.5446/57283 (DOI)
|
And the presentation, can you see the presentation? OK, I'm going to start speaking about professional multi-user editing with gvSIG Desktop. The latest final version of gvSIG is 2.5.1; you can download this version from the gvSIG website. We are working on the new version, gvSIG 2.6, and you can download the development builds from the gvSIG website too. I'm going to speak about the new functionalities for editing. I'll start with several functionalities that are not specifically about editing but are related. For example, the layer display has been improved in the last version, gvSIG 2.5.1: new options have been included where you can manage the visibility of the layers. About element selection, there have been two new functionalities. For the select-by-rectangle tool, if we press the Shift key while drawing the rectangle, we get only the elements that are completely inside the rectangle, so it's a new tool that is very useful when we are editing a layer. Also, for simple selection, if we have several elements that overlap, we can press the Shift key and click on the elements, and we get a window where we can select the elements that we wish. So these are two new tools that are very interesting for editing. Another one is the angle query. We had distance and area queries, but now we also have angle queries, included in gvSIG 2.5.1. I'm going to show you a video. For example, this is a real case: a municipality that manages its horizontal road signs. They have a template with this information, with all the different types of arrows, signs, et cetera, and they can copy the elements to the main layer. After that, they can get the angle between the street line and the arrow. For example, here we have the street line and the arrow; we get the angle at this place, and now we can rotate the sign to be parallel to the street. So this is our first video. I'm going to continue, and now I'm going to speak specifically about the new advanced editing tools. For example, the expression manager. There is a new expression manager where we can apply different expressions, not only simple ones but also advanced ones. We can apply these expressions for element selection, for the field calculator, et cetera. One example: we can select all the elements that are within a given distance from the elements of another, different layer. We can use the functions that you can see here, such as intersects, within or disjoint; there are a lot of functions that we can use with this new expression manager. About data copy, we have two new tools. For example, now we can copy alphanumeric data from one element to another one. It's very useful if we are drawing a new element and we want to keep the same attribute data as another element: we can copy the alphanumeric data from that one to the new one. It's very, very useful. The other data-copy tool is that now we can copy geometries onto an insertion point. We had the option to copy to the original coordinates, but now we have a new option: we can copy onto an insertion point by clicking on the view. We also have, about point insertion, the ability to capture coordinates.
And we can save these coordinates to use in other tools, for example for editing or for geoprocessing. We can save these points with their coordinates and use them in other tools, like editing or geoprocessing. Another improvement is the use of expressions while editing. For example, we can insert points with relative coordinates using this type of expression: if we use a number, 0 is the last point, and 1, 2, 3, et cetera are the second-to-last, third-to-last points and so on. It's very useful when we are drawing a polygon, a line, et cetera. We can also use the saved points with this expression. And we have included a new expression called point-by-angle, where we can work with polar coordinates. I'm going to show you a short video about that. For example, we use the point-by-angle expression with the last point, a distance and an angle, so we can get this new segment. We can use 0 for the last point, but if we use 1 or 2, it would be the second-to-last or the third-to-last point, for example. Here we have another different expression: we can use distance and angle, and we can also use not the point but its coordinates, x and y; the first one is the x of the last point, and the second one the y of the last point. So we have different examples of the point-by-angle expression. I'm going to continue. New tools have been included for drawing circles and circumferences for polygon or line layers. For example, now we have two new tools: from two tangent lines and a point, and from the tangents to two geometries and a radius. I'm going to show you a short example. In this video we have two geometries, and the distance between them is about 63,000 meters. If we use this new tool and enter 25,000, for example, it's not valid, because the distance is more than double that. If we enter another value, 40,000, we get the new geometry. And if we set the active layer to different layers in the table of contents, for example lines and polygons, we can apply these tools across different layers. For example, we have these two elements that are in different layers, and now we can apply these tools on them. That's another example. About arcs, we have new options. When inserting arcs in gvSIG we had "from the center and start and end points" and "from three points", but now we have different optional parameters: for example, the radius, start angle, sweep angle, and direction, clockwise or counter-clockwise, et cetera. So I'm going to show you a short video where we can see some of the different options, selecting the radius, the start angle, et cetera. This is the first example, but now if we open the contextual menu, we can see the different options that have been included in this tool. Here we can see the radius; we also have the start angle and the sweep angle, so we have different options, and the direction. You can see that we have selected a start angle, and there are different options. I'm going to continue with new tools. For example, for ellipse and filled ellipse in polygon and line layers, we have created new options with the axes and the center. For the snapping tool, we have a new feature. In gvSIG 2.4, the previous version, we had to go to the preferences of the gvSIG application to open the snapping options: final point, nearest point, central point, et cetera.
But now we have included a button in the toolbar where we can open this window and keep it open while we are drawing, so we can change the type of snapping at any moment when drawing a line or a polygon, et cetera. We also have the option to apply snapping to different layers of our view. We have also included parallel geometries for line and polygon layers, and a very important improvement is that we can click on the screen to select the side where we want to create the new parallel element. Before, we had to choose right side or left side, but now we can select it by clicking on the screen. Other tools that have been included are Trim and Extend, Align, and Autopolygon, applied across different layers. And finally, the latest great novelty in gvSIG 2.6 is the version control system. This is a very interesting tool included in this last version that allows us to manage changes in vector layers. With this tool, we can edit vector layers and recover the state of the geometries at a given time, a given date, or a given version. It is based on the centralization of information shared between different users. We connect to the server, download the layer to a local copy, edit, and upload only the changes to that layer. This is the main problem it solves: two users read the same file, one of them starts editing, then the first one uploads the changes, and the second one overrides the first version, so there's a conflict that we need to resolve. The solution that was rejected is locking the layer; it was a problem because only one person could edit the layer at a given time. With the adopted solution, the copy, modify and merge model, several users can edit the same layer at the same time, and only the geometries that have been modified are uploaded to the server, so conflicts happen only rarely. This is the example of the menu: we have the version control system, VCSGis, and all the tools that have been included in this new feature. Here we can see an example with all the geometries that have been modified, and we can upload the changes. There is a workflow where we upload only the changes, and if there is a conflict, we can select what we want to do: we can go back to a given date, a given version, et cetera. And I'm going to show you a video. It was a 45-minute video, but I have summarized it in roughly three minutes. So we connect: at this point we connect to the repository, and we can select between two options, a local repository or a remote repository. For example, in a municipality they can have a remote repository, all the users connect to it, and a local copy is created on each computer; that is the copy we will edit. So we are connecting; in this case we could select a remote connection, but we are selecting a local repository. We connect to the data model to select the data that we want to download, and now we are loading, we are adding layers. In this case, we are adding base layers and the horizontal and vertical signs of this municipality. We are loading all these layers. Now we are going to start editing; we will edit the horizontal sign layer.
We can also create our own form, with mandatory fields and with fields that have different parameters, for example combo boxes. These combos can be created from an external table, so if we want to add a new option to the combo, we can add the new record directly in that table. So we are editing this arrow. We can move geometries, edit the vertices of the geometries, create new geometries, and fill in the alphanumeric information. We can get the information, and if we have an error or a problem with one of the geometries, we see it in red. If we go back to this geometry, we can see that there were mandatory fields that we hadn't filled in, so we can see in an easy way which elements had problems. Now we can fill in all these fields, and finally we can save the editing. So that's one example; after editing, we create new geometries, et cetera. After that, this is another example with the vertical signs: they are connected to a bracket, and we can change the rotation, the image of the vertical sign, the position, et cetera. After that, we can finish editing, and then we upload the changes. Now we are going to see that we have all the changes here: in green, the horizontal sign layer, and in blue, the changes that we have made on the vertical sign layer. If we upload the changes, they will be uploaded to the server, and if any geometry is in conflict with another user who has edited the same geometry, we will get a message in the repository tab for this geometry. So we can decide whether we want to keep our geometry or the geometry that has been modified by the other user. This is an example of what we can do with the version control system in the new version of gvSIG Desktop. If we upload the changes, we will see that the table is updated. So that's an example, and that's all; these are the new features in the latest and in the new version of gvSIG. If you have any doubt, you can ask me. You can download both versions from the gvSIG website, and if you have any doubt, you can ask now or write to this mail address. Thank you very much. Thank you, Mario, for your talk. And we have one question. Yes. Assuming that we have an existing Postgres/PostGIS database that is edited by QGIS, can we use the version control system of gvSIG on top of the existing database? I would like to know the compatibility with a PostGIS database, which is usually used by QGIS. Thanks. Yes, you can connect to PostGIS from gvSIG Desktop, so you can edit this layer and use the version control system. As an example, gvSIG Online works with PostGIS, and we can connect to that PostGIS database and edit directly on the database from gvSIG Desktop. So it's the same with QGIS: you can edit with QGIS, but for the version control system you connect from gvSIG Desktop. So you can use it. Great. Thanks. For now, we have no other questions, but we have five minutes, so we can wait a little bit until the next talk. Another one: what if we deal with large geometries and multiple users edit the same geometry, and all are correct changes? Is it possible to resolve conflicts so all are accepted? Yes, I haven't tested it with big geometries, but I think that it will work. For example, the municipality that contracted the development of this new tool...
We have developed this new tool for a municipality. They are working with small geometries in general: blocks, horizontal signs and vertical signs, in very big layers. But I haven't tested it with big geometries; I think it will work well, though. OK, great. Thanks. Well, I think this would be it for now, and we'll go on with the next talk. OK, thank you very much. Thank you for your talk. Thank you. See you around. See you. Bye.
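The point-by-angle expression described in the talk is essentially a polar-to-Cartesian conversion from the last inserted vertex. The sketch below is not gvSIG's own expression syntax, just the underlying geometry in plain Python; the angle convention (degrees, counter-clockwise from the positive x axis) is an assumption and may differ from gvSIG's:

import math

def point_by_angle(last_x, last_y, distance, angle_degrees):
    # Return the point at the given distance and angle from the last vertex.
    # Angle is measured counter-clockwise from the positive x axis, in degrees.
    angle = math.radians(angle_degrees)
    return last_x + distance * math.cos(angle), last_y + distance * math.sin(angle)

# Example: a vertex 40 000 m away at 30 degrees from the last point (0, 0).
print(point_by_angle(0.0, 0.0, 40000.0, 30.0))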
|
During this presentation, the participants will discover the new version control system and advanced editing tools that have been developed for gvSIG Desktop. With the new version control system, users will be able to edit vector layers and recover the state of the geometries at a given point in time, which is very useful for projects such as vertical and horizontal sign management on a road or in a municipality. In addition, several advanced editing tools have been included in gvSIG in order to increase the potential of vector editing. The version control system is a powerful new tool developed for gvSIG Desktop. It is based on the centralization of information to be shared between users and, unlike a normal server, it remembers the changes that have been made to the data. It should be noted that it stores not only the information itself, but also the modifications users make to it. Apart from the version control system, the last version of gvSIG Desktop includes not only new functionalities for advanced editing but also improvements in the existing tools that have increased the potential of the application. The most outstanding novelty is the new expression manager that has been applied to the filter tool, the field calculator and the editing tools, allowing the selection of elements or the filling in of records in the table based on geoprocessing functions. Authors and Affiliations – Mario Carrera. gvSIG Association Matéo Boudon. gvSIG Association Track – Software Topic – Software status / state of the art Level – 2 - Basic. General basic knowledge is required.
|
10.5446/57284 (DOI)
|
Okay, we'll get going with the next talk here. We've got Michele Tobias and Alex Mandel. Michele is a geospatial data specialist at the University of California, Davis, and she's also a board member of OSGeo. Alex Mandel is a geospatial engineer at Development Seed, and Michele and Alex have collaborated on a QGIS plugin to add a geospatial dimension to research citations; they're going to share some lessons from their experience to help break down the barriers to creating your own QGIS plugin. The talk is pre-recorded, but Michele and Alex are here today, they're here right now, it's 4:30 in the morning in California, so show some mercy, but anyway, they're here to answer some questions after the talk, so yeah, please add your questions to the Venueless platform. So I'll fire up the video here. Welcome everyone, and thanks for coming to our talk, which is entitled QGIS Plugin Development Is Not Scary: What We've Learned from Literature Mapper. So who are we? My name is Michele Tobias, I am the geospatial data specialist at the University of California, Davis DataLab, and I'm the mastermind behind Literature Mapper, I guess you could say. Most of my coding for this project has been in the realm of the text parsing that needs to go on in making this plugin work, and I just want to confess I'm really bad at Python. I don't use it on a daily basis, so every time I come back to it it's like starting over; hopefully that's a little bit of an inspiration, that you can do this even if you're not great at a particular language. But I am a frequent user of QGIS, R and SQL, so I do have a programming background; Python just isn't my daily jam. My co-presenter today is Alex Mandel, a geospatial engineer at Development Seed. For this project he advises on things like feasibility; when I come up with grand ideas, he helps me figure out how this is going to work. He did most of the coding for the interface functions and some of those kinds of things, and he is much better at Python than I am, so that makes us a good team for this. And I just want to have a quick disclaimer that Literature Mapper is not a product of UC Davis or Development Seed; this is something we develop on our own time, so it's not paid for by our respective employers. So what is Literature Mapper? It is an experimental plugin for QGIS, it's written in Python, and it is currently available in the plugin repository. It is a tool that connects Zotero, which is an open source citation manager, with QGIS, which you guys are all familiar with, I'm sure. It allows users to georeference citations through QGIS and stores the location information alongside the Zotero database, so it actually puts it inside Zotero and stores it along with all of your citation information like author, title of the text, publisher, year and things like that. It keeps all of that spatial data together with the citation information, so everything stays in sync and you don't have to worry about having two files, your spatial data separate from your citations; the idea is it keeps everything together. And we have a little screenshot of the tool itself in action: you can see, well, maybe not because it's really small, but there are columns in there for all the things that you might want in a citation, such as the title, and then there's also a geometry column. So this is sort of the workflow for Literature Mapper, how things work under the hood. Everything is stored in Zotero.
We use the API from Zotero to retrieve a citation, then we use Python to parse that into a table for QGIS, and then we continue using Python to generate the locations, which you digitize by hand; you can't automate this process, it needs to be done by hand. So we use Python for that, because that's what generates the interface, and once you've generated your locations, we send them right back to Zotero with the API. So that's the general workflow for this particular plugin. A little bit of background on Literature Mapper: academic journal articles are often about a specific location. Things like plants and animals, geology, social justice, history: all of these things happen in a location, and that location actually matters. You know, I'm a biogeographer by training; plants grow in a certain place for a certain reason, and that location can be really important to understanding any information that I generate with research about that particular species, just as an example. So we might actually care about the spatial distributions of the studies that people are publishing. For all of this academic literature, maybe it matters where the studies are generated, maybe it matters where certain aspects of history took place, and we actually want to see that on a map. But there's been no way to do that; there's been no method to connect location data and your citation manager. And that became a problem for me personally when I was working on my dissertation. So our team here identified a need that had no available solution, and that's really key to what we're going to be talking about today. Again: we identified a need that didn't have a solution, so we're not solving something that already has a solution; you've got to come up with something that is new. Our building process in terms of creating this plugin was: we started with an idea. The idea actually was this piece of paper, which is a map, a really ugly map that I printed out many, many years ago and started marking locations of studies on in pencil. I have to confess that I kept tabs on this piece of paper until last year, and now all I have is the scan. I don't know where the actual piece of paper is; it's probably in my physical office that I haven't been to in almost, sorry, two years. Anyway, this started out as pencil annotations on a piece of paper, and we decided that we needed to make this an actual tool, right? So we started prototyping it. We made a proof of concept of a plugin that kind of did what it needed to do, and we kept iterating on that until we had code that was ready for the QGIS plugin repository. And then we keep developing and adding things to it, and now it's a tool that's actually gaining steady users. Even though you can always make stuff better, people are actually using it for its intended purpose, and that's kind of cool. So it's been through this whole process, from a piece of paper with pencil drawings on it into an actual valid research tool. So next up we're going to tell you about how to prepare for this, how to get going on making your own plugin.
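To make the Zotero-to-QGIS workflow described above a bit more concrete, here is a minimal, hypothetical sketch of only the first step, reading citations from the Zotero web API with plain Python. The library ID and API key are placeholders, and the plugin itself handles all of this (and the writing back of coordinates) inside QGIS:

import requests

LIBRARY_ID = "123456"             # hypothetical Zotero user library ID
API_KEY = "your-zotero-api-key"   # hypothetical key created in your Zotero account settings

# Fetch the first 25 items of the library as JSON from the Zotero web API.
response = requests.get(
    f"https://api.zotero.org/users/{LIBRARY_ID}/items",
    headers={"Zotero-API-Key": API_KEY},
    params={"format": "json", "limit": 25},
)
response.raise_for_status()

for item in response.json():
    data = item.get("data", {})
    # Title and date come straight from the citation; the geometry is digitized
    # by hand in QGIS and written back alongside the item by the plugin.
    print(data.get("title", "(untitled)"), "-", data.get("date", "n.d."))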
And the first really big one is when you identify a need that you want to fill, you have to be very specific about what that need is and be very clear about what does not quite fit into that because you're going to come up with lots of ideas and they might be related to your core need, but if they're not really critical, you want to put them aside and make a list for later so that you can actually get your tool working. All right, so then the first thing you want to do is not even identifying a need, you want to look at what already exists that's similar or related and how closely does it solve or not solve your problem. And then if you do find some things that are similar, you really need to try them out and see do they really not work for what you're trying to do? Is there really a reason why your particular way of doing or solving a problem is uniquely different that it requires a whole new tool? Or is there an option to maybe just add a feature to an existing tool or enhance an existing tool to make it also meet your need in addition to whatever else it was already doing? Okay, and then once you've decided you're going to make your own tool or your own plugin, think about the branding for it a little bit because you want a name that's unique and you want a name that clearly aligns with what your tool does and you don't want that name to be similar to anything else, whether it's a related tool or not. You especially don't want it to be a name that's similar to an unrelated tool so when people are searching the internet, they find things that have nothing to do with what you're working on. But you also want to avoid, as in one problem we discovered, which is there's a very similar tool out there called Journal Map, which we came across a few years after we started, that does something similar but not quite the same. And the names are somewhat confusing and easy to mix up. And so now we have to work harder in order to differentiate our brand and our plugin to make it clear what our tool does versus what their tool does. And at least the good thing is now that we have communication with the other development team, we can jointly do some of that. All right, the next part of this is we're going to encourage you to make it happen. This is definitely something you can do. So some of the tools in my arsenal for getting going and developing a product that actually functions and does what I want it to do is I often will just draw things out. I actually have a notebook, you can see a screenshot or not a screenshot, but a scan of my notebook here in the corner. Draw out your workflow. What should it do? What pieces of information should move from one part of the code to the other? What should the interface look like? So in this picture you can see I've actually drawn out what I want my interface to look like before I actually started putting it together. And then you can see I've got some notes about what the button should do and what the code needs to look like under the hood. And then I've actually coded it and created the actual interface. And this is a screenshot of the functioning interface that does something very similar to what the notes in the notebook have. So I like to draw things out. And it can be a really helpful way to see like, is this really what I want it to do? I don't want to spend time building it until I know that it's what I want. And I'm on the right track. The other thing is this might be the scariest part. You probably are going to have to level up your skills. 
So there are a lot of free resources out there to help you learn what you need. For QGIS Python plug-in development, you're going to need to know Python. You're going to also need to know PyQGIS. That's the Python tool that works with QGIS. There's also information, a lot of good information in the documentation about the QGIS plug-in development workflow. So how you actually create a plug-in and what are all the bits and pieces that you actually need to make that go. And then also you'll need to learn how to use something like GitHub or some other version control system. And you really want that because that will help you. It's got great tools for not only versioning your code, but also tools for in GitHub's case in particular, there's things like the issues board where you can keep track of ideas and things that need fixing or you can use branches and things like that for development. It's just a really helpful tool to have. But use whatever version control system you like. I just highly encourage you to use one. And then specifically for literature mapper, we actually had to learn how to use the Zotero API. So they have an API that they use to work with various tools that are associated with Zotero. So we could take advantage of that. And we had to learn that system. And it's not super complicated, but it was something that it takes time to learn and you have to be open to learning these kinds of new skills in order to make your plug-in work. And whatever plug-in you have in mind, you'll probably have something similar, some domain knowledge that you'll need in order to make your plug-in work. And you can see my notes here. Again, a clip from my notebook of trying to learn the Zotero API and making notes about what I needed to do. And I just find that helpful to have it on paper. But use your own system, whatever works. And then the next piece of advice, while you need to level up your skills and all likelihood, don't get stuck in the preparation phase. Learn the basics and move on because you could spend forever thinking, oh, I'm not that good at Python. I need to learn more. I need to learn more. But the truth of the matter is you're going to learn a lot as you develop your plug-in. So learn the basics and get moving on making your plug-in. You definitely don't need to be good at Python to write useful code. It doesn't have to be pretty. It just has to work. So don't stress about that. There's always going to be someone that tells you your code doesn't look right or it's not structured right. Ignore them. Just write code that works and you'll be fine. The other thing is to start small and add incrementally. So don't try to do everything at once. Don't try to make all the code at once. You're going to make yourself crazy hunting down bugs that way. Start with something like a blank plug-in that doesn't do anything. It just adds it to the QGIS. Great. You've done something. Then maybe add a blank interface. And then once you get that working, add a button and then make the button do something and just keep adding piece by piece by piece until you get what you want. And then again, I've got a screenshot here of the plug-in itself and a clip for my notebook of outlining what this interface should look like just on paper and then making it happen. And then some notes about what are the variables that go into each of those boxes. So you can see that this is totally possible. It starts on a piece of paper and it eventually becomes code. So the next part of this is telling people. 
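For readers wondering what the plugin development workflow mentioned above actually produces, this is roughly the minimal skeleton (simplified; names are illustrative, and Plugin Builder will generate something similar plus the required metadata.txt). QGIS looks for a classFactory() function in the plugin package's __init__.py and calls initGui() and unload() on the object it returns:

# __init__.py of a minimal QGIS plugin package (a metadata.txt file is also required).
def classFactory(iface):
    return MinimalPlugin(iface)


class MinimalPlugin:
    def __init__(self, iface):
        # iface gives access to the running QGIS application.
        self.iface = iface
        self.action = None

    def initGui(self):
        from qgis.PyQt.QtWidgets import QAction
        # Add a single menu entry when the plugin loads.
        self.action = QAction("Say hello", self.iface.mainWindow())
        self.action.triggered.connect(self.run)
        self.iface.addPluginToMenu("&Minimal Plugin", self.action)

    def unload(self):
        # Clean up the menu entry when the plugin is unloaded.
        self.iface.removePluginMenu("&Minimal Plugin", self.action)

    def run(self):
        self.iface.messageBar().pushMessage("Minimal Plugin", "Hello from QGIS!")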
So when you're working on these plugins, it's all great and fine to do it for yourself, but the biggest benefit you can get is by sharing it with others. This is just the underlying principle of the open source community: if you share with others and they share with you, collectively we build all sorts of great things for each other all the time. And so one of those things is you have to let people into the process. You can't just hide your code and keep thinking, oh, I'll release it when it's ready, because that "ready" moment may never come. So we encourage you to make an online repository; in this case, we use GitHub. Then we released the alpha versions of the plugin to the QGIS plugin repository. We marked it experimental, just so people were aware that it wasn't a polished product. And we made a really straightforward one-page documentation website and put it out there. That was one of the biggest first steps in making this something useful, because now there's an opportunity to get feedback and for more people to test, and the more people test, the more chance you'll find bugs that need fixing, and hopefully bring in other people to help make it even better. Then there are some really good ways to make all of this public communication better. The first key one is: write some documentation, because people need to know how to use your tool. If they don't know how to use your tool, it's not going to gain any traction. Once you've got some documentation, and it doesn't have to be a lot, just the basics, tell people about it. Put it on social media, write a blog post or make a little web page about it. Go to a conference like FOSS4G and tell people about it. And if you're an academic, write an article. So the last thing we want to leave you with is the encouragement that you can do this. This is something that I think most people could do with the right skills, so I just want to encourage you to give it a try. Make your own plugin and see what you can do. And with that, I guess we're ready for any questions, and we're actually going to be live at FOSS4G to answer questions now. So thanks, and I hope you do give this a try and let us know how it goes. We'd love to hear from you. Okay. Thanks a lot. That was really cool. And you're both on mute. Alex, there we go. That was cool. I loved the collaboration. You both had some really great advice there, very encouraging too. So thanks for that. So yeah, let's jump into a few questions. The first question here is: in your plugin, what does location mean? Is it author affiliation, study area, or editorial location? That's a really good question. And the answer is: you get to decide. I would encourage you to make sure that you make that decision upfront and not mix your geometries, so you know what they mean. In Zotero, you can group things into folders or groups of citations, so you could decide for each one of those folders what location you're marking. But yeah, you could make it anything you wanted, actually. Okay. We've got a couple of people asking about tutorials. Do you have any suggestions for good tutorials for studying the basics of developing QGIS plugins, or any links? Yeah, so we mostly use the documentation that's in QGIS itself, as well as the PyQt documentation.
And then for all QGIS plugins, there's a great plugin out there called Plugin Builder that creates the basic plugin for you; you just have to add stuff to it. Use it. Don't make all the files from scratch yourself. Have it build the template out for you and then just start filling things in. Yeah, I just dropped the link in the chat, which is probably a little early, not synced up with the feed, but there's the link there for the QGIS Developer Cookbook, which we made a lot of use of. It's got good explanations, so I would check that out. I think that was useful. I know somebody mentioned that a lot of tutorials aren't free, but this one is. Right. Great. Okay, cool. So this one's asking about the built-in QGIS Python console editor: it feels extremely limiting compared to other IDEs such as VS Code. Do you have any tips for effectively developing the plugin within QGIS? I don't think that even existed when we started writing the plugin. So we use whatever code editor we want outside of QGIS, and then there's another plugin that automatically reloads your code as you're working on it, so you can make edits to your code, hit refresh, and use your plugin right away in QGIS. There are also profiles in QGIS now, which let you switch between setups, so you can have the release version in one profile and the developer version in another profile, and you can toggle between them to see if you broke a feature or what the difference is and how they behave. Very cool. Here's one: do you envision having a Mendeley plugin someday? What made you go with Zotero over Mendeley? So the honest answer is: because I use Zotero. That's what we started with. Also, Zotero is really well documented and it's open source as well. Alex, do you know if Mendeley is open source? Someone in the chat says no. That's what I thought. So Zotero is an open source tool, and that's the main driver and the reason why I was using it in the first place, also because there were problems. I used to use EndNote, but there were problems with a change in version that was going to cause trouble, so that's why I originally switched to Zotero. Again, Zotero is open source, the API is really well documented, and the fact that you can add things to your account with the API was a big driving factor in what made this work. That's why we chose it. However, if you go to our issue tracker, we do have an issue for developing this Literature Mapper tool with Mendeley and other tools. So if anybody has experience and thinks they can contribute to that, we'd be happy to have help solving that particular issue. Right now I don't have the time to do that, but we'd be happy to have help trying to figure out how to do that if anybody has the time and skills. Okay. Here's one, changing gears a little bit: how do you get over possible "this is not good enough" feelings and thoughts? That's a really good question. I don't know that I've overcome it. I have complete imposter syndrome this morning, sitting here, possibly because it's just before 5 a.m. Pacific time, but also, you never feel like it is. You just put it out there and see how people react to it. And you've got to be brave. Just do it. You'll never feel like it's right. You'll never feel like it's good enough. But just try, see what happens. I think you'll be surprised. People, especially in the open source geo community, are way more supportive than you might expect the general world to be. So, it's okay. Do it.
Super supportive community. Yeah. Here's a, probably this is a good one to finish up on. How did you end up working together on this plugin? That's a really good question. We actually have been, how do you say this? Essentially, we're married so that works really well. We are in the same house. We are always working on stuff together, like helping each other with things so that helps. Right. Okay. Well, that, is there anything else you want to share before we wrap up? I guess just encouragement. Like, if you're thinking about doing this, just jump in. Try it. You know, give it a go. Yeah. Okay. Well, cool. Thanks so much for that wonderful talk. Really appreciate it. And yeah, thanks again. Thanks everyone. Okay.
|
This is the story of how one QGIS plugin came to be and the lessons we learned along the way. The Literature Mapper QGIS Plugin was created to fill a need: academic journal articles are often about a specific location (this is common in fields such as ecology, archaeology, and history), but there was no method for connecting location data to a citation manager. We built the tool we needed from the idea stage (a paper map hand annotated in pencil), to prototype, to freely available code in the QGIS Plugin repository, to a tool that is steadily gaining users. In this talk, we will describe our experience developing the Literature Mapper QGIS plugin – all the ups and downs – to explain the process and encourage more people to try making their own plugin. Michele Tobias and Alex Mandel both hold PhDs in Geography and have worked in the geospatial field for 20 years, mainly working with open source tools. Both are active members of OSGeo. The goal of this talk is not only to give people practical skills and advice, but also to offer encouragement to folks who just need a little boost to try something new. Authors and Affiliations – Tobias, Michele (1) Mandel, Alex (2) (1) University of California, Davis - UC Davis DataLab (2) Development Seed Requirements for the Attendees – Documentation: http://micheletobias.github.io/maps/LiteratureMapper.html Repository: https://github.com/MicheleTobias/LiteratureMapper Academic Publication: https://journals.sagepub.com/doi/full/10.1177/11786221211009209 Track – Software Topic – Data collection, data sharing, data science, open data, big data, data exploitation platforms Level – 2 - Basic. General basic knowledge is required.
|
10.5446/57285 (DOI)
|
Hello everyone, welcome to FOSS4G 2021. This is Juan Manuel Castro in the morning, and next we will be having Marco Montanari presenting "Serious Stack for Non-Serious Maps". So it's a joke, Marco. Hello everyone again, and welcome to the second presentation. This is way less serious than the previous one. This project starts from a collaboration between me, as Open History Map, and the team of what in Italy we call a ludoteca, a board game library. We ended up collaborating with friends from university, and the idea was to explore the concept of non-serious maps. And well, everybody's passion for maps started somewhere. For me it was probably the map of Zelda that made me love the transformation from the specific world, where we were in a specific block of the map, up to the global view of the map, where we had an overview of the situation, of where we had to go and how to get there, and we had to plan how to move in that world. It was beautiful because it was simple and at the same time already full of narration. More recently there's been a beautiful game-ish thing, 911 Operator, which downloads and uses the data from OpenStreetMap to have the player deciding and managing a whole fleet of cars, motorbikes and trucks on a real map, where events happen randomly and the player has to choose and make decisions based on space. Then came the lockdown, and during the lockdown our board game house was completely closed, and we had to stay at home for one and a half years. We started reopening a few weeks ago, and during this lockdown Asmodee, the game publisher, started publishing parts of its games for remote play. So we started thinking: why do we have to rely, for remote play, on just an image, a PNG file, to work with a map, if we have a real map of the world? For example, this is a beautiful game, Sherlock Holmes Consulting Detective, with all of its expansions, and our idea became: why can't we just transform this into an interactive thing for players to work on, to play with, to interact with the game that they already love? Because the game itself is really beautiful and favors the exploration of the world. So we started digitizing the maps, as you can see here, and placing them correctly on London, and it's beautiful because the maps are almost perfectly alignable, besides being slightly more artistic and having some of the elements moved for artistic reasons. Then we started collecting additional elements: the pages of the London Times that are part of the set, the gazetteer that is already in the game, and it's all publicly available imagery. And then we started saying: why can't we just add additional elements?
Why can't we just add points to the map, so that the player can not just explore a location, but also get into the location and see the Wikipedia article connected to the Diogenes Club, or St. James's Palace, or a church, and so on and so forth? And so, from just a set of files dropped on a server by a publisher, it became something more: a way to explore the world. Several players from our game library started using this platform, and other fans and friends as well, and they are using the platform to do remote plays whenever they want. Because, again, the game is very interesting and requires interaction between players, but not physical interaction; nothing requires pieces of cardboard to be moved on a table, it's all based on interaction and decision making. And this is beautiful because we had the ability to help them live a better time during the complicated, not horrible per se, but complicated period that was the long lockdown. The other beautiful aspect, as a side effect and as a fan of historical maps, is that on the same Sherlock map, at justplaybow.it, you can also find a map of another part of the world, which is not covered by the games, because we were starting to see if it was possible to locate and use other places as well, to be able, for example, to get local narrators to expand on the concept of the game and work on that. Anyway, as a result of this, we had created a small experiment where people start to play with maps and with these kinds of concepts. And since we're all fans of role-playing games, RPGs, D&D and so on, we started thinking: why can't we just use the technology and the stack that is behind Open History Map to do fantasy maps? So we started working on a small side project that is Open Fantasy Map: RPGs and more, based on public-domain and free-to-use maps and on custom designs made by artists. We have been in contact with a few artists that already create maps for RPGs and for major publishers, and we wanted to create a tool to help these maps be navigable by the users, customizable by the users and transformable by the users. This is the most interesting aspect, because given a moment in time, an instant in a gaming world, we want to be able to have a user split the timeline off from the main, well-established timeline, so that his players become part of the events of this world. This is one of the main issues that we discussed and that was interesting in our exploration of this aspect. And so we could have a more open world where players have real impact. If the players want to mine a whole mountain, they can do that somehow. This has impact both on pen-and-paper games, if they use a digital support to represent the map, as well as on massively multiplayer RPGs. And again, here is the architecture from Open History Map that we wanted to bring into Open Fantasy Map, and to be honest, most of it is absolutely fine.
The only element that we absolutely don't need is the historical street view, because obviously there are no photos of either Faerûn or the outer realms of Cthulhu or the worlds of Numenera. So it's not a matter of street-view elements; it's a matter of having an overview of the place. This means that we have the map support, the tile server support, the import support and the caching support already available. We have the event index already available, which is a mess, because obviously the events in a fantasy world are way more scarce and sparse over time than in the real world. And this brings us to a fascinating side effect of these gaming applications of map data: these worlds are defined and empty. They are completely empty. It's a common meme in the D&D world that everything that happens, happens on the Sword Coast, in one very small part of the world, which is quite sad on one side, but obviously inevitable on the other, because it's a whole world defined by a very small number of artists, writers and content creators that define the description of that world. And we are seeing this side effect, because there is an enormous precision in the way a specific city is described, Silverymoon or Beregost or Waterdeep, while the way a city miles and miles away from there is defined is absolutely vague. On the other hand, we have the Open History Map index that we can port in as the source material for D&D or for any kind of role-playing game rule set. And on the other hand, there is the data collector, which seems out of context, but in fact there has been a lot of writing and detail about several cities in several role-playing game environments, D&D for example, so a lot of detail has been defined on some aspects, and we want to bring that in as well. And as a result, we are working on it, so be kind. We're working on it, but we already have a general map of Toril, with detailed maps of the Sword Coast, Waterdeep, the Silver Marches, the Dalelands and Cormyr, and the small city of Beregost. We have several other cities with various levels of detail. Why have we started from here? Because we wanted to play an adventure in Faerûn with D&D, but another reason is that it is possibly one of the best test beds for a setup like this, because we have a lot of detail over a long period of time, so it's perfect for defining events over time and changes over time in the world. We have the desert in the middle, the City of Shade, with all the events that happened around that city, and so the timeline of this world is very well defined and very well described. From this experience we're trying to also recreate the maps around it, using several elements. Some of the elements I already described for Open History Map; we're working on forking the timeline. We can fork the timeline and say that a given event is the last one taken from the main timeline, and from then on the players have complete independence.
That means that, while in Open History Map we can track the movement of a ship precisely, if the players were to play pirates and placed themselves on the track of that ship, they could encounter that ship in the game; the ship would be near them, they could interact with it, and they could decide to destroy it as a side effect of RPG actions. And from that moment on, all the events that on the main timeline were connected to that ship will not be seen by them; they will not have side effects from those elements. This means, and this is one thing that we want to work on with Open History Map as well, that we need to understand the concept of the ripples of an event over time and the impact of an event over time, and this is the reason why we want to use the same infrastructure as Open History Map. On the other hand, we have game events, where the interactions with the map can create predefined modes of interaction with the world, meaning explosions, building objects, creating a building, creating something, altering the world in any form the characters might want. This has an enormous impact on the concept of an open world and of opening up a world, and this is possibly one of the main things that we want to work on in this secondary aspect. It's not the only one, because the digitization process is in the works, and we're using the Open History Map public history toolkit at least as a prototype for that, and we want to create transformation tools for map design platforms. We obviously need to define new ontologies, because Open History Map, OpenStreetMap and digital humanities ontologies do not cover exactly the concepts of magic, of specific races, or other kinds of elements that are very important in fantasy worlds, in high fantasy, and that the whole RPG world requires. On the other hand, we want to create a tool for map designers to transform their work, moving from drawing their things in Photoshop, as they do currently, into something vector-based that can easily be transformed into something digital, interactive and explorable, integrating with standard tools to give better open-world experiences to users, meaning something like Roll20 or other tools that rely on a predefined background. We want to try to integrate with that and offer open role-playing game worlds as an alternative, all with open source software. Whoops. So, well, that's it. Thanks. If you have questions, I'm here. Thank you very much, Marco. It has been a very fun talk. We have one question, which I will be copying and showing as a banner. So there it is: Cthulhu. Cthulhu, interesting. Honestly, it depends: in Italian I would say Cthulhu, in English I always heard Thulhu, but I don't know. Yeah, in Spanish we tended to use Cthulhu, or I have seen Cthulhu. It's very interesting how everyone pronounces the same word differently. It's pure chaos. Well, we don't have any more questions. If you want, you can leave your questions in the question tab. I will leave a few minutes, and then we will be finishing and setting up the next speaker. Well, it seems like there are no more questions. We are just having a lot of discussion on how to pronounce Cthulhu. Whatever.
So, thank you very much, Marco, for your two talks... and he left. Whoops. Well, thank you very much, Marco. Okay, he's back. Sorry. Yeah, sorry, my internet suddenly failed somehow for a few seconds. Don't worry. And look at that, we have more questions now. So, sure. The next one would be: is there a link for accessing the Open Fantasy Map? If you have the link, I can paste it in the chat. Oh, yeah, sure. It's still a work in progress, but I will send you the direct link to the maps: map.fantasymaps.org. Here. As I said, it's still work in progress. So, yeah. Okay, I will paste it and show it with a little banner. It is very interesting. Okay, we have time for one last question: how do you determine what projection to use? Well, in a very simple way: badly. No, honestly, it depends on the structure. Basically, we have reduced the whole projection question to the fact that, for example, Toril is described as a world that is almost the same size as the Earth, and usually the planets defined by fantasy maps are Earth-ish in size. Because of that, for example with Faerûn, we are defining everything in EPSG:4326, the classic WGS84. This is not true for all of the maps, and for some maps we are still discussing how to do it, because we have, for example, city maps where all the events happen in just one defined city without a world around it. It becomes complicated, because honestly it's a mess. If it's just a city, personally I usually use WGS84 as well, put that city on Null Island, and work on that, and we can rely on the fact that it will be more or less precise. In other cases, where the situation is more complicated, I honestly think it can be useful to define ways to transform from specific strange projections to the ones that we're using. The complicated part here is the fact that these maps are defined by artists, and artists are amazing people that have no clue how Mercator breaks the North and South Poles. So we have beautiful maps that have the same way of looking at the world that we have, but are artistically fascinating and place cities in a North that doesn't exist, or in a South that doesn't exist. This is why, sometimes, when we look at strange and beautiful maps, something strange happens at the poles, but we don't quite get it. And this is, for example, why I personally always appreciated the fact that in most fantasy worlds most things happen near the equator, because it's obviously much easier to manage and to transform. Of course. Okay, we have run out of time; I had another question, but I think we have no time. So thank you very much, Marco. If you have questions, you can write me, you can send messages, anything, I'm here. Okay, perfect. I'm sure that people will be able to contact you. So thank you very much for your talks, Marco. Thank you. We'll be seeing you at FOSS4G, right? Absolutely. Thank you very much. Thank you. And bye bye. Bye everyone. Okay, and next we will be setting up Luis Calisto for the talk that will be starting at half past. So, hi Luis. Hello. Hello. It seems the sound is working fine. So, okay, we will be waiting until half past to introduce you, maybe one or two minutes before, okay? Okay. Okay, see you soon.
See you soon.
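A rough sketch of the Null Island approach described in the projection discussion above: assigning plausible WGS84 bounds near (0, 0) to an artist-drawn city map so it behaves like any other raster. This is a hedged illustration, not the project's actual workflow; the file names, the ~0.1 degree extent, and the use of rasterio are all assumptions.

```python
# Assign an arbitrary WGS84 extent near Null Island (0, 0) to a fantasy city map.
# File names and the chosen extent are illustrative assumptions, not project defaults.
import rasterio
from rasterio.transform import from_bounds
from rasterio.crs import CRS

src_path = "city_artwork.png"        # hypothetical artist-drawn map
dst_path = "city_artwork_wgs84.tif"

with rasterio.open(src_path) as src:
    data = src.read()
    height, width = src.height, src.width

# Roughly a tenth of a degree across, centred on Null Island, so distortion stays negligible.
west, south, east, north = -0.05, -0.05, 0.05, 0.05
transform = from_bounds(west, south, east, north, width, height)

profile = {
    "driver": "GTiff",
    "height": height,
    "width": width,
    "count": data.shape[0],
    "dtype": data.dtype,
    "crs": CRS.from_epsg(4326),
    "transform": transform,
}

with rasterio.open(dst_path, "w", **profile) as dst:
    dst.write(data)
```

Once georeferenced like this, the image can be served and measured like any other raster, and keeping the action near the equator keeps the distances roughly true, which is the point made in the answer above.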
|
Open Fantasy Map uses the technology stack of Open History Map to store data about fantasy worlds for role players to live their adventures in the best ways possible. The platform displays fantasy maps created via generative algorithms as well as digitized maps by artists. This also enables, thanks to the OHM technologies, to "fork" the world and have the players impacting and changing their world. Open Fantasy Map uses the technology of Open History Map to store data about fantasy worlds for role players to live their adventures in the best ways possible. The platform displays fantasy maps created in several ways: On one hand it uses generative algorithms to create continents, countries, cities, streets, biomes, and many other elements for the players to interact with and to manage. On the other hand, it uses the tools created for Open History Map to digitize ancient maps to help artists digitize their own modern productions in order to make these maps interactively available to the players. The OHM technology stack also enables the forking of the world into several different situations where the players can impact their own sections of the world. We want to share the experience in creating both the generative algorithms as well as the platform. Authors and Affiliations – Montanari, Marco (1) Gigli, Lorenzo (2) , Taddia, Luca (3) (1) Open History Map (2) University of Bologna (3) Just Play Bologna Track – Use cases & applications Topic – Data collection, data sharing, data science, open data, big data, data exploitation platforms Level – 3 - Medium. Advanced knowledge is recommended. Language of the Presentation – English
|
10.5446/57287 (DOI)
|
Hello everyone and welcome to the morning session of the Cordoba Room. We are going to start in a few seconds. The talk is about the State of GeoServer; a bit of introduction first. Andrea is an open source enthusiast with strong experience in Java development and GIS. His personal interests range from high performance software to huge data volume management, software testing and quality, and especially spatial data analysis algorithms. He is a full-time open source developer on GeoServer and GeoTools and received the Sol Katz Award in 2017. Jody is an open source developer and advocate working with GeoCat. He has over 20 years of experience consulting, training and building solutions and guiding technology development. Jody is on the steering committee of the GeoTools, GeoServer and JTS projects and volunteers as chair of the OSGeo Incubation Committee. Thank you guys for joining and the floor is yours. Thank you. All right. Here are the slides. Jody, go. Welcome to the State of GeoServer; we're going to go really quick because we've got lots to say. We just had an introduction. Thank you very much for that. Heading on to our next slide. GeoServer is, at a glance, a Java web application really responsible for publishing your data, using as many protocols as we can get our hands on. The core project is responsible for publishing and we've got lots of optional extensions and bolt-ons, many of which we'll talk about here today. Just a community update on how we're doing. We've got our core project steering committee and a relatively small number of committers for this size of project. We would like to welcome Marco as a new core contributor this year. We also revamped our GeoServer service providers. As you know, we've got our core contributors responsible for the ongoing health of the project, our experienced providers who you can hire and have a shown history of being able to contribute back to the project, and we also have additional service providers. The big change is we've had a policy change where we're formally recognizing these organizations for the roles they play within the community. Moving on. In terms of our infrastructure, we're making use of everything we can find. We are still in the middle of transitioning some of our infrastructure from Boundless to OSGeo hardware. In terms of community modules, we've got an R&D area called community modules. We've got a large number of incoming projects, some of which will be highlighted in this talk, some of which won't. A few of the community modules are actually outgoing, such as the ArcSDE data store, as is the WPS script module. In terms of GeoServer releases, it's kind of a rule of two. We have a stable release and a maintenance release. We're just in the process of releasing GeoServer 2.20. This talk will primarily cover 2.18 and 2.19, which are active at this time. Just at a glance, 2.19 was released in March. 2.20 should be released in September. I think we're a little bit busy with this project, so it might happen this weekend. The way in which GeoServer releases work is we've got stable for six months and maintenance for six months, giving you a chance to upgrade. Please update. It's the only way you get the latest security fixes and so on. As you go through these slides, please have a look at the key at the bottom. It highlights the individual responsible for the work, as well as their customer or the sponsor of the work, and the version in which the functionality is available. Let's have a look at the new functionality available. 
We're talking about mapping. One new piece of functionality that we added in SLD is a new vendor option called inclusion that you can add to feature type styles, rules, and symbolizers, to use them either only for making the legend or only for making the map. This allows us to produce more readable legends without getting into all the technical details that are needed to get a particular appearance on the map. Here is a very simple example. In this topp:states map, we have a boundary rule that really adds nothing to the comprehension of the legend. We can mark it as map-only and it will disappear from the legend, making it simpler and easier to read. We added the ability to support the color map labels in GetFeatureInfo. When you click on a raster that has been classified with a color map, now the label that classifies it can be included in the output. Here we have an example of the label being added to the GetFeatureInfo JSON output, but it's going to show up in all of the output formats. We have a number of modules that graduated from community to extension. MapML is one of them. It's a proposed extension to HTML which pretty much does what the video tag did for videos many years ago. Many years ago we had many technical ways to play a video; now you just stick in a video tag and you're done. The idea is to create a map tag that allows the browser to just display a map. There is a little service endpoint, mapml, powering the generation. Some browsers already have support for this, such as Firefox and I think also Chrome, but I'm not sure. As you can see, by just putting a map element in the HTML, you get your simple map showing up without any further effort. In terms of services, as I said, we added a number of graduations. We have the WPS JDBC module graduated. It's a status store. The status store stores the status of asynchronous requests so that they can be shared across a cluster. You have N GeoServer nodes; one is doing something, and when a GetStatus request comes to another node, it can go to the same relational database and get the current status. The WPS download module graduated; it allows users to perform large extractions of raw vector data, raw raster data, large maps and animations. It is being used by both MapStore and GeoNode for large extractions. The WMTS multidimensional module also graduated. It extends WMTS with new operations that allow you to drill into and explore the dimensions of a layer. Here in the screenshots, we are seeing the MapStore timeline extension using it to figure out how many items we have on the timeline and update it as we zoom and pan around. The params extractor also graduated. The params extractor solves a problem that you have when you have a shrink-wrapped desktop client that supports only basic OGC protocols but you want to use GeoServer vendor options. You can bake the values of certain parameters into the path and have the capabilities document backlink to GetMap, GetFeatureInfo and so on using those doctored paths. The client ends up using the vendor options without even realizing it. Here is one example of echoing a parameter. So if we do a GetCapabilities with a CQL filter and a certain value, it's going to be echoed in all the backlinks. The example on the other side shows how to put a parameter in the URL so that it gets expanded into an equivalent CQL filter. In terms of GeoWebCache, we have a couple of news items. The GWC S3 extension finally graduated from community to extension. It allows GeoServer and GeoWebCache to put tiles in S3. 
We have out of the box support for the OGC two-dimensional tile matrix set, which is a specification providing a number of well-known tile matrix sets. We have some examples of names here on the right and a preview of how one of them looks. It's interesting to see that the names of the tile matrices are just numbers, which is going to make the Mapbox clients happy, since they do expect the Z to be just a number and not a generic string like WMTS supports. Jody, is that you? Sure. In terms of configuration improvements, the big news, which is going to make my customers really happy, is internationalization. This allows you to provide translations for title and abstract and contact information. And when you request GetCapabilities with an accept language, the capabilities document that's produced will be matching up the locale requested with the information in GeoServer, producing a description of the service in the language requested. So this is really important for INSPIRE capabilities. There's also been a little bit of creativity in order to provide internationalization support for the styling, for SLD. So the title can be localized as you can see above, but there's a little bit of a challenge with the functions. There's a new function called language which will return the requested locale, and you can use that dynamically in your styling in order to map the locale to the specific property or literal string you need to look at. And so here's what that looks like on the fly, and you can see the function being used. And I think there's an example on the next slide. So you can see that the language being requested in Italian will come back with an Italian legend, or in English will come back with an English legend. So this is going to be, yeah, people have been looking forward to this functionality for a while. And the same functionality is available for the raster legends. In terms of community building, just a reminder: if you do find a vulnerability, please follow our responsible disclosure policy. I also wanted to give a shout out to Ian Turton. Ian Turton has been working on fixing a lot of the reported security concerns. So we are hearing you. In terms of developer participation, we are still concerned that GeoServer relies on such a low number of people, so we're actively recruiting developers. Please stop by. We do want to continue to reach out to downstream projects, specifically to test release candidates and catch regressions. And as mentioned earlier, we've revamped our service provider pages in order to reward service providers who are actively participating in our community. And we're also going to experiment with a cost recovery model for code sprints. We've seen this practice in other OSGeo communities, and we'll give it a go and see if helping to cover costs can encourage sprint participation. All right. So now we are going to talk about a few modules which are now in community, and so they are coming soon to a layer group near you. Sorry, a GeoServer near you. This one is about the layer group styles, which is actually going to be core, not community, and is going to land in 2.21. There's a potential backport to 2.20, but it has to be discussed. The idea is to have the same layer group show up in different styles; you can decide which style to use during a GetMap, and the styles are going to get advertised in the capabilities document. 
This is an example from Ordnance Survey MasterMap, which is made of six layers with four alternative styles. Also in 2.21, we are going to do a tweak of the symbology factories so that you can decide in which order they are tried. If you are doing maps with lots and lots of points, the order of lookup of the point symbology providers can affect performance in a very significant way. And this is a map that went from taking seconds to display down to a fraction of a second; it's the position of all the ships in the European seas, made by EMSA, the European Maritime Safety Agency. We are working on the Windows installer; Jody and GeoCat are working on that so that we can give the Windows users back the simple installer experience, and it has been done using a small contract. Also from GeoCat, we have the welcome page layout, which is going to land in 2.21, which is going to change and be somewhat more aligned to what we see with the OGC API pages. Also, in terms of community modules, we see the appearance of the Cloud Optimized GeoTIFF community module, which leverages the COG structure to do efficient reads over HTTP and S3 storage, with more backends incoming. We have the OGC API community module that keeps on growing with more and more APIs. We have significant improvements in the GeoPackage production, for producing large GeoPackages, but also to include the styles in the GeoPackage, generalized tables and whatnot. We have a presentation about it, I think, tomorrow, that covers all these improvements in detail. We also have a DGGS community module, and there is a presentation today covering all this work, so I invite you to join and figure out what this DGGS thing is. We have a features templating community module, which is a pretty interesting templating system to generate JSON, GML and HTML documents, and it allows you to customize on the fly, using these templates, the output of WFS and OGC API Features requests. We also use it in STAC and OpenSearch. STAC and OpenSearch are the community modules dealing with searches of satellite imagery, and they have been in heavy development during the last year. Well, OpenSearch was actually there since 2017, but it got improvements, and the STAC API is new for this year. Here is a bunch of screenshots from an implementation made by DLR, where the templates to produce the HTML and also the JSON have been heavily customized to adapt them to their use case. We also have a module called Smart Data Loader, which is going to be welcomed by people that do a lot of complex features. Say that you want to do complex features, but you don't care about targeting a particular application schema; you just have your own particular model that you want to expose as complex features. This module basically hits your database, figures out the foreign keys and builds for you an app-schema mapping and a target XSD schema. So you are basically ready to go within minutes instead of spending hours or days setting up all the mappings. For those that instead cannot suffer having schemas and restrictions, I invite you to have a look at the schemaless community module. It's basically a data store that breaks the usual assumption that everything has to have a predictable schema. The data source in this case has to be MongoDB, so we've got JSON documents stored in MongoDB. It's going to work with WFS and OGC API Features, but only for the GeoJSON output format, because the others require a schema in order to work. 
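As a side note on the colour-map labels in GetFeatureInfo mentioned earlier, here is a small sketch of how a client might request them. The endpoint, the layer name and the exact property name carrying the label are assumptions for illustration; the request parameters themselves are standard WMS 1.3.0.

```python
# Query a classified raster with GetFeatureInfo and print the properties of the
# returned features; with the new option enabled the class label should appear
# alongside the raw pixel value. Endpoint, layer and property names are hypothetical.
import requests

WMS_URL = "https://example.org/geoserver/wms"   # hypothetical endpoint

params = {
    "service": "WMS",
    "version": "1.3.0",
    "request": "GetFeatureInfo",
    "layers": "demo:classified_dem",
    "query_layers": "demo:classified_dem",
    "styles": "",
    "crs": "EPSG:4326",
    "bbox": "40.0,-4.0,41.0,-3.0",   # lat/lon axis order for EPSG:4326 in WMS 1.3.0
    "width": 256,
    "height": 256,
    "i": 128,
    "j": 128,
    "info_format": "application/json",
}

resp = requests.get(WMS_URL, params=params, timeout=30)
resp.raise_for_status()
for feature in resp.json().get("features", []):
    # The exact key holding the colour-map label may differ on your setup.
    print(feature.get("properties"))
```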
Finally, we're working on the official Docker distribution and we welcome input on how best to make it. Right now we have two horses in the race, one by GeoSolutions and one by Nils, and we are trying to pick the best out of the two proposals. Finally, Cloud Native GeoServer by Camptocamp and Gabriel Roldán. It's basically a repackaging of GeoServer using Spring Boot and microservices, so that you can deploy each service on a different node in Kubernetes and manage a cluster with different levels of scalability for each service. Back to you, Jody. Thank you. Just checking how much time I've got. CITE. CITE is the OGC conformance test suite. GeoServer used to be CITE certified back in like 2010 or 2012 or 2016. We would really like to be certified again. We did put together a small contract in order to reintroduce the CITE tests into our build process and QA process. Coming up next, we would like to plan a sprint, every Friday in November, in order to directly work on passing the tests and regaining certification. We will be looking for sponsorship. We will be mostly looking for participants. Please join us on the GeoServer devel list and join us in November. Many folks assume that GeoServer is CITE certified and are somewhat surprised that this hasn't been kept up to date. The only way to get it done is to roll up your sleeves and pitch in. Please help out. Thanks. I think this might be the first year we've finished one minute ahead of time. Three minutes ahead of time. My timer is wrong. Well, Angelos, do you have any questions for us? You are muted. You are on mute. Sorry. So, yeah. Thank you. So, we have a couple of questions from the chat. About upgrading GeoServer: I'm currently using version 2.18, which runs under Tomcat 9. Do I lose my configurations when upgrading to 2.19 or 2.20? Yes, you do. We haven't changed the configuration on disk very much, and GeoServer automatically updates it if required when you're using a new version. So you don't lose them actually. Just make sure that you have an external data directory and you back it up when doing the upgrade. If you are using the internal one, then yes, during the upgrade you're going to lose the configuration. So locate it, back it up and move it out. It is covered in the user guide how to upgrade. Yep. I don't see any other question in the chat so far. People are asking if they can get the link to the slides, but that's not a question. Eventually, yes. That will be posted and the video will be posted eventually. Right. Excellent. All right. Just one personal question then. I saw the Cloud Native GeoServer mentioned. Is that going to be part of the GeoServer distribution or is this handled separately? It's going to be like Docker. It's going to be another distribution of the GeoServer project. I think Gabriel's got a couple of talks about it in the program if you're interested in learning more. I've experimented with it a little bit. I really like the approach that's been taken. It's broken things up into individual modules. Yeah. So, I'm treating it like another distribution of the core components of GeoServer, just like we've got the Docker distribution. Well, we don't have the Docker distribution. We're trying to have one. Yeah. Or we can say that we have multiple, because there is no official Docker image, but there are multiple available on the Internet. That's true. So, I think we covered the question that I have here. Can we run GeoServer on a Docker container? Yes. Yes. 
Lots of examples online. And if people are interested in participating, we can make it an official Docker distribution as well. One other question is: what's the best way to manage multiple environments, testing, staging, production, using Git, et cetera? What's your recommendation? Jody, we cannot hear you. You're on mute. I'm not talking. It's kind of like I'm on mute. Sorry. I was looking at the wrong video. I can answer this one. Yeah. We typically stick the data directories in some sort of version control and just check them out in the various environments. The one trick is that typically, switching environment, your database hosts and external servers are going to change host name, password, user, whatnot. So you can enable what is known as environment parametrization to move these few values into a property file or into environment variables. And then you can check out the same data directory in the different environments and just go to the same data level. The other approach, for the databases, is to set up your connection pools at the Tomcat level and use JNDI to refer to them. So there's a few pages in the user guide which are very important, but also very short, so you can't necessarily understand the significance until you've run into the challenge yourself. All right, thank you. Another question is: when will version 2.19 become the maintenance version? Probably on the weekend during the code sprint. I think I'm going to try and make the release on Saturday, in which case I would treat 2.19 as the maintenance release already. The next version is going to be definitely a maintenance one. Yeah. All right. Some questions are coming up. Is there an example of i18n configuration files documented somewhere? What languages are already created? It doesn't work that way. So there's examples of how to do this in the user guide, but our default data directory is not internationalized; that might actually be a useful improvement that could be made, if you're interested. So just use the user interface. I'm not sure if there was a screenshot on how to do so. Yeah, there was. Anyway, yeah, there is a drop down. You choose the language that you want, and we are using the languages provided by the Java virtual machine, which are a few hundred. They are a combination of simple languages, like just FR, and combinations such as FR-CA for Canadian French, for example. And that little checkbox next to title is important. Until you click that, you won't see all the options. You'll just see a single text. Thanks. Some last minute questions. Does GeoServer work with styles from a cascading WMS? Yeah, sort of. The styles from the external WMS are named styles, so you can switch between them. Also, if the other WMS is publishing the styles, you could refer to them in your GetMap. All right, I don't see any other questions on the chat. So I would like to thank you for your talk. And thank you for having us. Thanks. So we have a couple of minutes before the next talk.
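A minimal sketch of the environment parametrization approach mentioned in the Q&A above, assuming the geoserver-environment.properties mechanism and the ALLOW_ENV_PARAMETRIZATION flag described in the GeoServer documentation; the paths, variable names and values below are invented for illustration.

```python
# Keep one data directory in version control and swap only the values that differ
# per environment. GeoServer can resolve ${PLACEHOLDER} tokens in its configuration
# when the ALLOW_ENV_PARAMETRIZATION system property is set to true; the file name
# and variable names here are illustrative assumptions.
from pathlib import Path

ENVIRONMENTS = {
    "staging": {"DB_HOST": "db-staging.internal", "DB_USER": "geo", "DB_PASS": "changeme"},
    "production": {"DB_HOST": "db-prod.internal", "DB_USER": "geo", "DB_PASS": "secret"},
}

def write_properties(env_name, data_dir):
    """Write a geoserver-environment.properties file for one environment."""
    values = ENVIRONMENTS[env_name]
    lines = [f"{key}={value}" for key, value in sorted(values.items())]
    target = Path(data_dir) / "geoserver-environment.properties"
    target.write_text("\n".join(lines) + "\n", encoding="utf-8")
    # The datastore connection parameters in the versioned data directory would then
    # contain tokens such as ${DB_HOST} instead of hard-coded values.
    return target

print(write_properties("staging", "/opt/geoserver_data"))
```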
|
This presentation provides an update on our community as well as reviews of the new and noteworthy features for the latest releases. Attend this talk for a cheerful update on what is happening with this popular OSGeo project, whether you are an expert user, a developer, or simply curious what GeoServer can do for you. GeoServer is a web service for publishing your geospatial data using industry standards for vector, raster and mapping. It powers a number of open source projects like GeoNode and geOrchestra and it is widely used throughout the world by organizations to manage and disseminate data at scale. This presentation provides an update on our community as well as reviews of the new and noteworthy features for the latest releases. This year in particular we have a lot to cover for 2.18 and 2.19 releases, as well as a preview of the September 2.20 release. Attend this talk for a cheerful update on what is happening with this popular OSGeo project, whether you are an expert user, a developer, or simply curious what GeoServer can do for you. Authors and Affiliations – Andrea Aime (1) Jody Garnett (2) (1) https://www.geosolutionsgroup.com/ (2) https://www.geocat.net/ Track – Software Topic – Software status / state of the art Level – 2 - Basic. General basic knowledge is required.
|
10.5446/57288 (DOI)
|
Hi, and welcome to the State of MapServer presentation for 2021. My name is Seth Girvin and I'm presenting this report on behalf of the MapServer Project Steering Committee. So in this presentation, I'm going to give a quick introduction to MapServer, then look at the upcoming 8.0 release, then have a look at the MapServer ecosystem and how to get involved in the MapServer community. So for anyone who hasn't heard of or used MapServer before, here's a very quick introduction. It's been around for almost 30 years and was originally created by Steve Lime, who's one of the most active contributors still today. It's got an open source, MIT-style license and it's a founding OSGeo project. It implements the OGC standards, and the 8.0 release is going to see the introduction of OGC API Features. It's known for being fast, cross-platform and mostly written in C, although there's been a move to C++ in the last couple of years. And going back to a previous presentation from Paul Ramsey, all of these points still apply. So it's an engine used for generating maps, hence the name MapServer, and it's a web technology built on CGI and FastCGI, so it can be easily added to any web server technology. It's fast, it's powerful, and importantly, there's not one single company that backs it; there are developers from lots of different organizations involved. There are links to some previous status reports online, and this report will also be made available online. So since the last presentation at FOSS4G in 2019, there's been some excellent news: there have been 43 new contributors to the project, and another 30,000 lines of code have been written, along with eight more years of effort. There have been new releases of all the products in the MapServer product suite. So in July of this year, the latest release of MapServer, 7.6.4, came out. In June of this year, there was the first release of TinyOWS for five years, which included lots of new features and fixes. And this release is in honor of Olivier Courtin, the original TinyOWS developer, who sadly passed away last year. There was also a release last year in February of MapCache, and this includes a new pyramid cache structure and new options for storing tiles by dimensions. Now I'm going to talk about the MapServer 8 release. This will hopefully be coming out in the next few weeks. All of the major development in version 8 has been proposed via Request for Comment documents. These are brief technical proposals that are then voted on to see if they should be integrated into MapServer. They go all the way back to 2005. Hopefully the most exciting of the RFCs that will be seen in the version 8 release is the OGC API support. This implements OGC API Features Part 1: Core in MapServer. OGC API Features can be seen as the successor to WFS, so you can access your vector data through a web API. And it uses the nlohmann JSON library and the Inja templating library for its implementation. In order to add the OGC API Features support to your map files, you need to add just two lines into the metadata section of the web object. First of all, you enable the web API, and then you need to point it to a directory containing HTML templates that will implement a basic HTML interface that you can use for browsing. Once these are set up, you can access any of your queryable vector data sets via a web URL in a format similar to this. So I've implemented it for the MapServer Itasca demo. And then you can access it using a URL. 
In the root, you get a number of links describing all the capabilities on the server. So each of these can be seen either via JSON format or, using the HTML templates, in an HTML format. You also get the OpenAPI documentation, formerly known as the Swagger documentation, again in JSON or in HTML. And this lists all the capabilities. Then you can also see all of your vector data sets by clicking on the information about feature collections. So this goes down a level and lists all of the layers in your map file. So these are all listed out. So it's a very handy data catalog for browsing your features. And then you can drill down further into an individual data set. Here you can see the extent of the data set. And then you get a number of other links. So you can see the collection as JSON or HTML, which brings back this page. Or you can see all the items, again, as GeoJSON or HTML. So here are all the lakes and rivers in the data set. The templates implement a Leaflet map so you can browse. And you also get all the attributes with paging. You can also drill down one more level by clicking on an individual feature. And this shows the actual geometry of the feature itself, along with all the attributes. Again, this is the HTML view. But if I go back here, I can get the GeoJSON view. So this is all accessible through the API. As MapServer is built on top of GDAL, you can now use MapServer to implement the web API for any data set readable by GDAL, which is an incredibly powerful feature. Another major new feature of MapServer 8 is the introduction of a global config file. This makes it much easier to manage multiple map files. It provides increased security and more consistent support across operating systems. So MapServer has a number of environment variables that can currently be set either in the web server or sometimes in the map file. And in certain cases, they didn't work on certain operating systems. For example, on Windows and IIS, some of these environment variables were never read correctly. This is all corrected using the config file, and all of these can now be set in one place. Here's an example of the new config file format. And as you can see, it's very similar or identical to the map file format. So you have blocks, for example CONFIG, that finish with END. And within the config file itself, we have three additional blocks. We have environment variables. So again, these are any of the variables we just saw in the previous screen. So any of these can be set globally in your config file. So for example, we've got a debug level. You can set your projection library. You can also set the directory holding your HTML templates for the OGC API that we looked at earlier. And you can also limit access. So this makes it more secure so that you can't load map files from arbitrary locations on your server or on your network. You can also provide map aliases so that you can access map files more easily rather than having to pass a path. So these keys can then be used within a URL. So for example, this is the key that you find in your config file. Finally, there's a plugin section. So this is something that's been noted in security audits, that map files could be set up to load DLLs that might have been compromised. With the introduction of the config file, rather than putting the path to a DLL in your map file, you put a key in your map file and you put the path to the DLL within your config file. 
This means that the administrator can limit which DLLs are loaded by the map file. RFC 125 details a new keyword for the layer object, CONNECTIONOPTIONS, which was added in the 7.6.4 release. This allows you to pass in a set of options when you're opening GDAL or OGR data sets. For example, if you're reading GeoJSON using GDAL, you can flatten any nested attributes by passing this option. If you want more details, then you can go to the GDAL docs, which list all the options available. RFC 126 details the porting of the MapServer code base to use the new PROJ 6 API rather than PROJ 4. This will be fully introduced in MapServer 8 and allows MapServer to work with PROJ 6, 7, 8 and any future versions. For users, there are no syntax changes, so you don't need to change your map files. All you might have to do is update your PROJ_LIB setting, the environment variable, to point to the newer versions of the PROJ libraries. This can also be set in the new config file. Also to note is that EPSG codes are now recommended, to benefit from more accurate coordinate transformations. So you should be using this format, with init equals EPSG and then the code, rather than the PROJ 4 strings format that's been available in older map files. Along with the MapServer 8 release, there's also a release of brand new MapScript API documentation. This is now auto-generated from the Python MapScript bindings, which means it can be kept up to date with the latest code base. We're going to have a quick look at it now. So all of the MapScript classes are now listed, with a page for each of the classes along with the functions and all of the constants available to MapScript. If I jump into one of these classes, we can see that all of the attributes available are listed along with the methods. In addition, class diagrams and examples have also been added to some key classes. One of the nice features is that you can jump between the different objects; everything's linked, so you can jump to the hash table object. And you can also jump to keywords in the map file. So rather than repeating the documentation, links are provided so you can jump directly to the documentation for a particular keyword. These are also available for linking directly using URLs. MapServer 8 also sees a cleanup of the map file syntax. So over the years, there have been a number of keywords that have been added and are now deprecated or no longer have an effect on a map file. Removing these makes it less ambiguous what should and shouldn't be included and makes it easier to write parsers for the map file syntax. So over 30 keywords have been removed from eight classes and in some cases, things were deprecated almost 20 years ago. To help users migrate map files from earlier releases and clean up the syntax for version 8, validation can be done using mappyfile. There's also an online version that I'll quickly demonstrate now. Or you can install it using pip install and validate a whole suite of map files against a specific MapServer version. This will then give a list of any errors and keywords that you might have to clean up to get your map files working in the new version of MapServer. On the online parser, you can set the version that you want to test against for validation, and I'm going to select the older Itasca demo. And then when I try and format it, I get a list of all the keywords that are no longer applicable in version 8. So for example, the DUMP keyword in LAYER has no effect, so it's been removed in version 8. 
MAXSCALE has been replaced by MAXSCALEDENOM and TRANSPARENT is now handled by output formats. So these are all documented within the RFC, and the docs for version 8 will also be updated. MapServer 8 also sees a new output format, the inverse distance weighting output. This code was written back in 2014, but has only been fully integrated for the version 8 release. It allows you to take an input points layer, so based on vector points, to produce a raster output. You can set various processing options and then set your color ranges to produce different colors for different interpolated values. The background image is of street lighting in county Monroon Island, but it probably has zero scientific validity. Finally, along with all the new features in the version 8 release, there have been a number of important code base improvements, mainly provided by Even Rouault. So there's been a lot of C++ifying of the code base, which addresses certain memory issues and allows more modern programming techniques to be used. There have been hundreds, if not thousands, of compilation warning fixes, which means the builds are a lot cleaner and it's a lot easier to see any new warnings or errors when you're building MapServer. There's been the setup of Coverity scans, which allow you to check for memory leaks, along with lots of fixes once these were set up. And the continuous integration processes for MapServer have also been improved. So now the FastCGI and the CGI applications are tested, not just the MapServer library itself. And the CI setup has also been migrated to Ubuntu Bionic. Now we're going to have a look at the software and projects related to MapServer and the different options available for installing MapServer. Probably the easiest way to trial MapServer is to get a copy of OSGeoLive. This includes recent versions of both MapServer and MapCache and includes overviews and quick start tutorials that you can work along with to get an idea of how the software works. Also on OSGeoLive, there's GeoExt and GeoMoose. These are two front-end JavaScript frameworks that make use of MapServer as a back end. Again, they're all set up and configured so you can start working with them straight away. If you're looking to install MapServer on Linux, then it's an apt-get install command away, thanks to the packages on Debian that are maintained by Sebastiaan Couwenberg. If you're on Windows, then MapServer for Windows, MS4W, is available and a new release came out last December. These are maintained by Jeff McKenna of Gateway Geomatics and they come bundled with their own Apache server along with pre-configured demos and applications including MapCache, TinyOWS, pycsw and lots more. They also include the most recent versions of MapScript for PHP 7.4 and for Python 3.9. Also available on Windows are the Windows builds from the GISInternals site maintained by Tamas Szekeres. These are compiled versions of MapServer and GDAL for various Visual Studio releases, and there are also Windows development kits available so you can easily compile MapServer yourself. The Windows development kits are used for the CI on both GDAL and MapServer and have been recently updated to use Visual Studio 2019 and PROJ 7. Finally, just a quick mention of one of the newer projects related to MapServer, which is the GeoStyler mapfile parser, and there's a talk at this FOSS4G which is well worth checking out. 
So the GeoStyler parser will allow you to take the styling available in a MapServer map file and convert it into another format such as SLD, QGIS or OpenLayers, and it also works the other way, so you can convert those styles into the MapServer map file format. Now we're going to look at how to get involved in the MapServer community. We've got a number of different communication channels including the mailing lists, which are split into users and developers. So users is where you can ask questions and get support on MapServer, and the developers mailing list is more focused around feature development and code. We have IRC, so a chat channel, which has recently moved to Libera Chat. We've got a Twitter account you can follow at mapserver underscore OSGeo, and also a good place to ask questions is on GIS Stack Exchange, where there are lots of questions tagged with the mapserver tag. We have a community gallery, so this is a collection of sites that are based around MapServer, and these are available on the MapServer wiki. So I've chosen the RISTA example of a disaster risk reduction knowledge service maps site which was recently submitted, so please feel free to check out some of the links, including the one in this presentation. I'd like to highlight one of the community groups which was very busy during the pandemic, the Twin Cities Minnesota OSGeo chapter, and they have a number of recorded talks that are available online, linked to via their page, including one by Steve Lime on the OGC API Features development and also one by Bob Basques, who I think organizes the talks, on MapServer and GeoMoose. Finally, in terms of community news, there's been sad news in the last couple of years with the loss of Håvard, who was heavily involved with the MapServer documentation, and Olivier, who was the TinyOWS developer, who both passed away in the last year. We'd like to welcome Jérôme Boué, who is the MapCache development lead, to the Project Steering Committee, which is now up to 14 members, and also linked to on this slide are the service providers, so if you're looking for MapServer support or development please click on that link, and then there's a link to our sponsors also. Finally, we're always looking for help on the MapServer project, so probably the most useful things are to provide detailed bug reports, case studies including submissions to the community gallery we saw earlier, and documentation fixes and updates, so if you notice anything wrong, a submission to GitHub and a pull request is always very welcome, and with the upcoming 8.0 release, to test the new release with your own systems and map files and report any bugs back to us. So in summary, MapServer version 8 is coming in the next few weeks; it's got lots of new features and improvements, so please download, build, test and provide feedback on any of the beta or release candidates. There's a growing ecosystem of related projects to explore, so remember to check out the GeoStyler mapfile parser, and please get involved, there's plenty to be done on the project. So thanks everyone for your time and I look forward to taking any of your questions. So thank you Seth for the presentation. I'm not sure if you want to share your screen, but let's take some questions. Sure, yeah. Yeah, just to point out, when I was talking about the GeoStyler it was SLD it can convert to and not what I said, so you can ignore that bit. That's fine, don't worry. 
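A small sketch of walking the OGC API Features hierarchy demonstrated earlier in the talk: landing page, collections, then a few items of one collection as GeoJSON. The base URL below is a placeholder, since the exact path depends on how mapserv is deployed; the /collections and /items resources and the f=json parameter come from OGC API Features itself.

```python
# Walk an OGC API - Features service: landing page links, collection list, then a
# small page of features. The base URL is hypothetical; adjust it to your deployment.
import requests

BASE = "https://example.org/cgi-bin/mapserv/itasca/ogcapi"   # placeholder URL

def get_json(url, **params):
    params.setdefault("f", "json")
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

landing = get_json(BASE)
print([link.get("rel") for link in landing.get("links", [])])

collections = get_json(f"{BASE}/collections")
for coll in collections.get("collections", []):
    print(coll.get("id"), "-", coll.get("title"))

# Fetch a small page of features from the first collection.
first_id = collections["collections"][0]["id"]
items = get_json(f"{BASE}/collections/{first_id}/items", limit=5)
for feature in items.get("features", []):
    geom = feature.get("geometry") or {}
    print(feature.get("id"), geom.get("type"))
```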
So the first question is: what would be the best way to have an up to date MapServer suite on an Ubuntu or Debian system, PPA or Docker or what else? Yeah, so I guess if you want to get kind of the 8 release, it's not been released yet so you'd probably have to build it yourself, but if you're looking to install the last full release it's probably to use apt-get, which I think, Angelos, you might be involved in as well because you use it for OSGeoLive. So I think the commands here should get you 7.6.4, and as soon as 8 is released, they're very well maintained, so you should be able to get version 8 quite shortly after it's officially released. Yeah, we're trying to keep up to date with the packages and the releases. So there is also a Docker image, I think Michael Smith, he might be in the chat, has definitely created a Docker image; how up to date it is I'm not 100% sure, but there is definitely a Docker image somewhere out there. All right, thanks. Next question is: what are the plans regarding OGC API Coverages and OGC API Maps? Yeah, so at the moment, for the 8 release, it's just OGC API Features that's been implemented, so that was Steve Lime, so the creator of MapServer has been implementing the whole OGC API Features support. At the moment, for the 8 release, that's the only thing that will come out; after that I guess it depends on who's available to code on it and the interest, sponsorship and funding for the others. In terms of development, I think the nice thing about MapServer is it's fairly modular, there's kind of 20 years of code in place, so for the Features API we already had the kind of WFS code, so OGC API Features makes use of probably 90% of code that's already there, with another layer on top. So for the coverages, hopefully the whole way MapServer works with rasters is going to be similar, in that obviously there's quite a lot of work to add it in, but it's not kind of a full rewrite of the whole application. OGC API Maps, I need to go and watch some other presentations to find out exactly what it is, I think it's similar to the GetMap request. So yeah, as I say, in the main branch at the moment it's just the Features API that's going to be released in version 8, but then hopefully for future releases there'll be more OGC APIs being added to MapServer. Thank you. Next question: is there an easy way to implement your own legend PNG or JPEG? So I don't know if this is to make custom legends; there's definitely, with the GetLegendGraphic requests for WMS, you can get any legend that you have in your map files, so if you have a layer set up with lots of classes you can definitely generate a legend just with an HTTP call. Otherwise, I know that most of the front end frameworks request individual legends and then can stitch them together into a kind of table of contents. Then you can always resort to MapScript, which you can do pretty much anything with; you can get the image and then, if you're using Python for example, you could use MapScript to generate the legend from MapServer and then you could put it through other pipelines, so you could put it through kind of filters and image editors to pretty much create whatever you need. So yeah, if you had a template, a Python template in Jinja or something, you could populate it using MapScript or any of the other MapScript languages, so hopefully that answers the question. 
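Since the answer mentions dropping down to MapScript when the standard GetLegendGraphic request is not flexible enough, here is a minimal Python MapScript sketch along those lines; the mapfile path and the group filter are placeholders, and a more elaborate layout would post-process the image or assemble per-class icons in a template.

```python
# Render a legend for a mapfile with Python MapScript. The mapfile path, output
# name and the "reference" group filter are illustrative placeholders.
import mapscript

map_obj = mapscript.mapObj("/data/mapfiles/itasca.map")   # hypothetical path

# Optionally switch off layers that should not contribute to the legend.
for i in range(map_obj.numlayers):
    layer = map_obj.getLayer(i)
    if layer.group == "reference":          # illustrative filter
        layer.status = mapscript.MS_OFF

legend_image = map_obj.drawLegend()
legend_image.save("legend.png")
print("wrote legend.png")
```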
Thank you, let's move to another question: in MapServer 8, will it be possible to use SRS codes other than those from EPSG, other codes in the PROJ database for example? I think Even might be in the chat, he might be better placed to answer it, but from what I understand, with PROJ 6 and above you have proj.db, which is a SQLite database, so I'd imagine you should be able to add records to that database if you've got your own custom projections, if they're not already in there, and then in theory you can point your map file to the custom projection. In MapServer you can still use the old way of adding in your own projection strings, I think that will still work in version 8, although it's recommended to use init equals EPSG and then the code. So yeah, it's probably more of a PROJ question; MapServer is built on GDAL and PROJ and GEOS, so there are lots of layers to MapServer, it's very modular, but you should be able to use custom projections. All right, thank you. So the next question: is there a support plan for vector tiles? I recall it being somewhat supported in a commit way back but I was unable to get it working. It's definitely there, I think I presented it at the Bucharest FOSS4G, so again it was Steve Lime, I think Thomas Bonfort implemented the first pass and then it was completed by Steve Lime, so that would have been in the 7.6 release. So I've got it working on my own setup; if there are issues and things then I guess a question to the users mailing list, unless there's a specific bug, but yeah, it's been there since the 7.6 release, and obviously you still need to style the vector tiles on the client, so you're not going to get a nice image coming back, you're going to get kind of the vector tiles binary format, but yeah, it's definitely in MapServer and all the tests are still passing in the continuous integration. So yeah, any questions, it's probably best to add to the mailing list or GIS Stack Exchange. Excellent, thank you. One more question here: GeoServer now supports internationalization in GetLegendGraphic, what's the status in MapServer? Okay, yeah, I'm not 100% sure; I know that there was a lot of work on, I think it's, I forget the C library, but there's definitely international language support in MapServer, so things like writing from right to left and so on. Whether it's supported for GetLegendGraphic requests I'm not 100% sure, but yeah, MapServer is set up to work with internationalization, so if it's not implemented then probably, very probably, it could be implemented quite quickly. All right, thanks. I got another question from the chat which is a long one; the question is: better handling of environment variables is good, we would like to move the database connection info from the map file to an environment variable. Is it possible to set values based on the environment, for example disable the debug log level in a production environment? 
Yeah, at the moment I think there are about 15 environment variables, because I went through them for the slide. The connection isn't an environment variable; it might be something you can replace with runtime substitution, or it was probably disabled for security reasons. So no, at the moment there's no way that I know of to use environment variables for connection strings. What you can do is, I think there's a utility program where you can encrypt, if you've got a username or password in a connection string, you can encrypt it in your map file and then it's decrypted by MapServer. So in terms of security you can encrypt usernames and passwords; I'm not 100% sure if you can encrypt the whole connection string itself, but yeah, it's not one of the supported environment variables at the moment. All right, thank you very much for your answers. We are out of time; there are a couple more in the chat, if you can answer them offline that would be great. Yeah, I'll go back to the presentation, and thanks for being here. Great, well thanks everyone for listening, thanks Angelos for hosting. Thank you. So we are now moving to the next presentation, which is news from Actinia by Markus Neteler and Carmen Tawalika. I'm adding Carmen to the stream, hi Carmen. So a short introduction: Carmen is a passionate geographer; after her studies in geography she started working for mundialis, using and creating all this geo software, including Actinia. She's an open source devotee and was very happy to take part in the FOSS4G 2016 organization team in Bonn. Markus is co-founder of mundialis in Bonn; his main interests are Earth observation, geospatial analysis of massive amounts of data and development of free and open GIS, especially GRASS GIS. So thanks for joining, Carmen. I'm going to play the video now and then we will be back for questions.
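Returning to the mappyfile validation demonstrated during the talk, here is a hedged sketch of running it over a folder of mapfiles from Python before an upgrade. It assumes mappyfile's open() and validate() functions roughly as documented (exact signatures may differ between versions), and the directory and target version are examples only.

```python
# Validate a directory of mapfiles against a target MapServer version with mappyfile.
# The API calls below follow the mappyfile documentation; double-check the exact
# signatures for your installed version. Paths and the version number are examples.
from pathlib import Path
import mappyfile

MAPFILE_DIR = Path("/data/mapfiles")        # hypothetical location
TARGET_VERSION = 8.0

for mapfile_path in sorted(MAPFILE_DIR.glob("*.map")):
    d = mappyfile.open(str(mapfile_path))
    errors = mappyfile.validate(d, version=TARGET_VERSION)
    if errors:
        print(f"{mapfile_path.name}: {len(errors)} issue(s)")
        for err in errors:
            print("  ", err)
    else:
        print(f"{mapfile_path.name}: OK")
```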
|
MapServer is an OSGeo project for publishing spatial data and interactive mapping applications to the web [1]. 2021 will see the 8.0 release of MapServer [2]. An overview of the performance boosts, security updates, code quality improvements, and new features such as an initial OGC API implementation, and PROJ 6+ support. An update will be given on the MapServer ecosystem - including new sites from the MapServer gallery, related projects such as MapCache [3], and the various distribution channels. Finally we'll look at how to become a part of the MapServer community and help with the continued success of the project. [1] https://mapserver.org/ [2] https://github.com/mapserver/mapserver/wiki/MapServer-8.0-Release-Plan [3] https://mapserver.org/mapcache/ The MapServer PSC will be working towards providing the MapServer user base with an annual update on project news and developments. Content covered in Abstract above. Authors and Affiliations – Girvin, Seth (1) MapServer PSC (2) Track – Software Topic – Software status / state of the art Level – 1 - Beginners. No specific prior knowledge is needed.
|
10.5446/57289 (DOI)
|
All right. Next up we have Tim Bailey. Tim Bailey is an advisor at the Watershed Research and Training Center, where he works on the intersection between forest hydrology, data science, and climate adaptation planning in Northwest California. Tim, thanks for speaking and take it away. Hi. Good morning, everyone. It's an honor to be in this program. Really, it's been a phenomenal lineup. And also I want to say that FOSS4G has been an important place to find technology, and I really appreciate the global community that's contributed. Okay. So my program is basically focused on developing user capacity to use open source tools, and rather than focusing on kind of the development environment, what we've been seeing as one of our problems has been a lack of ability to apply these remarkable, high quality data sets towards our natural resource challenges. So first, a little bit about myself. I work for the Watershed Research and Training Center. I run two programs. One of them is, I'm shared stewardship advisor for six national forests in Northern California, and so what I facilitate are cross boundary projects and projects funded through climate mitigation vehicles that are implemented on public lands and onto private lands as well. And then I run this program of the California Forest Lidar Analytics Collaborative, which is basically just trying to build capacity amongst the implementation partners for climate adaptation planning. So in California, we're really clear a climate emergency is upon us. We have had climate stressors that have been producing extreme wildfire behavior since at least 2012, and for most of this century we've been seeing trends of declining forest health. Some of these effects are not climate alone; they are also the aftermath of historical natural resource management. And in particular, there are two factors: the post harvest landscape, since in the post World War II period much of California's commercial forests were liquidated, and there's been a challenge in managing both private and public lands. And then equally important is that the suppression of wildfire has, basically, wildfire suppression has exacerbated the wildfire problems that we're facing now. So California has also initiated a comprehensive forest climate policy and has provided significant resources to implementation partners to do something about the landscape challenges we face. In 2018, during the last administration, the Brown administration, there was a California Forest Carbon Plan, and I think this is an interesting model that could have parallels for implementers of REDD+ programs across the world. There are specifics to our circumstances in a Mediterranean climate that is drying due to climate change, but as we look at climate disruption across the world, we are going to have to adapt forest management appropriately. Now, one of the things that my shared stewardship program really works with is the fact that the organizations that are going to implement our climate solutions don't necessarily look like traditional natural resource agencies. And a significant part of that is based on human resources issues. The traditional model of agency driven natural resource planning is not what is leading climate resilience adaptation at this point here. 
And so with that, one of the things that we're seeing a significant break up in is that all of the traditional natural resource agencies have established enterprise geospatial resources, and this transition to these new entities is really an opportunity to use significantly different geospatial infrastructure. One of my colleagues has written this assessment of capacity needs for these non-traditional resource resilience implementers, and we're looking at somewhere in the range of 250 organizations in the state that are applying or intend to apply climate resilience strategies for forest landscapes. Now, the California Forest Lidar Analytics Collaborative was funded through an award by the Bay Area Council's 2020 California Resilience Challenge. And what we were realizing is that in the commercial forestry sector, LiDAR is an obvious tool to do landscape-wide planning, but in these public planning efforts we were seeing a lot of challenges in applying it. On the agency side, they don't necessarily have the human resources to adopt the technology. And then these third-party implementers, they often have limited budgets, they have limited resource budgets, and they're basically building teams to complete individual projects. So where we kind of got to is, we want to empower the communities that are putting change on the ground. We want to give them the best tools possible. And frankly, the open-source geospatial toolboxes are essentially the best-aligned tools available. Now, one of the important principles for us is the co-production of data with our partners. And I think that one of the drivers of the enterprise geospatial vision has been looking at this as essentially a command economy, where you concentrate the information resources into an agency that dispenses resources to do the job. And what we've really looked at is how do we build the capacity from the ground up. And really we want to reach the foresters who are putting in prescriptions at the small levels, or even small private landowners who are potentially planning forestry interventions on allotments as small as a hectare. They can receive real value from using point cloud data. So in the United States, there's a federal program, 3DEP, which currently has something like 31 trillion points. It's a phenomenal public program. It requires identifying funders to match the federal investment, and California has had a real challenge organizing the funding for this. Currently 80% of the United States is covered in the 3DEP program, and California is significantly behind. One interesting topic internationally is that there are several efforts to develop lidar programs globally, and I've had interactions with Earth Archive, who are planning sensitive work in South America, and they're likely to use similar approaches for hosting their data sets. Now in 2020, as I was developing this proposal, the governor had proposed completing California lidar acquisition; the legislature rejected it, both because it came about at the beginning of COVID, but also because they had a really significant critique of not having established, essentially, a data management plan and a data life cycle analysis. 
And this has turned into somewhat of a touchstone for me because, you know, essentially what we want to do is we want to validate the public investments by using it and we want to improve the performance of all of our partners by, you know, giving them the best available technology and improve their forestry outcomes. So why LiDAR? LiDAR is unique and it's generally tasked with the exception of Jedi, which is the space station based LiDAR. But I mostly concentrate on airborne LiDAR that in the United States, the economy of scale is approximately 1500 square kilometers at a time is the optimal collection size. Basically, it provides a snapshot of forest canopy structure at a given time. And many of our, many of our climate resilience strategies are really going to be based on structure and we also need, you know, systematic review and quality control for public programs and we also need to create, you know, a transparent environmental compliance pathways so that are, so that the, you know, the public both has confidence in the expenditures for these climate resilience plans, but then also that we're engaging with critics and we're building the best possible plans. And we think that having a, the LiDAR facilitates a kind of landscape numeracy that is unique. And especially with our specific challenges here are not so much the conversion of forest to other land uses, although that is an issue. But basically the suppression of wildfire over the last 100 years or so has created, has basically encouraged more dangerous forest conditions. And so structural modifications of the forest are our pathway towards resilience and one of the best, most effective ways to achieve that is with prescribed fire. And often we need to do mechanical thinning in advance in order to implement prescribed fire in a safe manner. So we have significant issues as barriers of adoption, similar to what Rob articulated in his initial talk, you know, that these LiDAR datasets are really big datasets and the optimal strategy is to do the computational work in the cloud. I identified with virtually everything Rob said about still downloading data myself, even though I really encourage users to use cloud resources to the extent that's possible. So we also have these issues with non-governmental organizations have funding cycle issues that are limited or that are focused on implementation of specific projects. And for us, one of the things is that we are working under basically a 10-year climate resilience plan. We really, in California, there's a lot of our policy infrastructure is really focused on what we're going to get to by 2025 and then 2030. And so, you know, a lot of the big data problems are fundamentally, you know, are we training, you know, people to engage? And so, you know, I see this really as, you know, we want to develop the capacity to use the cloud resources over a long timeframe. And, you know, hopefully there's also like a career, like kind of a positive feedback loop where we have, you know, engage with people to improve their work product at this point and then they develop over, you know, time to meet the needs of society. So we follow a lot of complementary technologies that are phenomenal. I think that there's remarkable maturity at this point in the spectrum of sensing side. And then there's other, you know, the radar and like, Jedi applications and to some degree, the UAV applications are still emerging. And I am absolutely interested in them. 
But for our uses, we've felt that because so many of our use cases are fundamentally about three-dimensional structural modification, that was where we were focusing for this program. But, you know, I follow all of these topics. So our goal is to accelerate adoption of the technology in a manner that is improving the outcomes. And so, you know, these are the strategies that we came up with: hosted resources, sharing our tools and really trying to train best practices. And, you know, with this model, for virtually any of the technologies that we're trying to bridge from the developer and research community to the applied science side, we really want to create the kind of social environment that supports the applications. So why is FOSS such a big deal for this? One is, you know, stable and continuous progress. And one of the places that I really want to emphasize is to think about the non-economic forests that are not industrial forests; you know, they don't necessarily have the kind of stable funding. And so the idea of users being able to maintain control of the data sets that they're using to plan their forest is something that's really valuable. And the other thing is transparency. You know, natural resources are inherently political and we want to have a transparent process and give, you know, people access to the decision making process. And particularly, we want to make our decision processes clear to our successors. And that's particularly important in the climate mitigation and adaptation space because, you know, with the carbon markets, we're looking at this over, you know, 100 year time frames. Okay, so this program was really inspired by a lot of programs; you know, this is, I feel like, explicitly not cutting edge. And so OpenTopography started, I think, at least 13 years ago, basically distributing NSF funded, or National Science Foundation funded, lidar and basically building derivatives for people in a manner that really facilitated the growth of use of lidar in the Earth Sciences. And it was a critically important place where skills were built. More recently, NEON, the National Ecological Observatory Network, has done a phenomenal job building long-term remote sensing pipelines for their sites, and they're, you know, an amazing reference implementation of an open science project that's designed to last decades. We partner with CyVerse on high-performance computing access, and they've been phenomenal. They have regular trainings. And The Carpentries and Jupyter are providing really important facilitation of open science as well. OSGeo, of course; personally speaking, OSGeo has hosted a tremendous amount of learning, as we are all at an OSGeo event now. Now, the tool stack that we're using for our applications: the lidR package in the R environment has been really useful. Much of our raster work ends up being done in GRASS. We use FVS, which is software the US Forest Service develops, and we have kind of policy reasons to use FVS for modeling carbon. PDAL, Howard Butler's team's products, are phenomenal, and Jupyter. So what we've kind of identified as our user needs is that there are data plumbing issues, which, you know, really limit a lot of users. So, you know, just basically going through where you can get data and what you can do with it is really important. 
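Since the data plumbing question of where to get point clouds and what to do with them comes up so often, here is a minimal sketch of the kind of pipeline this tool stack supports: pulling a small area from an Entwine/EPT point cloud service such as 3DEP and rasterizing a canopy height model with PDAL's Python bindings. The EPT URL, the bounds and the filter choices are placeholders and assumptions, not a reference to any particular acquisition.

```python
import json
import pdal  # python-pdal bindings

# Placeholder EPT endpoint and bounds -- substitute a real 3DEP/Entwine
# resource and an area of interest in its native coordinate system.
pipeline_def = {
    "pipeline": [
        {
            "type": "readers.ept",
            "filename": "https://example.com/some-3dep-project/ept.json",
            "bounds": "([-13680000, -13679000], [4930000, 4931000])",
        },
        {"type": "filters.smrf"},       # classify ground returns
        {"type": "filters.hag_nn"},     # height above ground per point
        {"type": "filters.range", "limits": "HeightAboveGround[0:80]"},
        {
            "type": "writers.gdal",
            "filename": "chm.tif",
            "dimension": "HeightAboveGround",
            "output_type": "max",       # max height per cell approximates canopy height
            "resolution": 1.0,
        },
    ]
}

pipeline = pdal.Pipeline(json.dumps(pipeline_def))
count = pipeline.execute()
print(f"processed {count} points -> chm.tif")
```

The same JSON can also be run with the `pdal pipeline` command line tool, which is often the easier entry point for the nontraditional users described here.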
And then basically just developing reference implementations and helping users to implement best practices is really important. And then, you know, policy representation has actually turned out to be a major role because, as I've mentioned, we've had difficulty organizing funding for large scale datasets, and we're engaged in that right now, developing proposals for wide area lidar acquisitions. So we have a very specific business case because we're working within this climate resilience space where we have specific demands on developing carbon budgets for forestry projects, because they're tied to climate mitigation funding. So generally speaking, we're trying to plan climate adaptation in parallel to climate mitigation. And in most cases, you know, we find adaptation and mitigation tend to go together. Okay. So what we've done is we've basically developed relationships with NGOs, local districts, other implementation partners, and really engaged them on what their issues are and then helped them to develop products that they need. So this is the most basic example; this was done in GRASS. These are basically the simplest applications that capture forest canopy issues: canopy height models, you know, topographic analysis, and then segmentation to individual trees. And so this is kind of a start down the road, but the techniques we really want to emphasize are, you know, fuller point cloud utilization. But this is basically within the reach of a lot of our users fairly quickly. So this is another project. This is an example where we were basically brought a regional prioritization from the transportation agency: how do we develop fire resilience around a highway? And this is an iconic highway; Highway 1 in California has significant landscape architecture values. And in this case, Caltrans has this regional prioritization. And then this organization, the Redwood Forest Foundation, which is a nonprofit that essentially acquired a bankrupt timber company about 15 years ago and is doing a variety of restoration work on it, developed a proposal for a fuels project, a shaded fuel break on a ridge top that would improve the fire resilience of the region. So we started with this cartoon where Caltrans has identified their basic concept for how we achieve fire resilience around a highway. And then we were given this vector coverage of proposed treatments. And then, you know, basically we took the lidar and went to a normalized point cloud, single tree segmentation, and then basically did distributions of what trees were in each of these units. And then, you know, with the lidar, we can also identify the ease of access. So with this process, we significantly lowered the risk. Is it getting close to time? Yeah, two minutes. That would be great. Thank you. Okay. So we have a lot of different use cases: power lines, a lot of ignitions with power lines, same processes. So we've been working with small private landowners who basically don't make income from their forests and want to engage in forest improvement projects. And these are really important users. 
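The canopy height model and single-tree segmentation steps mentioned above were done with GRASS and lidR in the actual projects; purely as an illustration of the core idea, here is a minimal treetop detection on a CHM raster, with an assumed 5x5 search window and a 2 m height cut-off.

```python
import numpy as np
import rasterio
from scipy import ndimage

# Treetop candidates = local maxima of the canopy height model (CHM).
# Window size and height threshold are illustrative values only.
with rasterio.open("chm.tif") as src:
    chm = src.read(1, masked=True).filled(0.0)
    transform = src.transform

local_max = ndimage.maximum_filter(chm, size=5)   # 5x5 cells at ~1 m resolution
treetops = (chm == local_max) & (chm > 2.0)       # ignore shrubs / ground noise

rows, cols = np.nonzero(treetops)
heights = chm[rows, cols]
# cell centres in map coordinates
coords = [transform * (c + 0.5, r + 0.5) for r, c in zip(rows, cols)]
print(f"{len(heights)} candidate trees")
```

From such seed points, a watershed segmentation of the CHM, or a point-cloud algorithm such as lidR's segment_trees(), would delineate individual crowns and produce the per-unit tree distributions used in the fuel-break proposal.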
So, this is not necessarily a very good example of sustainable forestry, but this is a hydrologic experiment; this is the maximum legal harvest in a hydrologic experiment in California. And what we're advocating for post-treatment surveillance is, you know, UAV-based acquisitions. We do a lot of work with geomorphic grade line assessments, which are riparian restoration work, and this was done in GRASS. We work in oak woodlands. This conifer encroachment issue is a major climate hazard. And legacy forest elements are a huge use of the lidar datasets. This is one of our standard approaches where, you know, removing ladder fuels and reducing the stand density is really important. We have a huge issue with post-fire treatments. So this is a standard product. The dNBR is the differenced normalized burn ratio, and this is a soil burn severity map made out of this. And, you know, basically we use lidar to do geotechnical work in the post-fire environment, assess pre-fire canopy and changes in geotechnical material. So in summary, you know, findable, accessible, interoperable and reusable, FAIR data practices are, you know, critical. We think that providing a venue for long-term data management is really important. And I will haunt the map at lunch if anyone wants to reach me. And I wanted to acknowledge thanks, Rob, for hosting this, and my funders, the Bay Area Council, as well as our partners, Humboldt County Resource Conservation District. And, you know, also the folks I worked with this summer have been phenomenal, and, you know, all the developers that make this possible, frankly. Awesome. Thanks, Tim. Yeah, we have one question. You might have covered this, but how often do you fly airborne lidar surveys? I guess due to wildfire incidents in California the canopy can change quickly, right? Yeah, that's a real problem. Okay, so the standard for what USGS will ostensibly co-fund is an eight-year return interval at QL2, which is, I think, two pulses per square meter. What we're advocating for, but this is not a reality yet, is five-year QL1. And we have a proposal in right now, knock on wood, for the first, well, an early USGS re-survey of eight-year-old data. So in order to qualify for USGS co-funding, until they finish the whole country, what they are after is an eight-year return interval. So many counties are now flying lidar on their own. So it's a...
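For reference, the dNBR mentioned above is straightforward to reproduce once you have co-registered pre- and post-fire NIR and SWIR2 reflectance rasters. A minimal sketch follows; the file names are placeholders, and the class breaks are commonly cited values that in practice are always field-calibrated before a soil burn severity map is published.

```python
import numpy as np
import rasterio

def nbr(nir, swir2):
    """Normalized Burn Ratio: (NIR - SWIR2) / (NIR + SWIR2)."""
    nir = nir.astype("float32")
    swir2 = swir2.astype("float32")
    return (nir - swir2) / np.maximum(nir + swir2, 1e-6)

with rasterio.open("prefire_nir.tif") as a, rasterio.open("prefire_swir2.tif") as b:
    nbr_pre = nbr(a.read(1), b.read(1))
    profile = a.profile
with rasterio.open("postfire_nir.tif") as a, rasterio.open("postfire_swir2.tif") as b:
    nbr_post = nbr(a.read(1), b.read(1))

dnbr = nbr_pre - nbr_post                        # higher = more severe change
severity = np.digitize(dnbr, [0.1, 0.27, 0.66])  # 0 unburned/low ... 3 high severity

profile.update(dtype="float32", count=1)
with rasterio.open("dnbr.tif", "w", **profile) as dst:
    dst.write(dnbr.astype("float32"), 1)
```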
|
Supporting forest climate adaptation planning with point cloud analytics for nontraditional geospatial users In fire adapted forests, three dimensional structural characteristics are critical factors in drought and fire resilience. Public point cloud repositories such as the USGS 3DEP program, Opentopography.org, and in the future the Earth Archive, are critical data infrastructure for climate adaptation planning. As we collectively face escalating climate hazards, the value in deploying operational geospatial tools for managing risk grows. Historical forest management in the Western United States has resulted in declining resilience. To stabilize above ground carbon pools and secure critical ecosystem services, the State of California and the US Forest Service have committed to treating 400,000 hectares of forest per year through 2030. These typically involve strategic mechanical thinning coupled with application of beneficial low to moderate intensity fire. These programs are often planned and implemented through shared stewardship agreements between government agencies and implementation partners from nongovernmental organizations, First Nations governments, and local jurisdictions. Large government land managers and industrial forest operators have typically used commercial enterprise geospatial software for data driven decision support. Software licensing restrictions create arbitrary barriers between partners. The proliferation of nontraditional climate planners and the need for long term reproducible data science products have created an important opportunity for the FOSS4G community. The California Forest Lidar Collaborative has been providing technical support and training to a diverse user community to apply point cloud analytics to a broad range of forestry problems. This program has facilitated the adoption of data science pipelines using Python, R, PDAL, GDAL, GRASS, and QGIS. This results in transparent environmental compliance processes, democratization of climate adaptation planning, reproducible forest data science workflows and increased diffusion of best-in-class geospatial tools. Authors and Affiliations – Tim Bailey Watershed Research and Training Center Track – Use cases & applications Topic – FOSS4G for Sustainable Development Goals (SDG) Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57290 (DOI)
|
Hello, so welcome to the next talk. It's a talk from Arnulf Christl and he should be here now and joining us live, but unfortunately he can't join. He's in the train. I'll show him for a minute, but the internet is not so good there. So we have his talk recorded and will broadcast it soon. And Arnulf will be in the chat, so you can meet him in the conference room and ask questions there. So his talk is the Borg supply chain and it's about the Telekom and how the Telekom brings FOSS projects into stable production. Some words about Arnulf before we start. So most of you might know Arnulf Christl. He was one of the founders of OSGeo in 2006. He is an OSGeo advocate and co-founder of OSGeo. He's working at terrestris at the moment and he is a senior consultant there for the Deutsche Telekom AG. And on the other side he has metaspatial, the platform to coordinate smart mapping, and he is a Scrum master also. So welcome to his talk and hope to see Arnulf in the chat. Hello, my name is Arnulf Christl and I should be in Buenos Aires now, but unfortunately it's not possible. So I wanted to do this live, but again this is not possible because they just cancelled my night train. So I have to take an earlier train and I'm not really sure it will work over the internet connection that we get on the train. So better record this. Actinia, PostGIS, GeoServer, SHOGun, OpenLayers and many more goodies in the cloud. It's a talk by Arnulf Christl, also known as Seven. What are my roots? I am a carpenter and former director and president of OSGeo, entrepreneur and Buddhist scholar. I am Borg, so beware. My affiliations: I'm working for terrestris GmbH & Co. KG, for metaspatial.com, which is consulting for agile product management, and for mundialis. Mundialis are the raster guys, terrestris is the vector guys, they do the web viewing and mundialis the operation, and obviously Borg Inc. So if you have an issue with artificial intelligence, please come to me and I will help you. So we have nine points in an open source tool chain and we'll look at them. So first question is what do we want to do? Next, where are the pain points in the current way of doing things? And we design a new process, I will show in a prototyping demo what is going on. Then we find the right open source tools to make it better, decide on an architecture, putting the pieces together, results in a demo again, and seven of nine just because I had nine points. So I thought I should add this ninth point. What do we want to do? We want to roll out fiber optics in all of Germany because Germany is too slow. We have no fiber optics due to political strangeness in the late 80s and 90s. So everything we have is copper, which is nice but slow. And therefore we really have to speed up the planning process, otherwise we're going to be lost. How to do that? What are the tasks? What do you need to do to roll out fiber optics? 
First you need to collect data, and then you need to analyze and process the data and use the data to position the distributor boxes, which need to have a certain distance from each other, then find potential trenches where you can actually put the fiber optics, and generate permit application forms, because only then can you start to dig and actually put the fiber optics into the ground. And all of this planning process should be automated. Where are the current pain points? It currently is an on-site manual planning, so that means people actually have to walk to a place and take pictures of potential positions for collector boxes. And there are limited scanning options, because you need so many people to drive around and do this that it's just not possible with the current people we have. There is still a lack of data, because Germany is not fully open data, and we have legacy systems, as you can imagine, in a big telecom provider; in the basement there is really old stuff running. We have waterfall project management and maybe we need a new mindset. This will be a little excursion. I'll go into the manifesto for agile software development, which they have decided is the right way to go forward. And what we do here is we put individuals over interactions, pardon me, individuals and interactions over processes and tools. And we need comprehensive documentation, but we think that working software is more important. We want customer collaboration over contract negotiation and we want to respond to change instead of strictly following a plan. That is, while there is value in the items on the right, we value the items on the left more. And as you can imagine, this really is a new mindset, and for a big old company with many, many people and lots of hierarchies and bosses and stuff, it's not easy to implement. But I think we succeeded well. So let's go to the prototyping and have a look at what we have. And this will be a kind of live demo. So I'll switch to QGIS. And we'll see what you always see when you start a GIS project in the wild: it's an OpenStreetMap WMS. And just to show, this is Hamburg. A few data sets that we use for input: we obviously need the buildings where we want to put the glass fiber. And we need to have the land parcels, and then we need to know who owns which part. So this is probably public and this is probably private, and all the other ones, we're not quite sure. So there's some risk in doing that. But that's all the data that German open data currently gives us. So this is part of the basic data that we need. And then we have more data, which is a lot more detailed. That's this data. And this has been collected by a surface car, which actually runs on the ground. And we've seen this before. And these are objects that have been detected by the KNN, which is the Künstliches Neuronales Netzwerk, an artificial neural network. And it has also detected different surface types. So we have objects and we have surface types. And this is great. And all the planning areas that we are going to use have been traveled with this surface car. But it's not enough, because as you can see in the back here, no car can drive there. So we have no data. So additionally, we have added orthophoto classification, which complements the KNN data here. And you can see there is some overlap in the data, but there are also some differences. So obviously, we also have orthophoto data. Let's take out the other data so that we can see it. Here we go. 
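As an illustration of what such an orthophoto surface classification can look like in its simplest form (this is not the Telekom pipeline; it is a per-pixel random forest sketch, and the file names and class codes are assumptions):

```python
import rasterio
from sklearn.ensemble import RandomForestClassifier

# Toy per-pixel surface classification of an orthophoto. "ortho.tif" and
# "training_labels.tif" share the same grid; in the labels raster
# 0 = unlabelled, 1 = asphalt, 2 = concrete, 3 = vegetation, ... (assumed codes).
with rasterio.open("ortho.tif") as src:
    bands = src.read()              # shape: (n_bands, rows, cols)
    profile = src.profile
with rasterio.open("training_labels.tif") as src:
    labels = src.read(1)

X = bands.reshape(bands.shape[0], -1).T   # one row per pixel
y = labels.ravel()
train = y > 0                             # only labelled pixels

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
clf.fit(X[train], y[train])

predicted = clf.predict(X).reshape(labels.shape).astype("uint8")

profile.update(count=1, dtype="uint8")
with rasterio.open("surface_classes.tif", "w", **profile) as dst:
    dst.write(predicted, 1)
```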
And if we go into a detailed situation like this, then we can see that the KNN data is quite detailed here. We can see this is asphalt and this is concrete. And these are the, let's take out the digital orthophotos again, these are the floors. And what we need is the trenches. And the trenches have to have a certain distance from the outside of the places in connection to the, where are they? You can see this is really a slow process. If you want to do it animated: this one here, this is the road area. And from the road area here, you have to measure. And this is going to be, where is my measure tool? I lost it. There it is. This needs to be around 40 centimeters. So from here to there, it's around 70 centimeters. And that's the distance which needs to be taken from the outside of the building. And the building is going to be here. I can put a building here. So it's a conflation of building data and of cadastral data and of road data. And on top of all this, you can now see in the orthophoto how it all comes together. Come on, orthophoto. There you are. Okay. So let's go back to the presentation. And after the prototyping, we know what we're doing. We're trying to position those collector boxes. And we're trying to find out which is the best place to put trenching. We need to design the new process. Because what we do in the trenching is we collect data and collect even more data and collect even more data, and then conflate this and process this and calculate and automate everything. What do we have to calculate? We have to calculate what the cost of these different lines is going to be. So if we look at the different costs here, you can see that depending on the surface, it's more expensive or less expensive to dig there, to dig a trench for a fiber cable there. So this information is basically deciding which path you're going to take when you dig your new trenches. Okay, data, collecting data. Open data in the states of Germany is a little not so nice, because some of them are open, some of them are partly closed, some of them are really closed. And so it's a little difficult to get an overview. In places where we can't get any data, or we don't get any open data, we obviously go and take some OpenStreetMap data. And where we can't get those, for the processing of the surfaces, we also need orthophotos. And then we have this surface car which drives around and has cameras here for panorama pictures, which we'll see later, and also a lidar scanner at the back and a high precision GPS so that we will know where we are. Okay, now we have the data and we need the tools. What tools do we need? I think this is the easy one. QGIS, it's great for prototyping; ETL with GDAL, OGR and all the other tools. And yes, we have some FME in there. There is GRASS and actinia; there was an actinia workshop at the beginning of this conference. Provisioning happens with GeoServer and GeoNetwork, and viewing and planning is done with SHOGun and fiber maps, which is based on OpenLayers, and we operate the whole thing in CI/CD with Kubernetes, monitoring with Kibana, Grafana and all the other stuff you get in the cloud. So let's look at the spatial data infrastructure: there is some OSM data coming in here with the OSM importer into a PostGIS database. This is like our core vector database. Then there's cadastral data, converted with PostNAS or with FME, to go into the PostGIS database, and it is provided with GeoServer as a geospatial service. 
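To make the cost-driven trench routing described above concrete, here is a minimal least-cost-path sketch over a classified surface raster. The class-to-cost table, file name and start/end cells are made-up placeholders, and the real planning obviously involves far more constraints than a single cost surface.

```python
import numpy as np
import rasterio
from skimage.graph import route_through_array

# Least-cost trench routing over a classified surface raster.
# The cost table is invented for illustration; real per-surface digging and
# restoration costs come from the planners' price lists.
cost_per_class = {1: 90.0,   # asphalt: expensive to open and restore
                  2: 70.0,   # concrete
                  3: 15.0,   # unpaved / green strip
                  4: 30.0}   # paving stones

with rasterio.open("surface_classes.tif") as src:
    classes = src.read(1)
    transform = src.transform

costs = np.full(classes.shape, 500.0, dtype="float32")   # unknown surface: avoid
for cls, cost in cost_per_class.items():
    costs[classes == cls] = cost

start = (10, 10)     # (row, col) of the distribution box
end = (250, 400)     # (row, col) of the building connection

path, total = route_through_array(costs, start, end,
                                  fully_connected=True, geometric=True)
# map coordinates of the proposed trench cells
coords = [transform * (c + 0.5, r + 0.5) for r, c in path]
print(f"{len(path)} cells in the cheapest path, cost index {total:.0f}")
```

route_through_array comes from scikit-image and returns the cheapest 8-connected cell path, which could then be vectorized into a trench proposal for review.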
The metadata is collected by GeoNetwork opensource, and later on fiber maps will use this metadata in the web client to actually access this service to get to this data. So it's all connected. If you look at it from the processing perspective, you have T-surface data which comes in through the Künstliches Neuronales Netzwerk, which is the artificial neural network, which will produce outcome data resulting in the classified surface cover, and the same for the orthophotos, which are calculated with actinia, which leverages GRASS modules to actually do this, and all of this, if it is a vector result, also ends up in the PostGIS database. Next is the planning of distribution boxes and potential trenches, and this happens in fiber maps, which is a web client application which accesses these geospatial services which take the data coming out of here, and it can actually also remote control actinia, which can then pull new T-surface data through the KNN and FME-classified surface cover and put the results in here, so that fiber maps can continue to collect this, because we don't calculate all of Germany all the time; as you've seen it's very detailed data, that would probably take half a year on very big machines, and then the data is already outdated. So this is an on-demand processing chain. Okay, designing the architecture: the OTC, the Open Telekom Cloud, was a given, no Azure, no AWS, it's the Open Telekom Cloud, and we had a Red Hat OpenShift which is currently transitioning back to Kubernetes, we have a GitLab, and Ansible and Puppet, and we provisioned the whole thing with Terraform, which has been replaced by Steep recently. The whole thing is deployed with Helm, and if we put the things together it looks like this: we have a setup in a workbench in GitLab and the Open Telekom Cloud with OpenStack, then we provision with Terraform lots of virtual machines, in these we have Red Hat OpenShift with master and infrastructure nodes and bastion, IDM, SFTP proxy and fdb and duty b, these are configured with Ansible and Puppet, and then we deploy them with Helm so that we get Docker images, and the whole thing is a CI/CD pipeline where devs push stuff into the app repository, which is compiled, unit tested, the image is built and pushed into the Magenta Trusted Registry, and on the other side the ops push the prod and staging SDI, which pulls the images into dev, staging and prod through deploy, diff and lint. So the whole thing is running with a workbench in GitLab. It's really a nice setup, but it took about two years and I don't know how many years of work in person days to actually get this up and running. So, fiber optics rollout planning on speed, and this is going to be a small live demo. So this is how it looks like when you plan these things, and all of these little triangles and green dots and gray dots are collector boxes, and we're going to now move these collector boxes, and this is the T-surface data which has been collected with lidar data and panorama images, and we'll have a look at one of those panorama images here. So we start the Standortsicherung, which is about the place where you can put the collector boxes, and you can see that there are some images, so we want to have a look at the image in the real world. So what we do is we move over to Fiber 3D, this is the position down there where this one is we're looking at, and when we switch to Fiber 3D, this is a different web application, it's fired up now and it loads the point cloud and panorama images and positions this object in the middle of it, so this 
object is actually a vector data image which is projected into the panorama images at the right point this is it you can move around in the image and we can also there you can see the cone of the direction where we're looking at and if we zoom into the picture then we will also see now we're zooming into the picture that the cone is adjusting its size and direction to show where we're actually currently looking at so we always have the orientation in the map if we move to a different position we can just click there and we'll see a different view of the next road crossing that we are here and this is it and I'll stop here for a moment because now you can see the t-surface card this is the t-surface card which takes the data with the lighter scanner and the cameras taking the pictures and for a moment you will see now the point cloud there it is and then all the pictures which are stitched on top so that you have a nice 3d view now we will go back to the box that we have just looked at and we will move the box to a new position it's wrongly positioned in the garage so we will take it out and put it inside outside of the garage again there we go I'm sorry that this is a recording but I was afraid that it wouldn't work for presentations not nice so once we've positioned it there we can go to the next step and store this information if we were happy with the position now we can take pictures this is like walking there and actually taking a camera and taking a picture of the place where we want to put the box but the box is already there which is pretty cool and for the permit we need different perspectives so we move a few meters onwards and then turn around and look better at the box and take the next picture and again it's like positioning we can move it we can even zoom in or out whatever we need to get the right perspective for the permit which we will let it produce here you can now see the old and the new position and from where it comes we want to save these pictures so now we're processing the pictures and putting them back in and here they show up so this is our processing interface and now we want to make a report so we select this collector box to produce a report and this report is the PDF which gets sent to the public administration which will then evaluate whether the position of this box is allowed at the current phase where it is shown so once you get the PDF you can open it up and you see here that you have a number the postal code the name of the address and this is the box there is quite a few different sizes standard sizes of boxes that you can position and you have an overview map and this is the two pictures where you can see the before and after and before and after of this position and this is what actually is enough for the community to allow whether to put this box in the corresponding place or not so let's fast forward to another perspective this is now looking at another place in where you don't have positioning it's just you can you go any place that you've traveled it's like google street view it's not much of a difference i think the quality is much more detailed obviously so then let's have a look at the planning process itself so this is the planning process and if we go further zoom further in and we can see the potential trenches in action so this is the trenches this is the lines that we need and this is the connection right into the building and these one out here they have been calculated on the basis of this other data that we've seen before so that was a 
long talk and I'm coming to an end. This is the team; currently it's changing every now and then because we're in an agile process. There's terrestris, mundialis, metaspatial, Fraunhofer, geomare, Camptocamp, this is all open source guys, and there is con terra, that's an Esri company, which is okay, we can work with them together. So it's a conflation of different technologies. Thanks a lot to the team and thanks a lot for your attention, and I hope you can enjoy FOSS4G 2021. See you next time, bye bye. Okay, so thanks a lot Arnulf for your great presentation showing the platform and the challenges that are going on here. A big applause for Arnulf please. And as I mentioned before, he cannot be here, so you can meet him in the chat, and if you have questions you're welcome to ask questions there. So one question was posted in the questions section, I forwarded it to the chat. And have a safe journey Arnulf, and hope to see you in person at the next conference, and all the best. And we are looking forward to the next presentation in a minute, so stay in our room to hear about traffic in Buenos Aires shortly
|
This talk focuses on aspects of transitioning Open Source software projects into productive environments. The Deutsche Telekom AG has set out to use FOSS software to build a comprehensive geospatial data management and processing environment based on cloud technology. Some components (like PostGIS and QGIS) are used as COTS (commercial off the shelf) products. Others (like GRASS GIS) are used as libraries to implement intricate parts of an incredibly specific process to dig optimized trenches for fiber optics cables throughout Germany. The project uses agile methods to implement this architecture with FOSS products and projects and hand crafted implementations to achieve it's objectives. If we use the analogy of a bridge across a deep valley to achieve the objectives, then it feels like going full speed on a downhill bike, jumping into thin air and reaching the other side of the valley in a truck carrying internet access for millions landing on a concrete bridge that has manifested halfway through. A bit frightening, but so cool! That's FOSS! Before going into implementation details we have to clarify some terminology: In the physical world, a project is planned for a specified time with a specified budget and clearly defined objectives. Think: building a bridge. It will be unique because it spans across a specific valley and has to account for a specific geology and specific use (people, donkeys, cars, trucks, trains or electrons). Building the bridge will require a plan, excavators, trucks, steel, concrete, asphalt, cables and so on. When the project is "done" people, goods and electrons can cross the bridge: Even donkeys can use it. Another example for the same terminology in a different context is the process of making a new car. This will start with a project focused on achieving the objective of making a new car. Once the car has become a reality, the project ends. The product itself gets reproduced (and sold) as often as possible to allow people to drive across the bridge which was the end product of another project. In the FOSS realm things are a Megabit different. A FOSS software project is an ongoing effort to create software, typically by a diverse and sustainable developer community. Some projects have been around for decades (think GRASS GIS). They are never "done". Others stick around for a few years while they are needed and then either get outdated or replaced by a contender (see Community MapBuilder a good decade ago). In general, when it gets started, there is no predefined end to a software project. Sometimes the founder of a software don't even know that they are founding a software (think Gary Sherman writing the first lines of what today is better known as QGIS). Sometimes Open Source software projects become part of a product. This typically happens in a downstream effort by a completely different set of actors (think Google's Kubernetes turning into the core of RedHat's Openshift). When it comes to combining all of these realms within a project aiming at achieving an objective by using FOSS software projects – things can really become a bit confusing. Hopefully we are going to be able to clarify a few things as we go through our presentation(s). Authors and Affiliations – Arnulf Christl, Metaspatial Senior Consultant, terrestris GmbH & Co. Kg., Germany Track – Use cases & applications Topic – Business powered by FOSS4G Level – 2 - Basic. General basic knowledge is required. Language of the Presentation – English
|
10.5446/57291 (DOI)
|
Hello everyone, we are back. I'm adding Christian and Marc to the stage. We are now going to hear the next presentation, which is the state of GeoExt along with an outlook on its future, by Marc Jansen and Christian Mayer. Marc is a developer for OpenLayers and GeoExt, a conceptual architect for react-geo, GeoStyler and more. He is a speaker and a conductor of workshops, he is also an OSGeo Charter member and a general manager of terrestris and mundialis. Christian is an engineer for geoinformatics and surveying. He is an open source geospatial enthusiast, founder and CEO of meggsimum. He is also an OSGeo Charter member and working on GeoExt, Wegue, GeoStyler and OpenLayers. So thank you guys for joining. I'm going to start the video in a while and we are going to come back for questions after the video. Enjoy. Hello everyone, welcome to our presentation, the state of GeoExt along with an outlook on its future. So let us briefly introduce ourselves. My name is Chris, I'm the CEO and founder of meggsimum in Germany, and here with me by a remote connection is Marc from terrestris. Hi, Marc. Hi, Chris. Thanks for introducing me. So I'm also from Germany, the general manager of the two companies terrestris and mundialis. And I think, Chris, you want to maybe say something about Seth. Yeah, also Seth Girvin helped us to prepare these slides. He is a geospatial developer with Compass Informatics in Ireland. We are all working together on the GeoExt project, we are core developers and members of the project steering committee of the GeoExt project. And we share the love for geo and open source software in general. And fun fact, we only met in person, all three of us, one time. And nevertheless we are working together very successfully, let's say for the last couple of years. And as you might have noticed, our abbreviated names are all 10 characters long, also a fun fact for the nerds among us. Yes, back to the talk, the topic. This talk will introduce you to the GeoExt project, and we'll have a little focus on the recent changes of the last two years since the last FOSS4G has taken place. We will try to give an outlook on what will come in the project, and it's merely intended for people without too much previous know-how of the GeoExt project. So if you are an expert, you might listen, but you might get less news than the other guys. So maybe, oh, so, yeah, sorry, we have to synchronize a bit with the scrolling of the slides, but don't worry. But we do not cancel the recording now. And yeah, so what you might already know is: what is GeoExt? GeoExt is an open source project which enables building desktop-like GIS applications for the web. And it combines the GIS functionality of OpenLayers with the user interface savvy of the Ext JS JavaScript library provided by Sencha. So the two base libraries: OpenLayers, the first one, is a high performance, feature-packed library for all your mapping needs. And it's often called the Swiss Army knife for JavaScript mapping. It's an OSGeo project under BSD license, and it supports many data types, layer types, custom controls and interactions. All you need to build your fancy slippy maps. Ext JS is a JavaScript framework, one of the most comprehensive frameworks for building data intensive cross-platform web and mobile applications. And one of the biggest advantages is that it includes a UI bundle which has 115 or 120 UI components to build feature-rich web applications, for example grids, charts, layouts and all that stuff. 
So GeoExt itself has some kind of dual licensing. We have this GPL version which is hosted on GitHub. And then if you buy a commercial license of Ext JS, you can use GeoExt under the BSD license, so that GeoExt won't infect your commercial project. So if you have any questions about that, we have an FAQ page, and also Sencha, as the host or the maintainer of the Ext JS library, has an FAQ page which has really interesting information about that. And this is how GeoExt could look like in general. So what you see on the right side, we have an OpenLayers map which nicely integrates into a basic layout. And on the left side we have an Ext JS grid which combines the features you see on the map, also listing the attributes of the listed features, and you can do some pagination and filtering and all that stuff. And as I said, it combines the savvy of Ext JS and the mapping needs of OpenLayers. And here for completeness, there were several other reports at previous FOSS4G conferences, which are here as a kind of link list where you can look things up later on if you see that slide after the conference. So there was some project news, what happened in the last two years. So we are now an OSGeo community project. Since fall of 2019, under the lead of Seth Girvin, we were granted the status of an OSGeo community project, which was a huge step to getting this acceptance and this badge. Ext JS news, or let's say some sort of news. So Sencha was acquired by IDERA in 2017, which isn't really hot news anymore, but somehow brought some uncertainty about what happens to the project, and we can say okay. In the meantime, there was a release of a new major version of Ext JS. It's version 7; the current last commercial version is 7.4, the same as the community edition, and the last released GPL version is version 7. There are some sites where you can pick up the latest versions, which is not always that easy. And there are no concrete plans for version 8, as was stated in the roadmap update of Sencha, but we are sure that things are happening under the hood in the meantime. OpenLayers news: there is always continuous development happening on a really, really high quality level, so OpenLayers is getting better and better due to its community members. So the current version is 6.6.1. And there's also a talk at this FOSS4G conference, OpenLayers Feature Frenzy, by Andreas Hocevar and Tim Schaub. The current version of GeoExt is 4.0.0, which was a major step. And then at the last FOSS4G in 2019, we were stuck at version 2.3. So version 4. So the biggest change is the integration of OpenLayers version 6, which lasted very long and was the reason to bring in semantic versioning for GeoExt as well, so there are no breaking changes of GeoExt between 3 and 4. It's such a huge step from OpenLayers 4 to OpenLayers 6 that, yeah, we agreed to name it version 4. And therefore we had to get rid of the 3 in the project name. So we renamed the repository from geoext3 to geoext, and just a little hint, it's no good idea to put a version name in your project name. So we're not going to do that anymore. So if there are any changes regarding OpenLayers, be sure to have a look at the release notes which are linked here. So, the new features: a feature selection model has been introduced in version 4, which allows you to automatically synchronize a grid row selection with a selection on the map and vice versa. 
Printing stuff with MapFish was improved, so we can now print labels with offset, and we can now restrict the vector features for printing to the map extent. Also, what was introduced in the last years was huge support for WFS, and this was continuously improved. So we now support remote sorting, and we have fewer implementation specifics and better vendor-independent WFS support. Also, we had a focus on OGC filtering: we now support spatial filters, and we made the existing non-spatial filters more robust. For example, we now have very stable combination methods for the filters. And what you usually don't see and cannot present very well is the work we did under the hood, so we had a lot of bug fixes and hardening. We introduced more config options. We moved the continuous integration from Travis to GitHub Actions, and we improved the test coverage. So, Marc. Yes, thank you Chris for giving us a rush through what GeoExt is and how it's created and what the base libraries are. So, the next part we want to look at in this presentation is the ecosystem around GeoExt, because it's important to not only have a look at GeoExt to build your applications, but probably there's something that you can pick from these talks around the ecosystem of GeoExt. Okay, so in case you want to work with GeoExt, you use npm to install it. That's the recommendation of the developer team. This has changed, so GeoExt is quite an old project now, I don't remember when the first version was out, but it's a bit older now, and we tried to, of course, use the modern ways or adapt to the modern ways of installing or resolving dependencies. This is not actually new. So we were also able to do this in 2019 when Seth gave the last presentation on GeoExt. But I still see some customers using GeoExt in a non-optimal way. So if you are one of those, please try to use it like this: npm install @geoext/geoext. So if you want to have a look at GeoExt, how do you do that? Well, the easiest thing to do is to have a look at the homepage, of course, but in case you already have access to the OSGeoLive, which is basically a project where many OSGeo projects are bundled together in one self-running system, there's also GeoExt on it. It has a project overview where you can learn basically the same things that we taught you here or teach you here. But there's also a nice quickstart, something of a tutorial. And when you do that, you'll create basically a demo application which looks like this. And then you can also see the thing that Chris mentioned with fewer vendor specifics baked into GeoExt, because this one is fed by a MapServer backend. And previously, due to historic reasons, many of the things inside of GeoExt were based on, well, obviously the standard, but sometimes it had a flavor of the GeoServer vendor specifics that sometimes ran into your project. So this one is based on MapServer and it shows quite a lot. I think in two hours or something you can complete this tutorial. It's on the OSGeoLive. So something else that belongs to the ecosystem are other libraries around GeoExt itself. So there's one library that's called BasiGX. It's a bit of a higher level library than GeoExt. It has a focus on user interface components. So for example, nice forms to add WMSes, and digitizing tools and stuff like that. It's on GitHub and you can have a look at the API docs or the code. 
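Coming back to the WFS feature store improvements mentioned a moment ago: the remote paging and sorting the store drives from the grid ultimately boils down to standard WFS 2.0 GetFeature parameters, which can be reproduced outside the browser as well. A minimal sketch with Python requests; the URL, layer and attribute names are placeholders, and the exact sortBy syntax can vary between servers.

```python
import requests

# Roughly the kind of WFS 2.0 GetFeature request such a paging/sorting store
# issues under the hood. URL, layer and attribute names are placeholders.
params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "workspace:roads",
    "outputFormat": "application/json",
    "count": 50,          # page size
    "startIndex": 100,    # third page
    "sortBy": "name",     # server-side sorting; direction suffix is server-dependent
}
resp = requests.get("https://example.com/geoserver/ows", params=params, timeout=30)
resp.raise_for_status()
collection = resp.json()
print(f"{len(collection['features'])} features on this page")
```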
Okay, so now there's another library that we, Chris and me or Seth, oftentimes use when we create projects with GeoExt: GeoStyler, when it comes to styling geo data, and that's a question that we get asked often, a task that we have to solve often. So GeoStyler is here an example of one great JavaScript library which doesn't have any Ext JS library dependencies, but it can be integrated greatly, even in GeoExt or GeoExt based libraries. So once you decide to go with GeoExt you're not bound to it, but can work with modern JavaScript user interface libraries like GeoStyler and many others. It's easy to combine them and profit from all of those aspects. There are a couple of links here; make sure to watch the dedicated GeoStyler talk. I think it's later today, but better check your schedule, because I don't have it in my mind. This is how it can look like: this is a window coming from Ext JS and inside that window there are, yeah, as you can see, points being styled with different scales and several filters and so on and so forth. This is GeoStyler inside of this GeoExt component. Okay, so even more abstract is this Compass Informatics MapView project, which is on GitHub. There's a nice link down here and you can have a look at the sources also. This can be used as a starting point or a blueprint for new applications that combine all of these: GeoExt, BasiGX and GeoStyler. There are also nice digitizing tools and cool layer tree plugins, for example, but it can also be used in itself as a sort of a library, so you can extend that further. We will see examples later in this talk of how this can then be used and configured further. So this is how this thing looks: this is the CPSI demo system, so it's a bit generic, so some of these tools here are coming from OpenLayers, some are from BasiGX, some are from GeoExt, so this brings it all together. So it's a good resource if you want to learn how to put all the information together into one running application. So that was it for the ecosystem around GeoExt, and how are these libraries being used in production applications? We have picked a couple of production systems that we have helped to create and that are now in different stages of development. So this is a project that Compass has created. It's the public lighting system. It's there to, you know, manage over 300,000 public lights in Ireland, and the number one priority for this system was that it has a great and very reliable and stable and robust database and nice mapping, and yeah, stability was the top one thing. And this one uses GeoExt version 4 and Ext JS version 6, so the newest versions of those libraries basically. This is how it looks like; you see there are some street lights here that are being managed, a grid, Google Maps integration and so on and so forth. It doesn't look so different from the CPSI MapView project that we had a look at earlier. So the same thing is basically used. And well, this other project here is used to manage local and regional roads. And this is a long time user of GeoExt; the first version used the very first version of GeoExt and it has been updated continuously by Seth and others. Yeah, now it also uses the newest version. This is how this one looks. You can see it's, well, a bit table based, let's say form based; it's an internal application, not so much user-centric. So what this last application makes heavy use of is the WFS store that got some great additions and new features. 
So we have two-way binding basically. So when you filter the grid, the layer gets filtered as well. You can page, we have already seen a part of it. It makes heavy use of this, and a nice feature is also the export of these grids to shapefile or Excel. So the next production application is the BfS Geoportal; BfS is the German abbreviation for the Federal Office for Radiation Protection. And well, the BfS uses it to publish its open data. It's a nice application where also a lot of different, sorry, libraries are being included and combined, so the charting, for example, is done with D3.js and so on and so forth. So you see this one looks a little bit different. But the trained eye can see it's made with GeoExt. The last production example I want to show is the Malawi disaster management portal, created mainly by Jakob Miksch, one of the colleagues of Chris from meggsimum, and he gave a nice presentation in German at the German chapter conference FOSSGIS. There are also links to the sources and the running instance here. And if you open these links, this is how it looks like. You see, there is a nice cartographic touch to it, but also there are some other components from GeoExt that you might recognize, like the buttons and so on and so forth. So that's it for these production settings. We come now to the last part of the presentation, which is the outlook. How will GeoExt continue to develop in the future? So what do we plan? So we plan, or we have non-concrete plans, of upgrading to Ext JS version 7, the GPL variant of that library. And we also want to, well, follow the path of OpenLayers, so when OpenLayers gets new releases we want to follow them and try to be as recent as possible. So the base libraries, when they change, we want to, you know, follow. We want to use probably our own packaging tool so that we no longer depend on the Sencha Cmd tool, which is great, but it's not as cool as the newer dependency or other building mechanisms in the JavaScript world. Of course there's going to be a lot of bug fixing and maintenance. So for all these things to happen, we would very warmly welcome any funding. And so I think I added the beer, Chris added the wine, and the whiskey is of course from the Irish guy. So, in case you want to fund anything of that, just contact us, there are links in this presentation; we would be happy to do it, but we cannot do it, you know, on our own. So, the takeaway message: GeoExt, this community project of the OSGeo community, is alive and well, you can use it to create awesome applications, and if you want to have desktop-like GIS applications, it's probably a good choice, so you can try it easily on OSGeoLive for example. Once you have done that, we invite you to get more involved in the code and in the project. You can, you know, subscribe to the mailing list, please fork our repo, test it, file bugs, fix our currently broken examples; some of them are broken right now, but either Chris and me will fix it or you will probably do it. With that said, please help us make the next version of GeoExt become the GeoExt version you want to have. And thank you for your, yeah, thanks for everything, dear listener of this virtual talk, and if you have any questions, Chris and me, or at least one of us, will be in the session now and are happy to take your questions. Please contact us, me or Christian. Thank you very much. Thank you. All right, we are back. 
Thank you for the presentation. We have, sorry about that. So we have basically one question on the chat, and I'm going to put it here. So, what is the strategy of the project with the broad adoption of modern JavaScript frameworks like React and Vue and others? So first, or shall I start? Feel free, I'll interrupt you as always. Yeah. Thank you for this question and for proposing it. So, it's a bit expected when you create a talk about something that's based on ExtJS that someone asks about React, Angular, Vue.js or whatever. There's a couple of ideas that came to our mind while we were discussing this in the background, even while we saw that this question was being asked. So, ExtJS has some, let's call them, integrations with, I think it's AngularJS and React. That is something that we will keep an eye on to see where this leads. So this makes it easier to interact or integrate with applications written with those awesome JavaScript libraries. But we are not too confident right now, although it might change, that this is the answer to all, you know, like future directions. A second point that I wanted to add is that we saw in the presentation that it's doable to combine your GeoExt applications, or ones that are built on BasiGX or whatever, with more modern applications or libraries. That's totally doable and might also be a good idea to focus on more. And the third and last point from my side, and then Chris can, you know, like contradict everything that I said and say switch to everything new as soon as it comes, is that both Chris and his company and me and my company get a surprising amount of requests with regard to GeoExt. So GeoExt is used in environments which don't change that often, and right now we can still make good applications with it that do the job and do it well. And once you get the hang of it, it's also quite nice to work with ExtJS, even though it has some downsides. I'm not a fanboy or anything, but it works right now and will keep working for the time being. Chris, please. Yes. Yeah, so having those wrapper components which integrate the ExtJS component library into React or Angular is not a really good idea, just from my point of view. So if you want to use React or Angular, some of the modern hip guys in the JavaScript world, just use them and use them with their ecosystem. And if you have the need for, let's say, a specialized technical desktop-like application, maybe ExtJS is your tool of choice, and then use it with its own component library. But if you're into React or Vue or any other thing, just use those and do not try to mix them, because then you would be sticking with one framework while using the UI library of the other, and they do not match in their patterns and stuff like that. So I'd say choose one and use it for good reasons. That's the strategy I would follow, and knowing that the JavaScript world moves quite fast, it might well be that Vue or Angular is gone in a couple of years and the next framework comes around, and then you have to do this integration work again because you stuck with this one and combined it with that one.
This is even the struggle we have with upgrading the OpenLayers and ExtJS versions to keep GeoExt on track, so that it stays in sync with the base libraries, and if I think of having React and all those other things within it as well, I couldn't sleep well, I think. Yeah, both me and Chris don't always use GeoExt for everything; we use what fits best. Yeah, so we have different tools in our stack and our portfolio, so if you want to do something more modern, we have solutions for React and for Vue.js, for example, and then we choose those. Thank you for the answer. We have one more question from the chat: how would you compare GeoExt to other desktop-like GIS applications out there? That's a good question. I say that to all the questions, of course, you already understood that. So, which other desktop-like GIS applications do you think of? I think being desktop-like isn't the trend right now; the more disruptive thing, of course, is to look a little less like the desktop applications that you used to know. So, from the top of my head, which other desktop-like GIS applications come to your mind, Chris? I was asking myself the same while reading the question on the other screen. No idea. So, I'm not sure about it. As you said, I think it's easier to be desktop-like, so even though you have some mobile-first stuff, having this fit perfectly on the desktop is the easier way than having it the other way around. So I'm not sure how to answer this question, to be honest. I would say you can create a very good desktop-like GIS application if you want to, but one thing that most people miss, and we also didn't mention it so far, is that you're not bound to that layout. So there are various options that you have to make your GeoExt app look a little less like a desktop, in case you want that. I think sometimes we in this tech bubble forget that a lot of people are used to a lot of desktop programs, and not all GIS applications are written for the, you know, like JavaScript and, I don't know, web natives out there, but also for some other target group, for people that, you know, like work a lot with standard desktop tooling, and for them it might make sense to create an application that looks familiar to the things that they know. So if you put users first, it's probably also sometimes a good idea to have this, I wouldn't even say unpolished or old or outdated look, but you know, it can be made, you know, like desktop-like and also nice and appealing. So I hope this answers the question; if not, please ask another one. Thank you. There's another one, but we don't have enough time, we are one minute before the end. So maybe this can be answered on the chat. Yeah, we will answer something on the chat. Thank you very much again for your presentation. You're welcome. Thank you for hosting us. Yeah, thank you for your work. And nice to have you here.
|
GeoExt is a JavaScript library combining the OpenLayers mapping library and the JavaScript framework ExtJS. It became an OSGeo community project in 2019. The talk will give a brief history of the project, and a summary of its dependencies and versions. Several new features recently developed for the latest GeoExt release will be presented. The talk will include an overview of two additional Open Source JavaScript libraries which bring even more power and functionality to GeoExt: BasiGX and GeoStyler. BasiGX is a higher-level JavaScript library that builds on top of GeoExt and focusses on advanced GIS user interfaces and mapping tools for the web. GeoStyler – in itself an OSGeo community project – is a JavaScript library for cartographic styling of geodata, and can be combined with a GeoExt solution to apply several formats to layers, e.g. SLD (Styled Layer Descriptor) files. The talk will include examples of real-world projects using GeoExt, along with recommendations on what types of projects are most suitable to be developed using GeoExt and its associated technologies. We'll discuss how and when newer OpenLayers and ExtJS versions will be supported, and how to combine GeoExt with other JavaScript packages. Finally a roadmap for the future of GeoExt will be outlined along with how developers and users can get involved. Authors and Affiliations – Marc Jansen, terrestris Seth Girvin, Compass Informatics Christian Mayer, meggsimum Track – Software Topic – Software status / state of the art Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57292 (DOI)
|
Alright, so yeah, welcome to the session. This is a session about QGIS. We've got some great talks lined up here. So my name is John Bryant. I'm based in Fremantle, Western Australia. I've been involved in organizing FOSS4G events in Oceania, Australia, New Zealand, Pacific Islands for the last four years. And I work with open source geospatial software at my company, Mammoth Geospatial here in Fremantle. And yeah, super excited about the talks we've lined up today. We've got six talks from QGIS experts in Europe and North America, and they're going to talk to us about plugins and metadata, field data collection, COGs, AI, and about QGIS itself. And so first up, we've got Kurt Menke, who's going to talk to us about the very best new features of QGIS 3.x, which I'm sure is a highly anticipated talk by this audience. So the talk is pre-recorded, but Kurt's actually joining us here today from Denmark, and he'll be answering questions at the end of the talk. So please add your questions to the Venueless platform. So yeah, I'll queue up the video here. And yeah, so Kurt's standing by, so add your questions to the chat or to the questions in Venueless. So here we go. Hi, my name's Kurt Menke, and I'm going to be talking about the very best new features of QGIS 3.x, which is one of my favorite things to talk about. And for those of you who remember me being based in the United States, a lot changed during the pandemic for a lot of people, and it was the same for my wife and I. We decided to move to Denmark and moved here in January. And so I'm now based in Denmark, and I work for a fabulous open source geo company named Septima based in Copenhagen. And I want to spend a little bit of time framing this talk. I'm coming at this in part as an author. So in the spring of 2019, I published the book on the left named Discover QGIS 3.x, a workbook for classroom or independent study. And later that fall, I published the book on the right, QGIS for Hydrological Applications, Recipes for Catchment Hydrology and Water Management, with my colleague Hans van der Kwast. And one thing I do as an author is track new features to assess when I need to begin thinking about updating these books. So now I'm going to break out of this presentation for a second and bring up QGIS to finish framing this talk. OK, so here I am in QGIS, and I have layers here that represent the release schedule of all the versions of QGIS since 3.0, with long term releases having their longer time frames, and then the other releases in between. And we have the indicator in the indicator space that shows that these are temporal layers. And I'm going to activate the temporal controller panel and start animating the display with time. And so as I do so, you can see you get some nice time based symbology as each version is highlighted. And you can also see this is when Discover QGIS 3.x was released, right then, right before 3.6 Noosa. And if I continue playing this, we're going to see a green box appear. And this is going to be the period that we're going to cover in this talk. We're going to start with features that were released about a year ago, when 3.14 was released, and then cover 3.16, 3.18, and 3.20. So to prepare for this talk, I pored over the visual changelogs and made a categorized list of all the biggest new features developed in the last calendar year. And it's simply impossible to cover all of it in one 20 minute talk.
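The temporal-controller demo above is driven by per-layer temporal properties. As a rough, hypothetical PyQGIS sketch of that setup (the layer and field names are invented for illustration, not taken from Kurt's project), it might look like this:

```python
# Hypothetical sketch: mark a vector layer as temporal so the QGIS
# Temporal Controller can animate it, roughly as demonstrated in the talk.
# Layer and field names ("qgis_releases", "release_date", "end_of_life") are made up.
from qgis.core import QgsProject, QgsVectorLayerTemporalProperties

layer = QgsProject.instance().mapLayersByName("qgis_releases")[0]

props = layer.temporalProperties()
props.setIsActive(True)  # the temporal indicator appears in the Layers panel
props.setMode(QgsVectorLayerTemporalProperties.ModeFeatureDateTimeStartAndEndFromFields)
props.setStartField("release_date")   # assumed attribute with the release date
props.setEndField("end_of_life")      # assumed attribute with the end-of-support date
layer.triggerRepaint()
```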
So I'm going to cover a mixture of features that are likely to be popular and some that just, I think, might be easily overlooked and need more attention. So we're going to start with the browser panel. And one of the nice features here is that with 3.16 Handover, the fields are exposed for layers that implement Connections API, so different database fields. And so with those, you can go in and right click in the browser panel and delete columns. You can also right click on the fields indicator and add new columns right from the browser panel. So this is not only give you a nice ability to see what kind of fields are contained in a layer, but also allow you to work with those fields a little bit as well. And for each one of these slides, I have a little banner up in the upper right that indicates which version of QGIS this feature was released. Moving on with the browser panel, it's also now possible to right click on a folder in the browser panel and set the color. So this is really useful to tag folders. It helps aid navigating complex folder structures and things like that and find important folders easy. A nice GUI related feature is that the Map Canvas now has a right click context menu. So currently, you can use it to capture coordinates or select features. And I'm sure there'll be more functionality added to this context menu in future releases. I think a really exciting feature in QGIS that has been developing over the last year is being able to navigate using the locator bar. So at QGIS 3.16 came the ability to paste an open street map formatted URL into the locator bar. And QGIS will zoom to that location at specified zoom level. You can also go to a coordinate, a coordinate pair separated by a comma. And in 3.20, the nominatum geocoder has been incorporated into QGIS Core. And you can now paste an address and interact with that nominatum geocoder through the locator bar. So to show you how this works, we have a URL from OpenStreetMap, paste it in there, hit Enter. And QGIS will zoom to that spot at that zoom level. This also works for other mapping APIs like Google Maps and OpenLayers. You can also go to coordinate. And here, I'm going to use the nominatum geocoder. I'm going to paste in the address of my office at Septimum in Copenhagen. It finds that. I select it. And it zooms to the point where my office is located in Copenhagen. Moving on to the attribute table, it's now possible to select some features and control how you open up the attribute table. So I can open up the attribute table now, looking at just the selected features on the map. I can choose to just open visible features in the attribute table. I can also choose edited new features. So here, I have selected and visible features showing up in the attribute table. So it's filtering it. So this is a really handy, convenient feature. And this is a lot of what I'm finding in the last year is Qt is just becoming more user friendly with lots of GUI related enhancements. One thing that's kind of been hidden, I think, in Qt is for years is color vision deficiency or color blindness previews. So on the left is what this looked like up through 3.16 with several simulated color blindness previews available. Now at Qt is 3.18, that's been replaced with the same methodology used in Chrome and Firefox with four different previews for different kinds of color deficiency. So to show you what those look like, here we have a map that has some greens and reds in it, which are often cause problems for color deficient map readers. 
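For context on the Nominatim geocoder that the locator bar now wraps (mentioned a little earlier), the underlying service is a plain HTTP API. A small sketch of a lookup outside QGIS, with a placeholder address, could look like this (mind Nominatim's usage policy for anything beyond testing):

```python
# Sketch of a Nominatim address lookup, similar to what the QGIS locator bar
# does under the hood. The address is a placeholder, not the one from the talk.
import requests

resp = requests.get(
    "https://nominatim.openstreetmap.org/search",
    params={"q": "Vesterbrogade 1, Copenhagen", "format": "json", "limit": 1},
    headers={"User-Agent": "foss4g-locator-demo"},  # Nominatim requires an identifying UA
    timeout=10,
)
resp.raise_for_status()
hit = resp.json()[0]
print(hit["display_name"], hit["lat"], hit["lon"])
```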
And so we can see what these previews look like. This is really helpful when you're trying to design a map for a color blind or color vision deficient map reader. And you can use this to make sure features have enough contrast between them and the right information is being highlighted to someone who has one of these conditions. There's also several new renderers in Qtis. This interpolated line renderer introduced at 320 is a really nice one. And I'm going to show you one great use case, which is this is a map of a watershed or a catchment. And I have a stream layer with strailer orders. And I can use this renderer to make the stream network get thinner as it goes out to those lower orders, strailer orders, the tributaries. So to show you how this works, I have a little animation here. I can choose a symbol layer type instead of simple line. I've interpolated line. I'm then going to use the varying width option for this renderer. And I have a field in my attribute table for the order, the strailer order of these streams in this case. So I'll choose that for the start and end value. And then I have a little box I can check to get the minimum maximum value, set the minimum and maximum width, and then the color of the river network will make it a nice blue. And I quickly have nicely symbolized strailer order streams. Another interesting renderer is merged features. And this basically does a dissolve on a feature in the way it's being rendered. So here I have all the municipalities in Denmark. And I use this merged features renderer to make it just one solid color for the country of Denmark. This is what I imagine many have wished for. An SVG symbol search. So now when you go to try to find an SVG symbol, there's a little search box at the bottom that you can use with keywords. In this case, to find a nice train symbol and sift through all the different SVG symbols you have for keywords. One of the areas that's seen a lot of improvements in the last year is labeling, especially label callouts. So let's take a look at that. So here I have some airports with simple lines as callouts. I can also set those to be Manhattan lines. And now I can set those to also be curved lines. And with a curved line, there are data to find overrides for the label position and the beginning and end of the lines. I can also set the curvature of those. So I can set them up to be just as I need them to be. I can also now choose balloons as an option for callouts. And here I can choose any kind of fill style. I can set the margins for my labels. I can use the corner radius value to round the corners of those and the wedge width to control the wedge of that. And one nice new feature that's been added is also a context menu in the Layers panel that allows me to toggle my labels off and on for a layer. There's been other labeling improvements that I just don't have the time to go into, such as dashed lines. But I encourage you to go through the visual change logs and explore that. One of my favorite features in the last year is custom legend patches. So here's a map. And I'm going to set a contours patch for this contours layer. And you end up with a very intuitive legend using these legend patches. Another nice feature that is developed in the last year is gradient color ramps for legends. And so for elevation data, for example, it'll automatically put that in as a gradient ramp that you can modify the length, the width, the orientation, the labels of. 
And there's a blog post down here that I wrote on using Qtus Legend Patches. If you're curious about that, it's in English. And there's also a nice GitHub Qtus Legend Patch repo here where you can download user-contributed legend patches. And most of these came from that that you're looking at here. So of course, the style manager has also received some attention in the last year. So now there is a legend patch shapes tab that you can use to manage all your legend patch shapes. And you can have these for points, lines, or polygons. These can also be created from geometries of actual features. So that's a really nice aspect of this. And I cover that in the blog as well. Another style manager update is that there's a 3D symbols tab now. These don't get previews, but you see your symbols there. And there's now a Browse Online Styles button in the lower left that brings up the online repository for user-contributed styles. And you can download any of those and use them. So now I'm going to turn my attention to the print composer. There's now a setting in the item properties for your map object in the print composition that allows you to clip your map to a shape. So here I'm adding an ellipse to my map. And I'm going to then select the map. And on item properties for my map, there's now a Clipping Settings option. If I select that, first there's an option to handle clipping for Atlas features. But I'm going to choose Clip to Item and choose my ellipse here. And it's going to clip my map to that ellipse so that I can get an elliptical shape map instead of a square. Continue with the print composer. Another nice feature just introduced is Dynamic Text. And this is something that was always possible with expressions and queges. But now there's a nice dynamic text drop down with lots of options. So I can, for example, go down and add the current date. And there's some common formats there. You can modify those by modifying the expression. But it sticks it in for you so that you can, if you want to modify it from there, you can. There's lots of other options on that Dynamic Text drop down. Moving on to processing. If I go to the history of processing, processing history is now organized or grouped into time periods. And there's also icons in there for the different kinds of tools that were used. So this makes it a lot easier to find that raster algorithm you ran last week. Another processing feature that I kind of missed, this came out with 3.14, is that you can append output to an existing layer. So there's all the other options there. But now with processing algorithms, you can just append the output to an existing layer, which is, I think, a really exciting feature. There's, of course, a lot of new algorithms in processing that are not just in the existing layer. But in the existing layer. There's, of course, a lot of new algorithms in processing over the last year. Cell statistics is one I think is very nice. There's one for the nominatum geocoder, which I discussed earlier, for batch processing addresses, export to spreadsheet. In the expression engine, there's been several nice new expressions. And like main angle, here I'm also showing it with 3.14 came the ability to preview the results of an expression to see how that's working. And one thing that just happened recently is in map layers, there is now icons that indicate the kind of layer that that is, whether it's raster, table, point, polygon line, et cetera. 
And probably one of the most exciting features in the last year is point cloud support. So in QGIS, if you open up the data source manager, you can choose an EFT file, which would be a cloud-based point cloud, or a file-based LAS, LAZ file, and bring either of those into QGIS. Here I'm going to cover point cloud rendering. They often come in with just the extent, which can be nice to show where the point cloud is located. Here I'm changing it to attribute by ramp, which I've chosen to be the z value or elevation. Here I'm choosing the RGB renderer, which in this case has RGB colors from an air photo. And finally, the classification attribute, which classifies the point cloud into different objects. So there's quite a few different ways that you can choose to render your point cloud data. Again, we have attribute by ramp, RGB, and classification all available. It's also possible to bring your point clouds into 3D. There is a 3D rendering tab here that you need to be cognizant of. You need to set the 3D rendering of your point cloud independent of the 2D. But then it will come into 3D already recognizing the elevation. And you can then start working with that scene to create fly-throughs and 3D scenes. So it looks fantastic. And it performs really well even with large point cloud data sets. There's also been quite a few enhancements to the 3D environment in general. So here I'm looking at the LiDAR data set and choosing to show shadows, which can really help bring out some of the features as can iDome lighting. So these are two different techniques you can use to try to help distinguish features a little bit more in a 3D scene. It's also now possible to export your 3D scene so that you can bring in that data into a program like Blender to work with it more. And there's been other changes in the 3D environment as well, but these are some of the most probably commonly used and powerful ones that I've worked with so far. OK, so the last thing I want to cover is not a new feature, it's a plug-in. I think it's a notable one. It's the raster attribute table plug-in. And this allows you to open up and work with raster attribute tables within QGIS. So here, this is land use, land cover data that I have. And when I have an attribute table for a layer as a side car DBF, for example, I can right click on that layer in QGIS and it will automatically pick up on it and it'll give me the option of opening the attribute table for that raster. So in this case, I have red, green, and blue values that QGIS recognizes. And I can use this plug-in also to classify my data. So here I have class names and I click classify. And it switches from the simple black to white color ramp to a palleted unique values rendering with all the colors stored in the attribute table of the raster, giving me a really nice rendering. So this is a fantastic new plug-in that I hope one day will make it into QGIS Core. Thanks. That's my talk. I'm sure that there are features that people are interested in that I didn't have time to get to because there's just so many and so many uses of QGIS. So if we have time for questions, maybe we can get into some of that. This is my contact information. Again, I work for Septima. There's our website located in Copenhagen, Denmark. And I can be reached at Kurt at septima.dk and on Twitter at Geomenke. Thanks so much. All right. Thanks, Kurt. You're on mute there if you want to unmute yourself. Yep. We can get into some of the questions. Yeah, thanks a lot. That's amazing. 
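As a hedged illustration of the point cloud support described in the talk, loading LAS/LAZ or EPT data can also be scripted with PyQGIS. The paths, URL and provider keys below are assumptions based on the documented "pdal" and "ept" data providers, not something shown in the talk:

```python
# Minimal sketch: add point cloud layers to the current QGIS project.
# File path and EPT URL are placeholders.
from qgis.core import QgsPointCloudLayer, QgsProject

laz_layer = QgsPointCloudLayer("/data/survey.laz", "survey (LAZ)", "pdal")
ept_layer = QgsPointCloudLayer("https://example.com/ept/ept.json", "survey (EPT)", "ept")

for lyr in (laz_layer, ept_layer):
    if lyr.isValid():
        QgsProject.instance().addMapLayer(lyr)
    else:
        print("failed to load", lyr.name())
```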
There's so many new features that's pretty wild. I don't know how you keep up with it all, to be honest. Yeah. Yeah, it's a challenge for sure. Yeah. So yeah, let's just jump into some of the questions. So question number one is, with the new Raster Color Ramp Legends, Raster Color Ramp Legends, is there a way to label along the ramps? You don't just have a min and max value shown, but values shown along the color ramp. Oh, boy. That's a good question. I've only seen the option for min and max so far. And I'd have to dig into it again to see if that's in there somewhere. I don't think so, but it may be. OK. So next question is, nice point cloud features. How to handle big point cloud data in QGIS. Any thoughts about that? Well, one thing QGIS does when you bring in any point cloud data set that's in LAZ or LAS format, it converts it to EBT. And EBT is designed to handle streaming cloud-based point cloud data sets. And I think that might be, as part of the reason QGIS is handling large point cloud data sets so well, is I think putting them in that EBT format is the key. Yeah. OK. Is the raster attribute table saveable to the GeoTIFF RAT? Raster attribute table? I think it recognizes, as far as I've seen, sidecar DBFs that have the same prefix. So I don't think that is possible. It just recognizes a sidecar that's sitting alongside the raster. So I don't believe it will do that. OK. What else have we got here? It's good questions. Has QGIS now the possibility to create an automated overview map of print layouts? I'm not sure exactly what you mean by automated. But it is possible in a print composition to have an overview map, a second map in your print composition, and use map themes to control so that you have one overview map and one main map. And then you can put an overview there to highlight the area covered by the main map in the locator. So that's been possible for quite a while now. Yeah. Here's a question. It's a good one. How do you keep up with all these changes between your feature talks? Where do you find out about them? How do people stay in touch with what's the latest and greatest in QGIS? Well, I think one thing that I do is, like I said at the beginning, go through the visual change logs when there's a new version. So in a month or so, we're going to have 322 coming out. So I'll, again, go through the visual change logs and see what the new features are that pertain to the work I do, especially. So that's what I recommend. And then, obviously, all the normal channels for staying in touch with things, Twitter and other online venues that all participate. Yeah, there's so many channels. So many channels. But the change logs, I think, are the key. Yeah. Here's a question that's gotten a lot of votes. How about the 4.0 release? And also, I mean, I'm wondering, even the upcoming long-term release, any 4.0 first and then, I guess, the upcoming one as well. Yeah, I haven't seen any news on 4.0. That would probably be due to some kind of API break with another version of QT. And I know there's been some discussions around issues related to QT. I don't know if any of that will evolve towards a 4.0 release in the next year or so or not. I haven't heard. So I don't think it's imminent. It will probably be the next couple of years, I would imagine. There'll be a 4.0. So how about upcoming releases? Any features you've got your eye on? No, I actually, honestly, haven't had time to look at what's coming out in 3.22 yet. 
I haven't really just spent the required time to dig into GitHub and see what's going on. Fair enough. Yeah, we've got another minute or so here before we better start wrapping it up, but maybe one more. Is there a command to make las file to Ept? So is this something you can do manually in QGIS? QGIS automatically converts it to Ept when you bring it in. So you'll see an Ept folder in there when you add that. So it happens already. Yeah, OK, so it's automatic. Yeah. I would say that there are, just to wrap up, there's a couple of things I didn't have time to cover that maybe I can talk about now. Yeah, we've got a minute. One being you can right click on a layer now, and there's an option to open up layer notes. And you can have notes about that layer stored in the project. And you can format them with all the normal formatting tools. That's kind of a cool feature. And I know there's also been a lot of work with metadata. So QGIS now automatically sucks in metadata from shape files and file geodatabases into a QGIS-formatted metadata. So I think those are some really nice features, along with if anyone uses mesh data, there's a ton of new algorithms for mesh, exporting mesh into various formats. There are various aspects of mesh data, and also the ability to create tins now in QGIS. Cool. Yeah. OK, well, thanks a lot, Kurt. Very much appreciated. That was a great talk. And yeah, thanks again. All right, see you in the next one. See ya.
|
This presentation will give a visual overview of the major new improvements of QGIS 3.x over the last calendar year. QGIS releases three new versions per year. With each there is a long list of new features. This presentation will give a visual overview of some of the best new features released over the last calendar year. Examples or short demonstrations will be included. Potential topics include: User interface * Symbology - renderers and labeling * the Temporal controller * Print composer * Improvements in the expression engine * Digitizing * New processing algorithms * Graphical modeler * QGIS 3D * Data providers and support for Mesh data and Point clouds. Come and learn about how far QGIS has evolved in the last year! Authors and Affiliations – Kurt Menke - Septima P/S, Copenhagen, Denmark Requirements for the Attendees – This talk will be of interest to many from casual users of QGIS to seasoned professionals. Track – Software Topic – Software status / state of the art Level – 2 - Basic. General basic knowledge is required. Language of the Presentation – English
|
10.5446/57293 (DOI)
|
So here we have James Banting and Tom Christian from Sparkgeo. In the meantime you could try to share your screen. I think James will be sharing his screen. You can hear me. Okay. And let's see. Well, they will provide a talk called There and Back Again: Lessons Learned in Transitioning from GeoServer to MapProxy. Again MapProxy, that's good to hear. So it will be a dual presentation. And James and Tom, they are from Sparkgeo, a very innovative company from British Columbia, Canada. And in short, Sparkgeo helps customers to make sense of geospatial data and maps, providing analytics, insights and development support. And James here is the Vice President of Research for Sparkgeo. And he is a remote sensing scientist with a background in geography. And Tom here is a full stack developer, of course also from Sparkgeo, with an emphasis on front end web development and a strong background in geospatial technologies. So in the meantime I see your screen is shared. And I'll give the floor to you both. And we'll see each other later. Okay. Excuse me, I'm not hearing you, James. Is that better? Yeah, that's much better. Yeah. Okay, thank you. Very good. Yeah. So I'm James and my colleague Tom is here presenting on MapProxy and GeoServer. We did a project pre-pandemic about GeoServer and MapProxy. So we'll get into that today. Before we do, I'd just like to acknowledge that today in Canada is the National Day for Truth and Reconciliation. This is where we kind of honour the families lost during the residential schools here in Canada. It's kind of fitting for the Indigenous workshop stuff that we're doing at FOSS4G. So I'd just like to acknowledge that before we get into it. Yeah, as I mentioned, we're doing GeoServer and MapProxy pre-pandemic. We kind of started with GeoServer. We had a project where we had to move a lot of customers from one managed service onto another managed service. The older product, it was pretty old and a lot of customers had built workflows around some of the nuances in this product. So we found out that we had to address some of these nuances with our GeoServer implementation, which didn't exactly suit our needs, so we had to do a little massaging of GeoServer and other products. I think the older product was based on MapServer, so it was an OGC thing. They had not full OGC support, but pretty close. And then they also had kind of an Esri REST implementation as well. So it was a lot to move from one managed service to the other. One of the hurdles we encountered with bringing in GeoServer right off the bat was the nuances that the customers had. So there was a lack of understanding of, yes, the customers say they need WMS, but what exactly does that mean? So we had a plan to attack GeoServer and, as all plans do, it went awry. So what we picked off right at the beginning was default vanilla GeoServer, and this satisfied probably 80% of the needs for us, so it was great. We could obviously do a lot of the performance tuning that is required on GeoServer. The docs on GeoServer are good for that. There's a lot on performance tuning, and it helps quite a bit. One of the shortcomings we did have: we tried to use the REST API quite a bit, and I'm not a Java developer, and I have to go down to Java to see what some of these REST calls are doing, and it's never fun for me as a remote sensing scientist to go into Java world. Individually these extensions played really well with GeoServer; together, they didn't work so well.
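For readers who have not used it, the GeoServer REST API James mentions is plain HTTP with JSON or XML payloads, so it can be scripted without touching Java. A minimal sketch, assuming a local GeoServer with default credentials and a made-up workspace name:

```python
# Sketch of calling the GeoServer REST API from Python instead of Java.
# URL, credentials and workspace name are placeholders, not values from the project.
import requests

GEOSERVER = "http://localhost:8080/geoserver/rest"
AUTH = ("admin", "geoserver")  # default credentials; change in any real deployment

# List existing workspaces
ws = requests.get(f"{GEOSERVER}/workspaces.json", auth=AUTH, timeout=30)
ws.raise_for_status()
print([w["name"] for w in ws.json()["workspaces"]["workspace"]])

# Create a new workspace
r = requests.post(
    f"{GEOSERVER}/workspaces",
    json={"workspace": {"name": "demo_ws"}},
    auth=AUTH,
    timeout=30,
)
print("create workspace:", r.status_code)  # 201 on success
```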
We had a lot of issues with auth coming through for GeoWebCache, a lot of stuff with Geofence and making sure that we can keep the auth for regional masking. Again, we had a plan, and these were the best ways that we could immediately address the issues in those plans, and GeoServer was tried and proved, and it was the best solution for us. So GeoServer, we struggled with getting it after a certain point. It becomes difficult to manage. Auth is always a challenge. GeoServer comes with basic auth out of the back, out of the bag. We used some key modules. We did some header stuff. Some of these clients, as I mentioned, were built their applications on an older tech, so basic auth. GeoServer insecure it is, the answer is very, was one of the requirements that we had to do. So we had to modify the auth stuff. Some of these people had token parameters in the query. Some had it in the path. It was kind of a fun game for one of our developers to go through and find out exactly how people are authenticating. We got to play with projections quite a bit. A lot of these customers were using datums that were kind of frustrating to work with, especially NAD27. In Canada, this was used quite a bit for oil and gas, and through my career, I've come across NAD27 a lot, and I've come to really hate NAD27 just because of the datum shifts that happen. Caching was an issue for us. We set up GeoWebCache, and GeoWebCache was working great for us. We didn't have the customization we needed to do, so we looked at something else. And then for WMS, this was one of the biggest hurdles. Some of our clients were using WMS 1.0, some were using 1.1, some were using 1.3. And the switch between 1.1 and 1.3 has this XY swapping positions a lot of the time. And this is GeoNerds versus MathNerds. But this was a big hurdle we had to overcome, is getting to identify the CRS for each one or the EPSG and make sure that we're doing swaps if it's WMS. Again, we had to modify the plan according to real-world situations. It was doable. It was just frustrating. So we had one GeoServer instance running up, and customers keep coming on. And GeoServer was doing fine. It was starting to handle the load. And then it's always a DevOps problem. We had to scale quite a bit. And as people have talked in other FosWordGees, and there's a talk, I believe this year on scaling FosWordGee in kind of a distributed environment, we needed to do something like that. And it was very frustrating. We had kind of separated our development process into auth and, pardon me, we had separated our development process. So we had auth, we had Geo, we had caching, and we were trying to keep them kind of separate and be able to spin up independently in a distributed environment. So DevOps was required to make GeoServer really sing or at least make the other parts compensate for GeoServer. Oh, my God, the little map proxy logos. Well, you get it in stereo. So we brought in map proxy. This was helped to, this was brought in to help a lot with the Geo web caching issues with some authentication going through there. It allowed us a whole ton of flexibility, as was previously mentioned. It's written in Python. So this is really good for us. We're predominantly a Python shop and it makes development a lot easier. We can bring in a whole bunch of different tools. One of the big things that Tom will get into a minute is the need to dynamically generate configuration scripts for map proxy. 
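To make the WMS 1.1.1 versus 1.3.0 hurdle mentioned above concrete: for EPSG:4326, 1.1.1 expects the BBOX in lon/lat order while 1.3.0 expects lat/lon, and the CRS parameter name changes too. A small sketch of building both request variants (the endpoint and layer name are placeholders):

```python
# Illustration of the WMS axis-order gotcha: EPSG:4326 BBOX is lon/lat in 1.1.1
# but lat/lon in 1.3.0, and SRS becomes CRS. Layer name is a placeholder.
from urllib.parse import urlencode

def getmap_params(version, minx, miny, maxx, maxy):
    # (minx, miny, maxx, maxy) passed in as lon/lat
    if version == "1.3.0":
        bbox = (miny, minx, maxy, maxx)   # swap to lat/lon for 1.3.0 + EPSG:4326
        crs_key = "CRS"
    else:                                  # 1.1.1
        bbox = (minx, miny, maxx, maxy)
        crs_key = "SRS"
    return {
        "SERVICE": "WMS", "VERSION": version, "REQUEST": "GetMap",
        "LAYERS": "demo:layer", crs_key: "EPSG:4326",
        "BBOX": ",".join(map(str, bbox)),
        "WIDTH": 512, "HEIGHT": 512, "FORMAT": "image/png",
    }

print(urlencode(getmap_params("1.1.1", -123.5, 48.0, -122.5, 49.0)))
print(urlencode(getmap_params("1.3.0", -123.5, 48.0, -122.5, 49.0)))
```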
And one of the other things that Tom will get in is authentication around map proxy and how that kind of module works and how we address that. So we, again, had to modify the plan, this time we brought in newer, kind of well-tested software to augment older, well-tested, well, I shouldn't say that, older, well-developed software. So I kind of shoehorned Tom into letting me have an added 27 rant here. So datums are very important for us in the Geo world if we're pushing out data and someone wants a specific projection, specifically, let's say, oil and gas, not specifically. Let's say an oil and gas example, a customer wants a projection in a UTM coordinate, but their base data is made off NAD 27, North American datum from 1927. And as the earth does, things move. So people are still using that 27 as a datum to reference real-world objects. It's very frustrating as a Geo to have this. So that's the end of my rant, and I will transition over to Tom now. All right, thanks, James. So as James mentioned, I'm just going to dig into some of the details on some of our implementation, particularly, I'm going to start looking at auth here. And in this context, auth is both authentication and authorization. And this is something that we had to implement with a Maproxy work, because obviously Geo server comes with a fairly rich set of configuration options around auth. But with Maproxy, we were kind of a little bit more on our own, producing our own setup. So broadly speaking, our goals were to ensure that if a WMS or the gap capabilities request comes in, then the response will only include those layers that are authorized for the caller. And so we're not showing anyone data that they cannot access. And then secondly, that if someone issues a getMap request, that we're not returning any data for which they're not authorized if they manage to get a layer name from outside of the getCapabilities document. Or if they request a mix of data for which they are and are not authorized, then we filter out the data that they're not supposed to have. And a nice thing about the approach that we took with Maproxy is that we, caching is transparent from the rest of our system. So because Maproxy manages auth for us and Maproxy manages the cache, we don't have to worry about who has access to the cache, which I think was one of the issues that we ran up against with Geo Web Cache. So on my next slide, I've just got a basic architecture diagram. So I'll spend a minute or two talking through this. So on the top left, we've got GIS client. So let's think of this as QGIS or ArcGIS desktop or any number of proprietary applications. And so this is issuing, for example, a getMapRequest. And that request hits our REST API first. So this is a custom API that we developed using a fast API, an excellent Python API development framework. And so this is kind of our proxy. This is our layer that we stick in front of Maproxy to manage auth for us, among other things. So within that API, we first have an authentication check. So this is clearly just making sure that the caller is who they say they are. And then we get into the meat of it, which is the layer authorization check. So we have an authorization DB. And this is configured with layer names, user names, and layer grouping, so we can give users access to any combination of layers. So we go to the database and we say, tell me every layer that this caller is allowed to have access to. That comes back. And then we bundle that up into a JWT JSON web token. 
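A hedged sketch of that "bundle the authorized layers into a JWT and forward the request" step, using PyJWT; the claim names, header name and secret handling are assumptions for illustration, not Sparkgeo's actual code:

```python
# Sketch of issuing a short-lived JWT listing the caller's authorized layers
# and forwarding the original query string to MapProxy.
# Secret, URL and header name are placeholders.
import time
import jwt        # pip install PyJWT
import requests

SECRET = "change-me"                                  # shared with the MapProxy-side filter
MAPPROXY_URL = "http://mapproxy.internal/service"     # placeholder internal endpoint

def forward_to_mapproxy(allowed_layers, query_string):
    token = jwt.encode(
        {"layers": allowed_layers, "exp": int(time.time()) + 60},
        SECRET,
        algorithm="HS256",
    )
    return requests.get(
        f"{MAPPROXY_URL}?{query_string}",
        headers={"X-Layer-Auth": token},   # assumed custom header name
        timeout=30,
    )
```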
This is an encoded string that we send as an HTTP header when we make our HTTP request over to Maproxy. So Maproxy receives the request that's been forwarded on from our API. And it includes a JWT that sells Maproxy. This caller is allowed to see this list of layers. We tell it to Maproxy and Maproxy does its thing to determine whether or not the request has access to the layers that it's requested. Next we jump over to S3, which is image cache. Just see if we have the images already cached. If we don't, then we push that request up to our upstream WMS or WMTS services, which are either managed by us or managed by the customer that we're doing this work for. And so the whole kind of auth piece here is really communicated through that JSON web token that we passed to Maproxy. And that's kind of the logic of that is wrapped up in a filter that we register with Maproxy. When it starts up, that we have to write. On the bottom right, I've just got a small example of a contracted example of what that JWT looks like. It's pretty straightforward, especially if you're already familiar with JSON web tokens. We're just listing the layers. And then just in the bottom left, grayed out there because it's not completely relevant, I just wanted to add the context. We also have a UI that we built to allow the customer to configure who has access to which layers. So that's a fairly simple UI, but it just allows the admin to say user A has access to layers, one, two, three, user B has access to layer two. And that obviously feeds into the auth DB. So next up, I think I'm talking about caching. So yeah, as James mentioned, within JWT server, we were using a GEO web cache to manage our cache. But now that we're moving over to Maproxy, we're letting Maproxy do the bulk of the work and the cache itself is stored on AWS S3. In line with Maproxy's recommendations, we configure that with a particular directory structure because we are expecting to see a very large number of requests per second in some scenarios. And that's part of the advice to avoid issues with AWS. Maproxy also suggests that whatever CRS we expect most image requests to be made in, we cache in a grid suitable for that CRS because although Maproxy can do on the fly reprojections, so let's say we have a cache in EPSG 3857 and a request comes in in 4326, Maproxy will do that reprojection on the fly for us, but we obviously want to minimize the number of times that happens for the overhead and for any potential projection issues. As James mentioned as well, we had a lot of issues with NAD27 or just a lot of fun working around its peculiarities. So one of the things that we do is we store a completely separate cache for each layer in this based on NAD27 so that if we do have to do that on the fly reprojection to a NAD27 based projection, we can do it off the NAD27 cache. So we're not forcing that data shift. So we obviously increases the volume of data that we're caching, but it allows us to sidestep some of those issues during reprojection. Next up I'm talking about Maproxy configuration. So for anyone not familiar with Maproxy, the system is configured through YAML configuration files and the way that we approach that configuration is a separate YAML file for each of the services that we expose. I don't really have time to get into why we separated out to those three services in the upper right. But essentially we have to produce these YAML files and then when that YAML configuration file changes, Maproxy reads it in and reconfigures itself. 
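On the MapProxy side, the filter Tom mentions can be written as WSGI middleware that injects an authorize callable, following MapProxy's documented authorization framework. The sketch below is an assumption-laden illustration (header name, JWT claims, permission set), not the production filter:

```python
# Hedged sketch of a MapProxy authorization middleware: decode the JWT from the
# request, then expose an 'authorize' callable via the WSGI environ so MapProxy
# only serves the layers listed in the token.
import jwt  # pip install PyJWT

class JWTLayerAuthFilter(object):
    def __init__(self, app, secret, header="HTTP_X_LAYER_AUTH"):
        self.app, self.secret, self.header = app, secret, header

    def __call__(self, environ, start_response):
        try:
            claims = jwt.decode(environ.get(self.header, ""), self.secret,
                                algorithms=["HS256"])
            allowed = set(claims.get("layers", []))
        except jwt.PyJWTError:
            allowed = set()

        def authorize(service, layers=(), environ=None, **kw):
            if not allowed:
                return {"authorized": "none"}
            perms = {name: {"map": True, "featureinfo": True, "tile": True}
                     for name in layers if name in allowed}
            return {"authorized": "partial", "layers": perms}

        environ["mapproxy.authorize"] = authorize
        return self.app(environ, start_response)
```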
We also produce a SEED configuration file because we have background SEED tasks that populate the cache for us for any image tiles that have not already been requested by user requests. So we have a process that just watches for configuration changes in our config database. When it detects that something has changed, we have a custom Python module that reads to the config DB, generates a whole round of new YAML files and then dumps those into the directory that Maproxy reads that configuration from. So on the next slide I just talk about some of the issues we encountered with that. Part of it is because we're using S3 for our cache, one of the behaviors that we found with Maproxy is when Maproxy is given a new YAML configuration file, it immediately reconfigures the services. But when you're using S3, what that means is the first getMap request that comes in after a reconfiguration, Maproxy essentially says, I don't know if I have access to these S3 buckets and directories. And so it goes through a process of firing off HTTP head requests to a large number of end points just to see where it has access to within S3. And I believe just from looking through the code that the intent here is to throw an error if an unauthorized response comes back. But assuming everything is good and even you do have access, this is just a very simple process. It's just a very large amount of traffic that doesn't serve an immediately useful purpose for us. But that first getMap request has to wait for all those requests to complete. We have a large number of layers, a large number of grids, and a large number of zoom levels that we cache to. So this can be thousands and thousands of HTTP head requests. So some of our production users can be waiting several minutes for that first map to come back. So that's one of the issues we ran into and we're kind of addressing that by requiring that we don't really reconfigure during core business hours. The YAML format that Configure that Map Proxy expects is a little bit finicky, very particular about the formatting of certain elements and indentations. So that was a little bit of a hurdle for us to get over to understand that. And then the improvements that we'd like to make if we were to revisit this or take on another project like this. I'd like to make that S3 check with the large number of HTTP head requests optional so we don't always have to hit the buckets in S3 with a large amount of traffic. But I'd also really like to see Map Proxy with a configuration API rather than relying on these YAML configuration files so that when our configuration changes, we don't have to regenerate these YAML files every time and have Map Proxy read them in. It'd be really nice if we could just hit Map Proxy directly and say, this is the configuration through an API. So I think that's everything I wanted to talk about with Map Proxy. Yeah, cool. Thanks, Tom. So kind of general lessons learned from this. GeoServer is still very powerful and still very hard. There's a lot of good companies and very good developers who know how to make this thing sing. Map Proxy is very easy to get up and running. It's relatively easy for devs to approach it. And it works well in a distributed system which we want to play with. The auth filter in any kind of middleware there is pretty easy. I don't want to speak for Tom developing it, but it was fairly easy to develop against. So GeoServer as a back end with Map Proxy sitting on front controlling all the access and everything is a win. 
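A minimal sketch of the "generate the MapProxy YAML from a config database" idea, assuming PyYAML and MapProxy's documented S3 cache options; bucket, layer and source names are placeholders, and the reverse_tms directory layout is the structure discussed later in the Q&A:

```python
# Sketch: build a MapProxy configuration dict from a list of layers and dump it
# to YAML, so a watcher process can drop the file where MapProxy reads it.
import yaml  # pip install PyYAML

def build_mapproxy_config(layers):
    caches, layer_defs, sources = {}, [], {}
    for name, wms_url in layers:
        sources[f"{name}_src"] = {
            "type": "wms",
            "req": {"url": wms_url, "layers": name, "transparent": True},
        }
        caches[f"{name}_cache"] = {
            "sources": [f"{name}_src"],
            "grids": ["webmercator"],
            "cache": {
                "type": "s3",
                "bucket_name": "example-tile-cache",   # placeholder bucket
                "directory": f"/tiles/{name}",
                "directory_layout": "reverse_tms",
            },
        }
        layer_defs.append({"name": name, "title": name, "sources": [f"{name}_cache"]})
    return {
        "services": {"wms": {}, "wmts": {}},
        "layers": layer_defs,
        "caches": caches,
        "sources": sources,
        "grids": {"webmercator": {"base": "GLOBAL_WEBMERCATOR"}},
    }

with open("mapproxy.yaml", "w") as fh:
    yaml.safe_dump(build_mapproxy_config([("roads", "https://example.com/wms")]),
                   fh, sort_keys=False)
```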
We actually have some customers who are still using GeoServer in the application. One of the other lessons learned is when you're filtering for blog posts and you're doing a time filter, go back, GeoServer is old, go back way before the land, before time, and find out what little tricks has developers made to get around GeoServer's nuances. So look back in the research. There's a lot of stuff there. As I mentioned at the beginning, the old service was doing native registry rest. So we had this request come through. We actually ended up working with Esri to be able to support this. And there was some finicky stuff we got to chat with some of their developers on their image server team. So that was very good. But we were able to get native Esri support from OGC stuff. That's it. That's the presentation. I'm James Banting and my colleague Tom Christian who did kind of all the work on this project is here as well. These are our contact stuff. So thanks for coming to our presentation and we're available for any questions. Okay. Thanks James and Tom. Yeah, very interesting to see how you do with Integrate for instance, Maproxy. Let's see. We have one question and maybe questions come in while you're answering. I'll put the question on screen. It's a short question but probably you will know how to answer this one. Yeah. It's a reverse TMS structure. Sorry, I'm reading it also for the listeners. I can speak a little bit about that. It's essentially just, it's probably best described by the Maproxy documentation but it's essentially just a particular ordering of nested directories. So the way that Maproxy will by default structure is cache can lead to a high level of nesting directories within directories within directories. And although I don't recall all the details because this project was a little while back, there is some behavior in AWS's S3 storage service that if you have a large number of requests per second going through that heavily nested directory structure, it starts to get upset and I think you start to see some service performance issues. But yeah, the reverse structure is essentially just changes. So rather than, I think the reverse structure ultimately starts with the zoom level, the zoom level is the last directory, it goes Y, X, Z rather than a different ordering. And essentially it just changes the degree of nesting that you have within your cache. Yeah, so rather than storing the zoom level last within X, Y, Z, you have zoom level first to look through the zooms in that kind of thing. Okay, yeah, since we have some time and I happen to be also a Maproxy user for years, because at some point I didn't use this file storage anymore and I switched over to geo package, storing tiles in geo package. And that saved me quite some problems, especially when moving, let's say from a test to a production server, basically you have one file with a geo package and it's quite efficient. So why should, well, but of course if you have S3 you have these remote calls and then probably geo package doesn't work that well. Yeah, we, so maybe you can comment. Yeah, absolutely. We had, so we had some requirements for vector products that were being drawn as rasters, so we had to put up SLDs and some of these vector products were image footprints. So there's a lot of them, they cover wide areas. So geo package worked well for a bunch of those. We loaded those up into geo server as well at the beginning because we had to throw around vectors and everyone loves shapefiles, so we went with geo package. 
It worked out well for us in that regard, but for the size of work we need to do, dumping all our tiles and stuff in geo package wouldn't have worked for us. Yeah, and I have another thought on that as well. So my previous experience with caching large numbers of files is one of the best reasons to avoid, one of the best reasons to package a large number of files in something like geo package is to avoid some issues you can see on with, for example, a hard drive running out of IOs, like hitting its IOs limit where you simply have too many files and the hard drives addressing system can't handle it. And so that's where something like geo package or like the MB tiles format can be really useful in just collating those things into a single file. And as you mentioned, use the transferring of files between environments becomes a lot easier. I think for us because we're using S3, those kinds of hardware related problems just go out the window, you know, we don't care how many files we're storing on S3. And if storing those files individually means that we avoid the overhead of packaging and unpackaging in and out of the kind of collated format, then that saves us a tiny bit of performance every single time. So I think in a different environment, having such a large number of individual files could be a real problem, but just given how we're storing our data and how we're using S3, it just stops being our problem. My approach has quite some possibilities in that respect. I know the Dutch National Geographic Infrastructure, they even use the Couch TV or use the Couch TV back end also from my proxy. But I see in the meantime, we have, wow, I think the question came also up in the previous, so there's another question here. Do you know if Maproxy can cache Maproxy vector files? So I was curious about the vector aspect of Maproxy and going through old emails from Maproxy group. It doesn't explicitly do vector tiles, but my understanding is that the hookups for it, for your own vector implementation are there. I don't think it caches vector tiles so often out of the box. Yeah, it came up also in the previous talk and of course this is open source software and it's also a matter of funding. And yeah, I happen to be also in the Project Steering Committee for Maproxy and yeah, what you'd rather see is this type of contribution and yeah, it's a long running issue. Next question is, can Maproxy be run in a container and I guess a Docker container? Yeah, actually, so we were looking at the Netherlands, Dutch, I think it was an environmental agency and they, PDock, PDOK is the group. Yeah, that's very familiar to me. They've done a great job of packaging up Maproxy into a Docker container with some hooks if you're deploying it in Kubernetes or if you're just running it locally. So have a look at PDOK on GitHub. They have a lot of great resources that they push out there. Yeah, so I followed up that from PDOK and have even bundled Maproxy with MapServer in single image because Maproxy can call MapServer as a library. So you don't even need back end WMS. We actually did that too. Tom was writing some of this in Maproxy. So we have one minute, sorry. I'm getting over-antigestic here. Let's see, we have probably time for one last question. Yeah, I have to move switch between environments here. That's maybe a long question.
|
The hurdles we encountered when transitioning a project from a single GeoServer instance to an autoscaling MapProxy system while trying to mimic existing functionality. Authors and Affiliations – James Banting, Sparkgeo Tom Christian, Sparkgeo Track – Use cases & applications Topic – Software/Project development Level – 2 - Basic. General basic knowledge is required. Language of the Presentation – English
|
10.5446/57294 (DOI)
|
Yeah, he's joining us. Okay, great. Yeah, and even if we begin, I can add him in the middle as soon as I see him, I can just add him in. Okay, perfect. Great. So we'll wait one more minute and then I'll introduce you to the team. The only other thing, just in case you happen to be watching it on the attendee app, which I do, that's how I see the questions, is there's a 15 second delay. So if you look over and you see the wrong slide, that's all that's happening. So do you pronounce your last name? Obukov? Obukov. Obukov. Okay, great. Okay, well let's get started. Good morning to those of you in North America and Latin America. Good afternoon to the Europeans and good evening to the Asia people. Welcome to the Puerto Iguazo room for our fifth session of this morning. I'm going to hand it over to Timur Obukov and Luis Bermudez. Let me add Luis right now, I see him. Good evening to the Asian people. Welcome to the Puerto Iguazo room for our fifth session. Whoever has the session on, please mute the main session. Sorry about that. And Luis Bermudez who talked about the hybrid GIS architecture that the UN Open GIS initiative has implemented. Take it away fellas and we'll wrap up in about 20 minutes and take questions. Thank you Michael. So my name is Timur, Timur Obukov. I'm going to present our pilot project that we did together with Joe Solutions in the United Nations and also with the Karin Research Institute for Human Settlements. So a couple of words about the UN. The UN United Nations system is very large. As you see, they are not only UN secretaries but also various agencies, funds and programs. And all these funds and programs are focusing on various requirements, various needs. So for example, OHCHR is focusing on human rights. UNAP, United Nations Environmental Program is focusing on environment. WFP is focusing on food security. FAO is focusing on agriculture. United Nations Secretariat, that's where I work, is actually focusing on peace and security. That's one of the responsibilities. We all focus on different issues and needs but we need access to the same technology and open source technology that's available at the moment. So access to these platforms can basically provide systems, provide knowledge, provide mobile GIS applications, various data collection tools, access to data to open satellite imagery, to baseline data and so forth. So as you see, it's even though that we're focusing on different issues in the United Nations system but we need the same thing. We need capacity, we need software, we need data. And UN Open GIS initiative, that's the initiative that was established in 2016. So they provided this opportunity to address just special information requirements to the wider UN community. And also to extend those systems, extend those applications tools and software to the academia, to developing countries and so forth. So I can give you a couple of examples. For example, in South Sudan and Juba, United Nations Mission, South Sudan is also providing QGIS training to the Juba University. So it's basically we assisting local and host countries in developing GIS infrastructure and GIS systems. And also different NGOs and humanitarian communities can also benefit from the development of the systems because we are going to share, we're sharing also with them the software and the systems that we develop. So the requirements of the United Nations for hyper GIS architecture were basic requirements of the UN for GIS where UN enables situation awareness platforms. 
These are the platforms that assist the decision making processes. It provides the just specialization for improved situation awareness because you can imagine that the situation, the UN and situation in the world and the places where we work is quite varied. So we need to collect data, we need to process data, we need to integrate different silos, different systems into one platform that would support decision making processes and situation awareness. Also, GIS supports the fulfillment of common dates, whether it's ceasefire monitoring or monitoring of armed groups or protection of civilians and electoral assistance in certain missions. We also assist in saving lives and support emergency operations, whether identify the locations for search and rescue, crisis management, evacuation, go-no-go areas, locations where the minefields are present, where the mines are present so that we can inform the communities and form our colleagues about the danger in those locations. Also GIS for the UN enables the cost-effective operations. So by using GIS, by using satellite imagery and image intelligence and various GIS techniques and various GIS solutions, we minimize our ground visits at the planning and operational stages and also it will provide us with a better understanding of operational environment and some specific projects such as groundwork exploration where GIS plays an important role. The situation that we had in the UN, so first of all, UN is running, UN is running, started developing GIS infrastructure in early 2000. So we are running GIS for almost two decades. And we started running GIS infrastructure based on the software and based on the solutions that were available at the time. So at the time where the proprietary solutions were available and basically for two decades we were building our infrastructure on proprietary solutions. So but of course, you know, just it kept us away from just because of high cost of licenses and high cost of running the system, it helped, it prevented us from provided limited options for scalability for mainstreaming and also different transfer of capacity and technology to the host nations. For example, before New York I used to work in Istimor and when we had to deliver, we had to transfer our GIS infrastructure to the host country. We had to also transfer the proprietary software and quite often the host nations just simply don't have capacity and don't have funds to run the system because it's quite expensive. So in this case, and also the, I would say from 2014 to 2016 we started looking closely to the open source technology, what open source can provide and how we can complement the systems that we have with open source. And of course, that would support operational and technical demands of the United Nations, no licenses optimizing the cost running of GIS infrastructure, flexibility in terms of streamlining, scalability, interoperability, innovations, slight footprint on IT infrastructure and so forth. We saw great words, but what we're going to do is the legacy system. That was the main question because the system, our system is built on proprietary and some of our clients still demanding some services provided on proprietary solutions. And one of some of our colleagues from WFP, Francesco, he proposed the options that they were looking at in WorldFood Program to develop a hybrid GIS architecture. So it's basically where we have integrated geodatabase systems that geodatabases that would serve both proprietary and open source solutions. 
Most of the systems will complement. So for example, the components that are very expensive on proprietary software can be replaced with open source. It would provide us cost-effective options. It will also help us to scale up and also to mainstream GIS because now GIS is becoming one of the systems that everyone requires and having issues with licensing and so forth would quite often prevent us from mainstreaming and scaling up the solutions, GIS solutions. And also it provides social value of the systems. We can provide GIS services to the government, host government and so forth. In this case, hybrid GIS infrastructure will also provide the great benefit in building technology-independent platforms. For some of our main platforms that we run in the UN, it's image intelligence applications for analysis of satellite images, whether it's SAR technology or optical, for GIS solutions such as Geoportal, UniteAware, Geostory, MobileJS and various data services that we also have in the UN, such as UniteMaps, OpenDrawnMaps, SecondLabel AdministrativeBoundary and various geo-products for analytics, for thematic mapping and so forth. With it, quite non-quantative impact assessment, impact analysis of running hybrid versus proprietary. As you see, the hybrid here is winning on all stages. It's again, it's not quantifiable. The only thing that's where hybrid will be lacking is legacy and user base. Also just because we're moving to open source technology and hybrid technology would also require to develop user base capacity of our colleagues and also just to migrate and transfer the systems and the data from the legacy systems, from proprietary systems into open source or hybrid. There's also stability because proprietary software quite often comes with this packaged systems where everything is orchestrated, everything is run smoothly, but with open source technology and with hybrid technology, we always need to integrate various things together and to make sure that they run perfectly fine. Just to roll out with this idea with our systems, so we developed the pilot implementation plan, pilot project. It was to prove the concept of hybrid JS prototype and support, first of all, for our backend system, for your night map and also for Joe Portal. You can see it here, number one and number two. The duration of the pilot project was six months, but we run it a little bit longer. Contributors for this pilot project was Chris, Korean Research Institute for Human Settlements. They provided the funding, UNJS, UNJSpecial, UNJC, Global Support Center in Brindisi and WFP with provision of their ideas and concepts. Those implementation was done by Joe Solutions, but also I would say Joe Solutions were contributing partners as well because they invested a lot of time in this pilot project. The next step is developing a global rollout plan for implementing of JS infrastructure. With this, I'm going to hand the floor to Luis Bermudez for technical part of this presentation. Thank you. Timur, thank you. Good morning and good afternoon, everybody. Should I share my screen or you move the slides, Timur? I will move the slides. Just tell me the next slide. Okay, very good. I'm presenting here the original technical implementation. You'll get the idea of what a hybrid system would look like. If you look at the upper left, there is a database and we had to figure out a way to export the data so it was easily ingested in post-gis. It was in RGSTE and we decided to use an open standard format like geo package. 
We developed a QTEs plugin that reads that geo package, another information which I will show in the next slides. The data gets ingested into post-gis. Ideally, we would like to have a staging and a dissemination environment, but we only had one environment. But I put these two because it's really important that when we have a system in operation, at least we have these two because we have no way to test the staging state that everything is okay and then you put it in production. Some of you may know geo node uses on the back geo server and post-gis. So that's why you see geo node geo server and then a connection to post-gis. So what we did after exporting to geo package and then updating the database, we synced geo node and geo server, meaning that the data was published as layers in the server and then geo node was informed that there was a layer that had to be published in geo node. We then tested that clients are able to exercise the data that was available in geo node. So that's basically it. We also had a task to be able to have a single sign on using Azure, but that is going to be done in the future. Okay, next please. So as I said, this is the process. We needed geo package, but we also needed the XML workspace definition. Why? Because we really wanted to model as much as possible close to what RKSD has provided, which is some of the tricky things like domain and types and subtypes. So we were able to, from that information, from the XML workspace definition, be able to export the data almost accurate with all the, let's say, intricacies that are done in Esri. Next. So we developed, as I said, a QG's plugin and we developed four main algorithms. One was an XML domain importer, an XML geo package, future classes importer, a geo server publish, and a geo node synchronizer. I want to explain a little bit those. So for the XML importer, we had to deal with subtypes and domains for those of you that don't know this example. For example, if you have a class or a future street, then you can have more detailed streets, like local streets and highway streets, and each of these can have their own properties. It can be different or it can be a list that only applies to that, let's say, subtype. So and the way that we deal with this is that we create local partitions and I'll put a link later so if those of you that are interested, you can check the code and you can check the blog that we wrote about this. But we actually, we created a table and then we created local partitions. So we're able to store the data very similar to what is stored in RKSD. Next. So the user publisher, the ESRI XML workspace definition, and you have to provide this information, you know what's the mall input, the rest as the geo-server rest address, because when you invoke the plugin, you are going to talk to geo-server to publish that layer. The workspace, data store, etc. Next. And if you are familiar with your node and your server environment, so the way to publish data in geo-node, if you're using geo-node, is that you directly use geo-node to publish the data. But if you have data in your server, there is a way to tell geo-node that a layer is available for geo-node. Even if it is, so the user that is behind geo-node can have layers that are not, that are not known by geo-node. That can happen. For example, when you install a geo-node on top of an already existing geo-server. So you need to tell your node that there are some layers that need to be available via geo-node. 
And for that, you can invoke the geo-node rest interface. So we developed also this plugin that provides information about geo-node, what is the rest endpoint, the authentication admin, and the data store workspace that you want to publish from geo-server. Next. The other tricky thing was exporting styles. Because as restyles first, they don't use open formats like SLD. So they have their own proprietary format. And usually you can find these as.layer or.layerX. And we instigated some of the best ways to do it. And we found that one of the ways is using geo-category extension that you can invoke directly from ArcMap. With that we were able to export in SLD and then make it available in geo-server. Next. Providing here the link to the QGIS code is called C198-CRIS because that was the number of the project. We're thinking to make this more clean code and documented more. But the way as it is, you are, because most of the things that we do is open source, you are welcome to download it, to use it, and enjoy the way to create this hybrid infrastructure which one of the parts is trying to see if you can export ArcSDE data into POSGIS and you know it. Next. I think that's it. I just want also to add that it's basically like normally the organization they build their geo-special infrastructure either on open source or on proprietary. So and it seems that for us for the UN it was very specific case. We had to figure out the way how to build our systems on both that would support because of the client base because some of the clients require open source, some of the clients require proprietary software. We also were grateful to geo-solutions, to Luiz, to Greece, Korean Research Institute for Human Settlements for supporting this pilot project. It was very successful and we are looking forward into moving to our next stage which is deploying it to the, on the production line. Thank you, Tim Moore and Luiz. That was great. Appreciate it. Very interesting initiative. Great to see open source used in this milieu. That's obviously got some clear advantages over commercial software and challenges with standards and cost. We have about four minutes, three or four minutes for questions and I've got a couple of questions in the room that I'll toss over to you. The first question is are you working with all UN agencies? I'm working sometimes with the UNHCR and almost all of their GIS solutions are based on Esri. Yeah, UN OpenJS initiative is open to all UN agencies, funds and programs to all UN departments and offices and also to academia, to NGOs, to private sector, to everyone. Anyone can join in and yes, so the answer is yes. It's most likely UNHCR will need to reach out to us and we'll see what we also can provide to them and how we can work together. So it's more of each group in the UN would want to opt in. It's not like using this stack is mandated across the UN. It's not mandated, it's voluntarily. For example, if UNHCR would like to move to open source, we will definitely assist them in this. Awesome. The second question is can you speak to participation either by UN or member countries in open source projects and community, thinking of long term sustainability here? So it's a good follow up question to the previous one it seems. Sorry, could you repeat the question? Sure. Can you speak to participation either by UN or member countries in open source projects and community, thinking of long term sustainability? 
So if I understand the question, is the UN actively participating in some of the projects, QGIS or Geo server or whatever the case may be? Yes, we do participate in UN OpenJS. Through UN OpenJS, we participate in various consortiums like for example, is Geo and also participating in various processes and also with GeoSolutions who is developing GeoServer. So that's I think that's our participation this year. Great. And one last question seems aimed at Luis perhaps. You seem to be using QGIS a lot. Have you considered using QGIS server? So we are the main developers of GeoServer and we know that it's very robust so we didn't explore other options. Great. Well, thank you guys again. You can see both Kimura and Luis have their emails in here. If you have any follow up or want their slides or anything, please feel free to hit them up. And thanks again for a great session. I'm going to take about five minutes to set up our next and last speaker of this morning.
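To make the workflow described in this talk a little more concrete, below is a minimal sketch, in Python, of the three steps Luis outlined: loading the exported GeoPackage into PostGIS, publishing the resulting table as a layer through GeoServer's REST API, and then letting GeoNode know about it. This is not the project's actual plugin code (that lives in the QGIS plugin linked in the talk); the file names, connection strings, credentials, workspace and table names are placeholders, and the last step uses GeoNode's updatelayers management command as one possible way to synchronize, whereas the pilot called GeoNode's REST interface instead.

```python
# Hypothetical sketch of the GeoPackage -> PostGIS -> GeoServer -> GeoNode flow.
# All names, URLs and credentials are placeholders.
import subprocess
import requests

GPKG = "export_from_arcsde.gpkg"   # GeoPackage exported from the legacy geodatabase
PG_CONN = "PG:host=localhost dbname=gis user=gis password=secret"
GEOSERVER = "http://localhost:8080/geoserver"
AUTH = ("admin", "geoserver")
WORKSPACE, DATASTORE, TABLE = "un", "postgis_store", "roads"

# 1) Load a GeoPackage layer into PostGIS (ogr2ogr ships with GDAL).
subprocess.run(
    ["ogr2ogr", "-f", "PostgreSQL", PG_CONN, GPKG, TABLE, "-nln", TABLE, "-overwrite"],
    check=True,
)

# 2) Publish the PostGIS table as a GeoServer layer over the REST API.
#    A datastore pointing at the PostGIS database is assumed to exist already.
resp = requests.post(
    f"{GEOSERVER}/rest/workspaces/{WORKSPACE}/datastores/{DATASTORE}/featuretypes",
    auth=AUTH,
    headers={"Content-Type": "application/xml"},
    data=f"<featureType><name>{TABLE}</name></featureType>",
)
resp.raise_for_status()

# 3) Tell GeoNode that a new GeoServer layer exists. Shown here with GeoNode's
#    management command; the pilot used GeoNode's REST interface for this step.
subprocess.run(
    ["python", "manage.py", "updatelayers", "--workspace", WORKSPACE, "--filter", TABLE],
    check=True,
)
```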
|
The UN Open GIS Initiative is intending to provide a sustainable hybrid GIS platform (integrating open-source software GIS technology with the existing proprietary GIS platform) to effectively and efficiently support enhanced Situational Awareness and informed decision making to fulfill the core mandates of UN operations (e.g. Monitors ceasefire agreement & armed groups activities, Sustainable development, disaster risk reduction, etc.). During emergency operations, GIS and Image Intelligence significantly contribute to lifesaving operations, whether search and rescue or any other emergency operations. Having this ability, GIS has proven to minimize the cost of operations, assist in lifesaving activities, provide a common understanding of the situation through visual information of the areas of interest. UN has been utilizing geospatial technology over the past few decades and its GIS infrastructure has been built on mostly proprietary solutions. For the past years, hybrid and open-source technology have grown and matured beyond what just proprietary solutions can provide. Continuing provision of GIS services only on proprietary solution brings considerable challenges, such as limited flexibility, restrictions in data formats, high cost of licenses, limited options for scalability and mainstream, and difficulty to transfer capacity & technology to the Member States (host nations) and working partners. Where hybrid and open-source complement to effectively support UN operational and technical demands, it is complementing UN legacy GIS infrastructure, it minimizes the cost of licenses, which would optimize the cost of running and maintaining of GIS infrastructure. The hybrid and open source technology provides flexibility and streamlining of GIS process, scalability due to cost efficiency, interoperability, innovations, and has a lighter footprint on the infrastructure. Hybrid GIS architecture combines the necessary components and technical demand from both proprietary and open-source solutions through the integration of a geospatial database between a proprietary and open-source that complements both platforms to support every UN requirement. The Hybrid GIS Infrastructure pilot project focused on proofing the concept through the design and implementation of a hybrid prototype to support (1) Unite Map and (2) Open GeoPortal. This talk will share the experience of integration of proprietary and open-source GIS Infrastructure. Background: the UN Open GIS Initiative, established in 2016, is an ongoing partnership initiative and supported by the UN Member States, International Organizations including UN Agencies, Academia, NGOs, and the Private Sector. UN Open GIS aims to create an extended spatial data infrastructure by utilizing open source GIS solutions that meet the United Nations' operational requirements (UN Secretariat including UN field missions and Regional commissions) and then expands to UN agencies, UN operating partners, and developing countries. Authors and Affiliations – Timur Obukhov (1) Gakumin Kato (1) Diego Gonzalez Ferreiro (2) Zeeshan Khan (2) Luis E. 
Bermudez (3) HaeKyong Kang (4) (1) Geospatial Information Section, Office of Information and Communications Technology, United Nations (2) Client Solutions Delivery Section, Service for Geospatial, Information and Telecommunications Technologies, United Nations Global Service Centre (3) GeoSolutions USA (4) Korea Research Institute for Human Settlements (KRIHS) Track – Use cases & applications Topic – Software/Project development Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57295 (DOI)
|
Hello, everyone. Good morning, good afternoon. We are going to start with a new session in the very larger room. We have very, very interesting talks today. We are going to start with Ramiro Asnar. He's a geospatial data engineering at Planet. And he's going to talk about when geometry meets geography. Hello, Ramiro, welcome. Hello. Hello, everyone. Okay, I leave you to your talk. Okay, yeah. Thanks, Alba, for the presentation. Yeah, so, hello, Chabalada, how you are all having a really great conference. This is a talk that I have already given several times this year. And it's about a book that I'm currently writing. It tells the story of some of the geospatial applications that we are familiar with, GPS, geocoding, routing, web mapping, GIS, and the tech and limitation that are behind these technologies. It is half written, so there is a lot of research and writing to be made, even interviews as well. But before we start, let me switch to Spanish just for a second. Because I think that the things in the heart, I have to say them with your native language, I did not know Malena, only a few days in the Tanzania's Fosforgy, I thought I would see her again in person in Buenos Aires when this conference is celebrated. But well, I have very few memories of her and the same thing happens with many of the people who have left us there. And to not forget them, we must remember them. We must tell the stories we know of them, the people who met them. And well, that's how the memories will be fixed and our other people with whom we speak will extend Malena's life and the other people who have left us. Well, back to the talk. When geometry meets geography. So normally when we build, when we develop geospatial applications, our ideas, our code, our apps look like this, right? So geometry is clean, sharp, polished. But reality is more like this guy over here. Geography is messy, dirty and fitting. And yeah, in this book, I'm going to give some examples of this confluence, right? Of these meeting points. So the first chapter of this talk, not the first chapter of the book, is going to tell you, I'm going to tell you about the border. So during my time in Carto working in support, we usually got this type of image, right? So your map is wrong. So yeah, some of my viewers contacted us telling, telling support that Carto based maps were wrong. So the bond that is that they were referring to were located mainly in disputed territories such as Nagorno-Karabakh, Western Sahara, Kashmir. So Carto based maps were built as many, many others on top of OpenStreetMap. So we were using OpenStreetMap data. Well, there are some interesting tasks for to work with this idea. We didn't implement any of them. We were using kind of the default. And also these tasks were not being, as far as I know, that they are not like accepted by the whole community. Other based map providers use different approaches, different strategies. To be honest, I have no idea how Google does it behind the scenes, but you can see here an example that if you are in Morocco, you don't see the separation between the Western Sahara and Morocco. But if you are living in Morocco, then that's it. Like we'll be showing in your map. So it's shows you the layers based on your location. And some layers that are available in SRE servers and so on, they have also this, yeah, this symbology of the, you know, case four or solid four, depending on who is the viewer or where is the viewer. 
In Mapbox, there are only four specific options in the library plus the default. And that's, so they use like point of views, right? And this point of view can be the default or China, India, Japan and the US. And I will bet that Mapbox clients are based on these regions. And as you can see, it is kind of easy to implement, right? It's almost one line of code. And in this example, in this give, you can see like how the border is moving, depending on the Indian point of view or China's point of view. So this that looks like a game in our screens in our code in reality in the field looks like looks like this. So this is, I think this is a picture from the last conflict in Nagorno-Karabakh. So there are wars, migrations, isolations, there are people dying. So we should be aware of these lines and the geography that they represent. Okay, so the second chapter, it's roofing, right? And I will expect that many of you have done the typical workflow that is creating a buffer and then intersecting to get the points. And that's the typical error, let's say. And I have to confess that I have done it several times in my career. So let's take a look at another application. Imagine that you are an insurance company and you are covering the damage of an explosion. So your tool, the application that you run to figure out which clients are covered and which clients are not covered. The application uses a buffer, a one kilometer buffer from the explosion. And you got a complaint from a customer that is telling you that he is already within this one kilometer buffer, but the application is telling you otherwise. So who is telling the truth? So this idea came from a really nice talk from Sakavi, the CL. I really recommend taking a look. There is a link here in the presentation that will point you to the talk. So the idea is that many of the geospatial or all of geospatial applications build buffers simplifying the geometry. The issue is that a circle is an infinite number of points at the same distance of a center, right? And that cannot be modeled or visualized in a geospatial application. Of course, you can do this with things. For instance, using Pozzi IS, but if you want to visualize a circle, you need points to visualize the geometry. So that was the issue with this client, right? So because the application was simplifying the geometry, that point, that green dot in the left side of the left corner in the screen was left out. It was outside the borders of the circle. Another issue or another problem that we find when working with buffers is we have experienced this during the last lockdown. So I don't know if you remember, but many governments allow people to work a certain distance from their homes. In the case of Spain, it was up to one kilometer. So many applications such as this one that was made by the people from Geomatikos, it showed you the distance buffer from your place. But what happens when you have a barrier, right? Such as a river, a highway, and you know, the closer park from your place or the closer hospital or pharmacy is within the borders of this buffer. But in fact, if we use an isochrome, so instead of using just distance, there will be other places that are closer to your home, because in this particular case, you will need to cross the river. This is Valencia, and it's a longer work than the other places. So the third chapter is about one of my favorite topics that is geocoding. So this is the address of a hostel that I went like, I don't know, 10 years ago or so to Costa Rica. 
And for us, Europeans, it's kind of weird because we don't know if the hostel is in Avenue 6 or is it in Street 21 or Street 25. And what about the second line? This is the La Iglesia Sagrada Operazón 100 metros norte, 50 oeste. So it's telling us like a relative position. So it's from one chart, 100 meters north, 50 meters west. So for us, it's kind of a strange. So if we look at the map, this is the Street 25, this is the Street 26. This is Avenue 6. So it's kind of framing us where the hostel is. And also this is the chart that we were, that the second line was saying. So it tells us that we need to go 100 meters and then 50 meters in north and west. So probably this is the place that we are looking for. And in fact, it is. But it has to show you that the way that the post that goes work is different from one place to another. And yeah, so I used the example of Costa Rica, but of course the Japanese way of structuring street addresses is quite different. So it's using blocks instead of street names and what is striking and it's kind of surprising to people who don't live in Japan or in Tokyo is that the numbers of the blocks are being set up based on the age of construction. So you really need someone local to the community or a special application to move around a Japanese city. So these are just two examples to show you the complexity of other postal codes in the world. But even our own ones, we assume too many things. And this is a link that shows you kind of a list of things that we think it's real in when we write, let's say more occidental or more European or US centric kind of postal codes. And there are many, many, many, many issues concerning these assumptions. Yeah. So next time that you type something in a text box that it's going to hit your code in API, think about these things because it's kind of amazing how they are able to translate these expectations. They know where our home is, right, where we work as well. So it's something that is really, really interesting and I love doing the research that I've been doing in the book. Yeah, so the final chapter, it's kind of a bonus. Yeah, I would like to end this talk giving you some ideas to think about afterwards. So there is a lot of debate about how social media, like YouTube recommends new content to users, but I haven't seen any similar approach in your space application. Maybe I'm wrong and I'm missing something, right? So I can imagine something like the Explorer of Planet, one of the apps that my company is currently developing, that you are exploring a part of the world and you get like an image. So the application can recommend you, okay, if you have seen this hardware city that is in the north of Spain, maybe you are interested in another Spanish hardware city or another north European city or another city that is really close that it has also a hardware. You know, so this is just an idea. And lastly, I have been playing with OpenAI GPT-3 model, so you can ask them many, many, many things and it's kind of amazing how the conversation or how the response develops. And you can ask the model to, you can ask the model some geospatial questions and it's quite funny. But what is so surprising is that most of the time it's accurate. And you are, if you ask about the distance between cities or how many cities are within the range of, you know, one kilometer from a specific place, because it has a connection with Wikipedia, I guess. I mean, it has been trained with all these URLs, all this data that is internet. 
Wikipedia should be there as well, I guess. That's really cool. So yeah, it would be great if we can have something like more specific for geospatial. Yeah, that's something that it would be great to see. Okay, so yeah, that's it. Yeah, many thanks. Yeah, if you have any questions, yeah, let me know. Thank you, Ramiro. That was a very, very nice presentation. I really liked it. All the concepts that you presented, sometimes we forget about them, so they are very important, especially not forgetting that we are mapping territories that are inhabited by people and societies that influence them and we had to take them into account. So I think that was a great presentation. There were some comments in the chat for you. A lot of people is relating to these problems about different addresses, different codes, postal codes. So I think a lot of people was also interesting in all of these concepts. Please post your questions in the question pool or comments. Here a lot of people is saying thanks, great reminder of key concepts and also wishing you luck with the research and the drafting. So very nice comments in the venue list for you. Okay, while I wait for more questions. No, for now, I will give the link to in the, I will post the link of the templates so you can all have them. If you wish, I will, if you want, I will invite you to give this talk in the university I teach because I think students will love it. So we can keep in touch because I really like it. So congratulations for that. Okay, I don't see. Okay, I have a question here. When do you expect your book to release? Yeah, hopefully, probably the end of next year. Because as I said, it's half written. So, and it's been one year since I started reading the issues that I'm not, I have to work as well. So yeah, it's too much. But I, yeah, I've been having a great time and so maybe I will spend like one month, I don't know, one month, one week of my vacation just focusing on that and maybe I can speed up. But yeah, I will say it like next year, probably the last quarter of this year of 2022. Okay, we are looking forward for that. So there are some other questions. Have you found any reports of issues caused by changing in DATOMS over time, locations moving? No, to be honest, I have kind of skipped the projection issue because it's like a, yeah, it's a big Pandora box and there are people who are more knowledge about projections and photography. But yeah, I'm going to make a notation because that's a good idea. Okay, great. Are you going to make any maps for your book that you are excited about? That's a really good question. Yeah, because it's kind of a moreative, creative kind of book because I also tell the story of how these technologies have been built over the years. Yeah, there will be maps but easy ones, more schematics than, yeah. Okay. Well, someone is asking about boundaries in Mauritius domain. So in the sea, but he missed the, he or she missed the first part of the presentation. So I posted the link of the slides already. But if you want to post also your contact details, I can share in the venue list, your email. Yeah, regarding the boundaries, the sea boundaries, no, I haven't because the chapter about the territories, the disputed territories is not still written. But that's something that I will need to investigate as well because it's, especially now with the Brexit and so on. It's interesting. Okay, great. If there isn't any more questions, I think I, yep, I tell them all. 
You can also then go to the venue list to see all the comments. There are very interesting comments in the chat and some feedback. So it was very nice. Okay, thank you very much, Ramiro. We can say goodbye. Yeah, let's keep in touch. Yeah, yes, I will write to you.
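The insurance example in this talk, where a customer is genuinely within one kilometre of the explosion but falls outside the application's buffer, is easy to reproduce, because most libraries approximate a circular buffer with straight segments whose chords cut slightly inside the true circle. The sketch below is a minimal illustration using Shapely with made-up planar coordinates; it is not tied to any particular product mentioned in the talk. The second positional argument of buffer() is the number of segments per quarter circle, so a low value produces a very coarse "circle".

```python
# A coarse buffer polygon can exclude points that are inside the true 1 km circle.
# Planar coordinates in metres, purely illustrative.
import math
from shapely.geometry import Point

centre = Point(0.0, 0.0)
radius = 1000.0

coarse = centre.buffer(radius, 2)   # 2 segments per quarter circle: a crude octagon
fine = centre.buffer(radius, 64)    # 64 per quarter: very close to a true circle

# Put a "customer" 950 m from the centre, towards the midpoint of the coarse
# polygon's first edge, i.e. where the straight chord cuts deepest into the circle.
xs, ys = coarse.exterior.coords.xy
mx, my = (xs[0] + xs[1]) / 2.0, (ys[0] + ys[1]) / 2.0
scale = 950.0 / math.hypot(mx, my)
customer = Point(mx * scale, my * scale)

print("distance to centre:", round(centre.distance(customer), 1))  # 950.0
print("inside coarse buffer:", coarse.contains(customer))          # False
print("inside fine buffer:", fine.contains(customer))              # True
```

On top of this, working in geographic coordinates adds projection and geodesic distortion, which is a separate source of error from the polygon approximation shown here.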
|
Geometry is clean, easy and polished. Geography is messy, dirty and unfitting. When we -geospatial developers and technicians- build applications to solve real-life issues, we usually rely on the first. But eventually, we have to deal with the second. This talk is a very short sneak peek of a book I am writing to explain the most interesting cases I have experienced when working in the industry. Can a buffer tell me if I can take a walk to the nearest park? How is a disputed territory such as Nagorno-Karabakh displayed in a webmap? How do geocoders understand the wide diversity of national postal systems? Authors and Affiliations – Ramiro Aznar Geospatial Data Engineer at Planet Track – Transition to FOSS4G Topic – Data visualization: spatial analysis, manipulation and visualization Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57296 (DOI)
|
Okay. So hello, everybody. On behalf of Women in Geospatial, I'm welcoming everybody to this networking session that is going to start with a panel. We have four panelists. Wonderful ladies. We have Maria Broselli from the Politecnico Milano. We have Miriam from UP42, if I'm not mistaken. We have also Natalie Sidibe from OSM Mali, and we also have Daniel Boronin from the University of Albao in Huntsville, and Nasa Impact. So without further ado, I'm just going to tell you a little bit about this session itself. We're going to have this panel session for one hour right now, and then we're going to head over to Work Adventure for a more interactive chat. And basically, we're going to have some interaction with the you ladies, the ones that want to participate, and then we are going to have some free chat as well, going from our session on Monday, which was also about networking and careers. So yes, without further ado, I'm going to introduce my speakers, and I'm just going to ask them to introduce themselves just as a heads up of their career. So let's start with talk wise. Let's start with Maria. Okay, so thank you, Christina. Thank you for inviting me. Thank you for your presentation. I'm Maria Brovenli. I'm from Italy, specifically from Como, which is a city close to Milan, and I'm professor of Geographic Information Systems. With respect to my career, I studied before physics and specifically environmental physics, and then during my thesis, I had the opportunity of working with people here at engineering, where I'm now working, and I started working on geodesy, as a matter of fact. Then at a certain point, I was told that the only opportunity for a position here was to start teaching something absolutely new that was GIS. This was in 1992, many years ago, because I'm old. And I say, why not? I will start studying this GIS, and so I started teaching GIS, and then since then, I always worked on GIS. Okay, this is the fourth of my life. We're going to extend on that anyway. I'm just going to let Maria introduce herself as well. Okay, perfect. Should I do Maria, or just a short and then we can go to the questions? I got a bit confused, Cristina. Just a short introduction of yourself, of what you're doing, and then we're going to go into Geospatial and Open Geospatialism. Yeah, awesome. Thank you. My name is Miriam Gonzalez. Right now, I'm doing partnerships for O42. It's a startup in Geospatial. I'm volunteer in different organizations. I am part of the board in the humanitarian open street map, and also I am an Air Observation Evangelist for the FIRE Initiative. That's something for supporting the Green European Deal and having Air Observations supporting all the goals. And also I'm co-founder of Yochitas. So I'm happy to be here with you guys. I'm happy that you honored our invitation as well. I'm going to let Danielle also introduce herself. Hi, everyone. My name is Danielle Grunin, and I currently live in Huntsville, Alabama in the United States. I'm a research scientist at the University of Alabama in Huntsville working on a NASA impact project. My background is also in physics, like Maria. I chose to then go pursue atmospheric science meteorology as my master's, my PhD. In between that, I was a high school physics teacher for three years. So I'm really into science communication and outreach, really, because of that. Right now, I'm working on really NASA Earth data projects. 
So I'm working more as a data scientist now and working on data stewardship and data curation for projects involving airborne and field campaign data for NASA, and then a joint ESA-NASA project called MAAP, which is related to biomass mapping. Thank you. Very happy to be here, too. Very happy to have you here as well. And we're starting to have very diverse backgrounds here in the panel. Next we'll hear from Natalie as well, and then we're going to go on to learn more about your geospatial careers. Okay. Thank you, Cristina. And hi, everyone. I'm Natalie Sidibe from Mali. I'm based in Bamako, the capital city of Mali. I got involved in OpenStreetMap in 2014 and I'm part of the OpenStreetMap Mali community. It's great that you all have such diverse backgrounds. And I think our audience would like to know a little bit more about what got you into geospatial, because Maria, you already touched some of those points, but many of us started from a different background or got into geospatial by chance or by luck or anything. So we would like to know, if you have had a good career in geospatial, how did you get there? I'm a human resource manager. Yeah, I studied human resource management here. And I learned about OpenStreetMap in 2014 in Mali. So I was trained. And then I decided to be involved in OpenStreetMap, and to do all the OpenStreetMap work, all the promoting of geospatial data in Mali. So it's, yeah, I'm here by chance, I can say it, but I'm very lucky, because since I got involved in this field, all my life has changed. So I learned a lot in OpenStreetMap and all my life changed because of OpenStreetMap. So all I have now is because of OpenStreetMap. Yeah. That's a... And I continue to learn because I know what impact I'm making in my country and also in West Africa. So I'm very lucky. I'm very lucky. That's very fortunate... I'm fortunate to start with you: you keep learning and you can see the impact you make in the field. I'd like to extend on that a little. Sorry, can you come again, please? I'm sorry, because I'm French, I'm francophone, if I can say it. So English is not my first language. So I'm doing my best to understand, to talk with you. So sometimes I cannot understand what you mean. That's not a problem. We're very happy to have you, whatever your English level is. So the main point is to learn from your experience... I can't understand what you were saying, sorry. I just wanted to give the floor to you, just to expand on what's your work, what started you in geospatial as well. Sorry, I was muted. So okay, now it's my turn. So it's a funny story, I would say, because I didn't study anything related to Earth observation or mapping, but I always liked maps. I remember I was collecting maps when I was like 10 years old.
And also I used to love to watch Carl Sagan giving all these amazing Cosmos TV shows and listening to him in Mexican Spanish. He was fantastic. So I think, I mean, that's how I started loving science. I mean, life took me to international business because also I think around me there were kind of, not many opportunities in my hometown. I mean, I didn't see, I didn't speak with people in this field, so I decided to study international business. And then I was in different industries. And then I was taking one year off living in Beijing, China, because I wanted to quit everything and be kind of fancy-free for the year. So I came back to Mexico after, I mean, I ran out of money because I was living with my savings. I had one joint in my pocket and then one friend asked me, what's next? And I was saying, I have to look for a job. I have no money, I'm broke. And then I had to start working soon. And by chance, this friend, she was working in this new startup from Silicon Valley. They were looking for someone in Latin America for doing business with a GPS application on phones. Can you imagine that time? It was kind of like the early days of GPS. No Waze, no Google Maps, no nothing. I mean, OpenStreetMap, I had no idea of that. So I started working with this company. And then I started getting to know about satellites, about the GPS, how it works. And then two years later, about open mapping. And then I discovered OpenStreetMap. And I was like, what? Why do people do mapping? There are already maps. Like the question I received in so many workshops I gave all around. So that's how I kind of started the journey in geospatial. I mean, by chance. And I'm so happy to be in this journey for already 11 years. And I don't see myself anywhere else. I mean, I really have to continue learning. And with the use of the computer and the things that I need to talk about regarding climate, regarding disaster management, regarding, I mean, how can we support challenges on Earth, I mean, with algorithms and Earth observations. So many things I think we can do with geospatial. And we are doing it in every single field we are targeting now. So even if I'm not so technical, I'm more on the partnership side. I am trying to put together companies doing valuable data that can work with algorithms, so that we can solve challenges together. That's kind of my role at this moment. Regarding what I'm doing in OpenStreetMap, it's kind of more like supporting how the organization can achieve the mission of mapping 1 billion people in the next five years. And I will be speaking about that in a talk tomorrow at 1, also in one of the other rooms. I would like to add something very important. When I learned about OpenStreetMap in 2014 in Mali, my country was facing a big security crisis in the North. So there was no data, no OpenStreetMap data. So we learned to produce data. So we understood the data we would be making could help a lot of UN agencies, NGOs and also people who make decisions. And we understood that all that data would be very, very helpful for our country to address many crises. So that's why we are here. We are here and we are very, very happy and proud to contribute to the development of our country. Thank you. Indeed, data is very important.
And you both had the tremendous journey on going into geospatial and especially when it comes to getting in the early stages of geospatial, maybe we can hear about that a little bit more from Maria and because she already gave us a little bit of a flavor of that. Okay, so speaking about data, yes, I believe this is something that I was interested since the beginning as a matter of fact, but I want to add something to what Natalia was saying because in principle we are speaking about maps. And if you think about maps, the maps were before made for military purposes. That was the main purpose of making maps. And what is very relevant is that on the opposite, people now have the maps. And this is the big point in my opinion, the big difference with respect to the past because having the map means controlling the territory, means monitoring the territory. And therefore it is very important what Natalia was saying that they realized that they needed the data of their country. And I believe that a solution like open street map or open data is the best solution for every of us. And so this is my position with respect to data, but I want to add something more because we have to do the same also with respect to software as a matter of fact. So there is no reason of keeping secret the software we are developing because there are so many problems in the world. We have so many challenges. Why to repeat different people the same procedure, developing the same piece of code when on the opposite we can collaborate for creating something where we can live better all together. And so now I leave the word to somebody else who wanted to go ahead with this point. I'm going to go to expanding when what you're saying about data and everything else. I'm going to go to Daniel because she's doing some outreach or she has done some outreach. And I think her experience in geospatial is also interesting. Thank you, Christina. I like the visualization of the data. I think that's so important, especially in communication and outreach, especially with climate change, it's a very complex thing to understand. And environmental science is a very complex thing to understand. But I think these geospatial maps really help people better understand and visualize. And it's basically science storytelling. What I really like to say is you're telling a story through this visualization, through this map, or through this, and really helping people understand the science better. Exactly. And there are many, many touching on the open data and open software and so on. We have here a panel of women and women in geospatial. So we obviously want to change the status quo and go from manals to women-only panels or women-balanced panels and so on. So my next question is basically based on some statistics that we got from the Phosphor G registration. Only 20% of the registered attendees of Phosphor G have answered that they are women. That's a very worrisome statistics. Now, I have to add that this is based on whoever answered, but scaled the number of attendees, this is still worrisome. And we've seen this as a statistic over geospatial in general. So I wanted to know what got you, well, what got you interested in open data, it's obvious by now, but what do you think about this phenomenon of women not going enough into open geospatial? Let's get with Maria. Yes, I understand Christina, you're frustration, but please be positive, because when I started working on geospatial information, I was the only woman there. 
So now it is 20%, it's a lot, it means that we did, we have been doing something. So it is from the cultural point of view. The science and technology were not considered something for women, as women, we were supposed to do something else. I don't mean only giving birth to children, but doing something else, so literature and all these talks, music and so on. So this is being so involved in science and technology, like the 20%, I know that is not enough, but it was a big change in few years. And I see the number of my female students increasing year by year. So don't be pessimistic, because I believe that in few years, probably we will be the alpha, if not more than the alpha, because women are very good in science and technology, in my opinion. So now it's not about 20%. Well, we hope it's not 20%, it's just the statistics we got for Fosferty, obviously. But we touched this topic on Monday, and we've seen a lot of women dropping from geospatial in general, and women dropping from open geospatial mostly. So I wanted to learn from Miriam or Natalie, if you had any of these experiences while working with an open geospatial. Thank you, Christina. I have one question. This might be 20% is about the people who didn't refer to their pronouns, or of the people who actually sign up for and registered to Fosferty. So indeed, it's not 20% overall statistic. So it's just the people that answered about their pronouns, but even so, in my talks with Maria, and so on, we just realized that they're very, the percentage of male to female, it's not very balanced in Fosferty itself. For me, it would be also interesting to see, I mean, if you have the normal questions, female, male, and not answer or non-binary, I think that's kind of the other type of question that I received a lot. It would be interesting to see how many people they answered this type of question, and how many people still they are not familiar about the pronouns. Because I feel that there is many people, including in me, took me a while to understand how it works, the pronoun part. So I wonder is, because the lack of understanding about how the pronouns work, or they are not familiar with this, or they are used to this kind of question. So just leaving that in there, because I also don't want to be pessimistic, like Maria, I see, I think we need to see the light at the end of the tunnel. And I hope there are more than 20%, at least in my company, I mean, I was able to get the support from the management, and we are 18 people watching Fosferty. I don't know how many, we are women, but maybe, I mean, I would say maybe 35%, we are women. So I'm happy to share that with you guys. So from my side, I also read with Maria regarding, when I start giving talks, when I start getting really involved and fell in love with OpenStreetMap, and I realized, the first time I saw the data from my home country, Mexico, and Latin America, comparing to Europe and other places, and I was like, wait, these guys in Germany, they are already mapping the trash bins. And we don't have streets in Mexico? Come on guys, this is not possible. So that was kind of a book I was making in every single word that I was giving, because it was true. I mean, you see Mexico with lack of data, not even streets in medium-sized cities or like important towns, and then you were having all these things already here in Europe, already mapped. So that was back in 2014, at the time I was also not really enjoying OpenStreetMap, it happens also to me, this kind of discovery. 
So at that time, I started giving workshops and also with some other people in Mexico, Latin America, and then I saw the balance, as Maria was saying, that it was really hard to see in the conferences, you already had to see in the workshops, female presence. So we start thinking about how to promote the promoting, and that's how also, also, your HIKA started at a certain point. But I think now when I go to conferences, and I see also, I mean, there is think, of course, I mean, not a balance that we want to see, but at the same time, I see how things are moving. Of course, not in the time and the pace, we would like to see them. I mean, we would like to see them faster, because I mean, the ones that we are involved, as Maria Brobelli, as Maria Arias, you, Cristina, you, Alina, you, Natali, I mean, we make a lot of noise. But there is still a lot to be done, to be able to bring and keep more women interested in the matter, more interested in what facilities they have for their careers, and for some other things within your special. So I think we still need to be a lot of, so people sometimes, they don't know how to approach, how, the special world, they don't know how to do certain things. So I think, as long as we continue what we are doing right now, regarding mentorship, regarding having maybe one, two, one with people, saying, hey, I mean, what do you like? I mean, how do you see yourself in maybe three, five years? Why you don't submit this talk? I have been supporting so many women regarding checking abstracts or sending links about events that they can participate because the first conference, it will be like, oof, I'm not capable, I don't have nothing to speak about. And then it becomes an addiction. You want to keep speaking about what you're passionate about in one, in two, in five conferences and more. So I think when we switch this to more and more women, I mean, then we spread the word and then we're going to be continuing to see more, more presence in open, in your special, in your special and in general. So I think it's an ongoing process. I mean, and of course, I mean, things are changing, not, as I mentioned, I mean, not how we would like to see them in the short term, but we cannot, I mean, keep us at quiet or in silence. I mean, this is something I mean, we should keep working together. Exactly, empowering women is something that is really important and giving them, we have so many programs of mentorship right now and it's really important to make this noise about them and let people know that they are there, they can access them, they can go and ask for help if they need it. It's also important to not only just mentor as a technical or at skill level mentorship, but it's also important to have at least a chat with somebody and sometimes it's really, really useful to unload. And I'm looking at the discussion that is going on, VanuLess as well. And we have some comments about women being burned out and this being one of the reasons why they are living STEM and why this imbalance between us and our male counterparts happens. And sometimes it's really good to at least have somebody to unload or somebody to encourage you in the process. I would like to learn a little bit from Daniel because I know that you, since you've been working on this outreach phase, what have you, what's your experience, what have you seen? So I am of the opinion that 20% is way too low. I think that we definitely need more mentorship. We also need to see a lot more female role models. 
We need to see a lot more professors at the university level, women. And we need to see a lot more female teachers from K through 12 in the US system. I think it's really important that women are out there speaking about science on TV, on shows just encouraging women to pursue a STEM field. I think it's really important to have a community and network. And especially at the university level, I've read about some studies that women drop out of PhD programs or they don't pursue academia because they maybe had a male professor or PhD advisor and they just see how the system is just majority is male. So I see a lot, I see one of my goals as in communication and outreach is to say, hey, you can do any kind of science you want, any kind of STEM field. So that's why I like doing a lot of the community outreach so women can see that. Cause I've had female role models that I see on TV or I have read about. So that encourages me. So I hope to encourage and inspire other women. I really, I would like to add something to what you just said. There was a recent documentary that I encourage everybody to follow. It's called Picture Scientist. And it's about the different stories of, I think for women, if I remember well, about the way that they entered the field of geospatial at different levels and how they have navigated this journey. What was their experience and how the fight for us to get to a balanced level started. It's really important, as you said, to have mentors, to have people stepping up and trying to help the others and also have role models, people that we can look at and learn from them. And sometimes it's even good to see them as I want to be there. And it's nice to have more women in this position. So I'll let Natalie also give us her thoughts about how it is to go into open geospatial and how it is to be a woman in this field and what prevents more participation. Yeah, first I would like to say that I do agree with Maria and Miriam, so let's be optimistic because we are doing a lot of things to get more women in this field. In Mali and in West Africa in particular, it's difficult to be a woman in geospatial or even a woman in other's life. So we have a lot of challenges here because we are facing religious, culture, belief which can, which are obstacles for women to progress, to be a leader and to be involved in public life. But we are doing our best and also the government is doing his best because we are currently a minister who are promoting women, women empowerment. So in our level in OpenSafeMapField, it's difficult for me and but I got every time support from everyone whenever I'm facing difficulties, whenever I'm facing technical difficulties or yeah. Anything, so it's not easy but yeah, I'm here and I'm very decided to go ahead. And also to get a lot of women involved in geospatial. We, three months ago we initiated a program to train women, to learn, to train them to map their own challenges. I think this is a best way to get involved more women, to learn them to map their own challenges. So that's what we are doing currently. Yeah, and I think this can be also, yeah, we can try to do same thing in other countries, yeah, to learn women to map their own challenges. Thank you Natalie. Based on what we discussed so far, I see that we have a question in the chat regarding what was the most important thing, mentor, opportunity, meeting somebody, jumping to a job that helped your career in geospatial. Just if we can single out one little thing. Okay, can you come up? 
Oh, no, I got to say. Please, can you come up again please, Tina? If you can single out one little thing that helped your career in geospatial. A mentor or opportunity, meeting somebody. Yeah, I think, I think we can do that. I think we can do that. I think we can do that. I think that's fine. Yeah, I think the mentorship, because we, yeah, I've been trained by what we call the, opens with map front of phone. This is a program who trained us and who who were here to turn also, opens with map community in West Africa. So we, yeah, so I was personally be supported by this program, yeah, since the beginning till now. So I learned a lot of things from this program. Yeah. So I can say it is a mentor, mentorship is the best way. We're going to take mentorship. I don't know if this is the best answer of your question, but. It is, it is, and we're going to take this as a word for this session. I'm going to let Maria also tell us what's her. Yes, I tried to be very short anyway. The point is that, as I told you, I started very early with the Geospatial Open Geospatial Information. And in 2000, I organized the first conference in Italy, not about open source GIS, because at that time there was only grass. So it was the first user conference of grass in Italy. And then we have this first meeting attended by 50 people, around 50 people. And I was surprised because I don't felt any more alone. I said, oh, other people in Italy who are interested in grass. And that's wonderful. And so we decided to organize in 2002 what we called the first international meeting of grass. Then we discovered that it was not the first really. There was another one also in US, but we simply didn't know because the connection at that time were not as they are today. And so we organized this first conference, international conference. And I remember that it was attended by people like Venka, Venkatesh Nagavan, and Marcus Neteler, and Elena Mitashova. So for the first time, I saw these people who were not Italian, and they were working with grass. So that was surprising. We are not alone. Oh, come on. There are other people like me around. And this was very positively surprising and giving to me, but also to the other Italian people a lot of energy, like saying, come on, we can go ahead because we can build something from that. We are not alone. It's something that is spread all over the world. So we are very, very, very strong. And we must go ahead. This was the beginning. This was the beginning and was probably the most important event with respect to geospatial information in my life. That's a wonderful experience. We also had on Monday Veroni Candreo. She also talked about her experience with grass and how she got into geospatial. Was it really kind of the same? And the interesting fact is that you both pointed something. We are not alone and we are strong together. So that's something that we're going to take as a second or on par with mentorship from this session. I'm going to let Daniel as well to see which is her. Yeah, I actually got my current job from a mentorship or an informal friendship kind of thing. About two years ago, I presented at a science team meeting and my current boss was so impressed with what I did and how I presented and everything like how quick I was thinking on my feet, that kind of thing that about a year and a half later, she offered me a job. 
So that just goes to show that I think it's important to volunteer to go speak at conferences or put yourself out there because it's just all part of that networking and you just start meeting other women and clicking with them and sharing experiences. And in that case, with my boss, we just clicked over, you know, being women in a male dominated field. So that was really important and just led to my current job. Yeah, I would like to say that networking is definitely something beneficial and I encourage every single woman out there to go to try to push themselves a little bit and get out of the comfort zone and try to network a little bit more, get involved. First steps are never easy for anybody. Nobody's perfect. So it's important to not be afraid to make mistakes and put yourself out there. I'm going to let Miriam as well. Yeah, thank you. So one point, I think I have different points that also make me become more and more passionate. But I mean, two that I can remember right now is when I realized, I mean, when I was mentioning before that Mexico was suffering data, I was thinking, as Maria said, I mean, I feel so lonely. I mean, now what? So how can I organize with more people interested in the topic? I went to this hack space in Mexico City, Rancho Electronico, and because they have some mapping gatherings every week. Then I went there and then they were doing some mapping workshops and then we started speaking about how we can do things together. And then suddenly, I mean, we start giving workshops for universities and also even for governments that they didn't know anything about open data or open street maps. So that was super interesting. And then I remember, now in Corona times, it feels like crazy, right? We did this workshop in the one we had 88 students. So we were like, I mean, how come there are so many people participating in this workshop? So the thing is, we were like, I mean, I tell you, more than happy about participating, about doing all these potential workshops and also showing applications and how we were doing things. So that was also for me, like, we need to do more work. We need to keep spreading the work, like kind of a evangelization of these open data solutions. And then also one thing that also, again, it gives you that feeling about what else, I mean, can be done, set of the map. The set of the map, the first one I attended and also I gave a conference was the one in New York City in the United Nations. And that felt good. That felt so good. So being there in the one, I mean, you have seen so many times in movies or in actual UN conferences and being there, speaking about data and about maps. So that was also really, really like an experience that I can never forget. Indeed, getting involved in all these initiatives, all these even going to your local group. And I don't know, just doing something with them. It's really beneficial. And I'm looking at the questions and actually we have two questions related to this. How do you get involved in the geospatial sector as a career? A progression route, a graduate scheme, apprenticeship? I think you already answered some of these questions. Just go there and get involved into your group. Or like, I don't know, pat somebody on the back and say, oh, I want to get involved in this when you network. But I would like to learn from Maria because I know that you have very, very limited time right now. So Maria will have to leave us in about four minutes. I want to hear from you. 
How would you get involved your students in geospatial more than just the university course? And what key personality traits they should have to be successful? OK, yes. Thank you. Thank you also and sorry for, because I have to leave, because I have classes. I have a class at 2.15 a.m. to run to the classroom because it's not close to my office. And yeah, with respect to students, first of all, I have to tell you that I'm a very severe professor. So I wanted them to study a lot because it's something, yeah, it's not so. Studying is something important. And we must study because we must be better than the others. This is my starting point. But besides that, it's really important also to transmit them the importance of what they are doing. What is the reason why they are doing that? What is the reason for becoming good map makers, a good GIS expert? And so generally what I tend to do is to propose them some projects which have some social or environmental content because I believe that science is science and tech. It's really science and tech when it is science and tech for good. I'm not interested in the other science and tech as a matter of fact. And this is what I want to transmit also to my students. And then the other point in my opinion, which is very relevant, is to teach the students how to collaborate together because I believe that all of us, we have different abilities. And if we want to solve the problems that we are facing, we have to do together. So I really believe that leave no one and no place behind. It is not just a set of words, but it's something that has to guide our life also when we are dealing with geospatial information. So this is what I'm trying to teach to my students, which is beyond obviously the geospatial information, but I believe that is important all the same. And I like when I see that they feel the same like me with respect to that. Then I accept those students because all of us, we are entrepreneurs, obviously, but I see that my favorite is for these people feeling this way. So I don't know if you, I answer completely your question because it's very, it's a very complicated and long one, but at least they gave us starting point of view. Thank you, Maria. I think that's something that answers the question and whoever is attending this session is going to get a good answer on how to proceed in geospatial. I'm thanking you for being with us today. I know you have class, so I'm going to continue to build the panel with the other ladies. Thank you so much. Ciao. So we're going to continue the panel and I'm going to go over to Daniel to actually expand on what Maria said. Yeah, I think that it's important for people to be proactive in setting up their career. So some of my suggestions would be to get involved or to create your own website, create your own Twitter account related to geospatial stuff. Start networking on social media with people who you admire. Start looking at their projects and just, you know, emailing them, slacking them, whatever, saying, oh, I was interested in this and, you know, do you have any internships or, you know, just start asking questions, just putting yourself out there. And I found that the more you speak at conferences, the more active you are online, the more opportunities will come to you. So that's kind of my advice for getting involved. And I would like to hear from Miriam because she's very active as well in this. Thank you, Christina. So as I mentioned, I mean, I didn't study your special as my major. 
I mean, life took me to your special. And at the beginning, when I started this special journey, I mean, I had two computers, one computer with the meetings and some other things, the computing. Okay, what they said and then taking note and then looking then what that means. I mean, what means one word that I hear that I never heard before. So all these terms in your special that people won't know unless you study a major in your special. So I don't remember which words, but I mean, for sure, I mean, there were tons of words that I didn't know. So I have to look for definitions and then see diagrams about how things work. So for me, that was something also that led me to start managing technical teams, even without having the special background. But then at some point, I decided also, I really want to go like a bit deeper. So decide to go to school again after a few years of not being in school. And then I went to study the geomatics. I studied one year of geomatics in the UNA in Mexico City. And that was also really cool because I got more involved. I mean, using tools and play with different things. So I think if you're interested, I mean, you will start looking for things that learning by yourself and also now there's so much material online. I mean, you can spend three lives watching tutorials from YouTube, I would say. And still, I mean, there will be some pending for the next life. So I think that's something really, really is a nice thing to have today. I mean, we didn't have impact because there were not enough tools. So also within your special, there are so many different things, so many different lines of action. So I would say that what is people passionate about? They should like check, I mean, who is attending, who is doing certain things, maybe look more information in that topic, maybe taking some MOOCs, maybe taking some classes, some other things. So you get more involved in those. For me, doing the volunteer work has been opening so many doors everywhere. I mean, you cannot imagine. And without this volunteer work, of course, I mean, sometimes can be tiring, sometimes can be exhausting. Sometimes you are like, should I rather be having a beer with my other friends who doesn't have nothing in the special more than writing this paper? But I mean, you have to find a balance in the ones you're happy with all things. So with your non-special life and also with the special life that you're living because that's awesome at the same time. So I think people should find their own paths in my case because I didn't have to dig for that and for that for finding what I like the most. But I think right now it's really easy. I mean, as I said, following people, checking online, what you like the most, and don't be afraid about asking people how they can participate with you in certain projects. So if you don't have this fear about asking, what can be worse? They would say no, but maybe they would say yes. So just go for it and ask. That's the thing. That's a very good advice to go for it. And it came from both you and Daniel just to put yourself in a position in which you try to make the connection with other people. And I don't want to under us. I mean, Maria is not here anymore, but it's not that point. The idea is that you have some formal education that obviously you need to follow. And there is this thing in which you need to better other people. You need to learn. You need to make yourself a professional in the technical point of view. 
Or when I say technical, I'm not referring necessarily to the technical skills, but professional point of view. But you also need to consider the fact that you have an alternate path that helps you get this career into your special. And I would like to hear Natalie's thoughts as well about this. I think you're made to that. I think it's the most important thing. From my side, why I'm involved in this special world? As I said in the beginning, I realize we need to produce data. Because my country is facing a lot of problem, a lot of crisis. And I understand that the just-person data is very... I think you muted yourself, Natalie. Sorry. I said why I'm involved in OpenStreetMap in the special world. Because I realize since the beginning that being involved here is the best way for me to help to support my country to solve. A lot of problem is facing. A lot of problem my community are facing. I'm also trying to get other people in this field so we can together achieve our good. So today we were able to set up for example two youth matter chapters in two universities. Which are also very active. Which are doing that. Who are training other students in universities. And also we were also able to set up a framework of exchange. Including government agencies, including public and private enterprises. So today we are going to talk together about just-person data. How they can work together to share data for solving our problems. Yeah. So I wanted to say that's a very good take out. The fact that we heard about mentorship. We heard about coming together. We heard about the fact that you have to expose yourself and go out there and look for opportunities. As you said you have to not, as Miriam said, you don't have to be afraid of reaching out to people. And these are all also collaboration. These are things that put forward a geospatial career. I would like to, before concluding, I have another question. And I think it's an important one to answer now in the panel. But we're also going to go into a discussion afterwards. With everybody that's on the banuless session and wants to join us for networking afterwards. So my question is, what point of your career you felt really down? Or you had this moment in which you didn't think you were going to survive geospatial, let's say. And what made you go over it? Maybe we can stop with Daniel. This is, it is really tough. I had a couple of times in my career, but I'd say that the, when I was getting my PhD, and it was like year five. And I was just not being able to work on my ideas or just everything was kind of being dictated to me. And I wanted to quit. I wanted to quit my PhD. You know, I had all male committee members. I just felt kind of like beaten down. Like, I don't know what I'm doing here. I don't, you know, didn't feel worthy. It was a whole imposter syndrome and everything like that. But I got over it mainly with the support of my family and friends and knowing that it would, it would help my career in the end. That I needed to get my PhD and finish it, get that degree and that I could then go on. And I just had to remind myself why I started the PhD in the first place. My love of science, you know, solving climate change, making the world a better place. So I just had to kind of remind myself, but there was a good year there where I wanted to quit. So that was a challenge. Thank you, Danielle. I think every PhD student, myself included, has this imposter syndrome. We all go through that. 
And it's really important to remind ourselves why we're here. The reason why I'm asking this question, maybe I didn't give you enough context is because on Monday we found a very heartfelt conversation. About different experiences that women had in geospatial. Some of them overcome them. Some of them couldn't. And I think it's important to hear your thoughts and your experience. Maybe it's a good one. Maybe it's not a good one. But this is not a perfect field and it's not going to be a perfect career. But it's important to see that other people with stellar careers have struggles as well. So I'm going to go to Miriam as well. Yeah, thank you, Christina. So I think from my side, when I have people training on the street, of course, I mean, is so many diverse backgrounds and diverse minds behind this initiative. So sometimes you think you're doing certain things in a good way for the benefit of the global project, but maybe people, they will see it in a different way. So for me, when I was coordinating with my team, some data input, I will be benefiting so much for adding roads and municipalities and boundaries and so many things. I thought, wow, I mean, we're going to be saving so much time invested in doing kind of manual mapping. I mean, thanks to this. So for me, it was like a no brainer. It was like an awesome project. And then of course we have to present it to the community and do some presentation regarding what was happening and also in case something fails, how we can revert it. So when I started seeing some people saying that we were trying to impose and we were trying to, instead of building community, doing things that were not benefiting the map or the community, I was thinking, I mean, this is positive. I mean, I don't see how there's a negative way about adding data useful for everyone in this case. So I think for me that was like, like, in one point I was like really low in like trying to see, I mean, I mean, how can they not see that this is positive. So for me, it was like, like a strong moment in the one say, should I continue with this because it's like exhausting, it's mentally physically exhausting that you find kind of walls in front of you. In the ones they don't let you go forward for doing things that you think are positive. And then if we think that maybe 200 mappers will be taking 20 years to add what can be added in a, in maybe two weeks of work. I mean, why we shouldn't go for it because I mean the goal is having the largest map of the world and then benefit people from it. And so for me that was like some starting moments in the ones I was not sure if I was the one wrong or what was happening there. I mean, and how, because also in the mapping world, I mean, they can be some harsh discussions in the ones they make you doubt about yourself or your capacity about if you are doing things right, or if you are also kind of blind and you don't see the overall picture. So I think that was a strong moment for me. This is, this is a very present recurrence team actually this imposter syndrome, the fact that you get undermined by other people. And as a woman you feel it a little bit more, I would say, and this is cause this kind of feels like discrimination in certain way. I think we had a question about this regarding the role of fader as part of the process and I think your answers kind of answered this partially answered this question. We could expand a little bit more if you want, but let's hear from Natalie as well and then we can get into this question. 
Yeah, from my side I can say that yes, last year when I was to to lead the impensity projects in Mali, so I was very afraid because I was wondering if I would be able to achieve the goal of the project. So every time I have to lead a project, my first question is, I might be able to achieve the goal of this project. So every time I'm afraid about this, but because I like also collaborating with other people, so I'm afraid, but I know that at the end I will be able to do because I will be collaborating with other people who can support me, who can technical support me, who can help me, who can advise me every time. So yeah, so what I can advise is collaborating whenever you are afraid or you are not sure to be able to do something. The best way is to ask someone to help you. This is what I do every time. We actually have a question about that. I would like to keep the conversation going, but this is what we wanted to achieve for the panel, so at least get these questions coming and also have some starting points for our discussion. I'm going to keep an eye and collect all the questions from the venue list and I'm going to thank our panelists for actually being here today and sharing all this information with us. Alina, I think she noted down some of the key points of this discussion and we're going to let everybody know. I just wanted to tell everybody that we're going to go for the next two hours into our work adventure environment. So if I can share my screen for a second, I wanted to show you what this one second. So what's annual is. The next next hour we're going to go into a venue less. We have their work adventure map. If you're registered to force for G you can access it access it from the main panel there's a women in social meeting there. You can customize your avatar and you can interact with us. We're going to meet in the purple room of work adventure. So once you enter the venue, there's a straight purple room and that goes to a jitzy meeting where we would like to keep the conversation rolling and actually interact a little bit more with our audience. So if you're if you want to keep discussing about what we can do about your spatial and how we can expand our career and how we can learn from each other. Please had to work adventure. And just as a heads up we're going to use slide those well it's an interactive tool that you can use online so if you go to slide dot do. It's going to get you to that tool and we're going to share this code with you again once we're in work adventure. So we can gather some input from our audience. I'm very sorry to cut this short, but I really thankful to Miriam Daniel Natalie and Maria as well for honoring our honoring our invitation and for answering all these questions. I myself actually learned quite a few things from this session so I'm very thankful to you ladies. And I've seen that there was a very animated discussion on venue list as well. And I would like to actually go back there and read some of the comments before finishing just to give you any. I don't know if you monitor that panelists, but the ladies there are very very keen to share some thoughts. We got some greetings from ladies of one set which is our twin sort of community that does amazing things for remote sensing ladies, but not only remote sensing them. They get involved in getting women voices elevated and bring many opportunities to them. We also got some comments on that see the actual community coming together and see that we can support each other. 
And that's really encouraging for whoever feels down at this day. We also got some very interesting comments about Miriam. I really hope you're going to check them. Basically praising comments Miriam. Nothing bad. And we also have a comment from Joe which says that donde cabe una, cabe dos. Which means for whoever doesn't speak Spanish is that where there's one, when one fits, two fits. So I think the most important thing is for us ladies to stick together and to learn from each other and to reach to one another at the points at which we feel mostly down or we need help. As a closing remark, I would like to let each of you say something that you think you want the audience to get out of this session. I'll go first. Okay. I just want to encourage everyone to keep working hard. You know, do the lean in sit at the table. Speak at the conferences make your YouTube videos. Just keep working hard at it and just know that you've got a community of women behind you supporting you and you can reach out at any time. Yeah, and from my side, I would like to say that having a more balanced world and also your special and open to special is not just a women thing. So we need more allies. We need more, more people who can also join us. I mean, in Miribor vocal in raising their hands and I say always this one when I try to participate is, is like how all these people who has been in industry for a long time. And they don't belong to these minority groups or these underfinal groups have can also they be supportive and also they can raise their voice. This is the hands to say, where are the women where are the minority groups. So that's also super important. That is not only or or job is everybody's job to do to make it happen. Thank you, Mary. Natalie. First, I would like to thank you, Christina and to thank all those great ladies. It was an honor for me to take part to this panel. What I can see is to let's let's work together. Let's work together to promote women involvement in just a special in particular and in tech in general and science. So let's work together to build a better world. Yeah, thank you. Thank you, Natalie as well. And thank you for participating this panel for sharing your experience and everything. And don't forget that we're heading over work adventure to talk a little bit more. So I'm really waiting for you. The link is no login required. So if you want to participate, you're welcome. See you there. See you there. Bye. Bye.
|
The session highlights the different aspects related to equality and diversity in FOSS4G. In the past years, women+ groups, including Women in Geospatial+, have amassed members from all over the globe, as well as from all backgrounds in the geospatial field. Through our work, we noticed increased interest in participation in the geospatial field, necessity for mentorship and being mentored, proactivity and a keen desire to learn and have access to skills and opportunities that were not being easily available to women so far. Through our work and the work of other sister organisations (e.g Geochicas, African Women in GIS, Ladies of Landsat, Sisters of SAR, GeoLatinas), we could determine that while the trend for equality and diversity in the field is a positive one, albeit slow, there is an imbalanced involvement in open source component of the geospatial field, with less women+ representatives overall. The main goal of this event will be to showcase the opportunities of a career path in FOSS4G and the role of leadership in the FOSS4G space by hearing the stories of a slate of leaders. How these leaders got involved in FOSS4G and what attracted them to this side of the geospatial field? What does leadership mean within FOSS4G? What are some of the opportunities and challenges that these leaders face today? How do leaders in this space see the future of the community? What opportunities are there for individuals that seek to get more involved in FOSS4G? These questions will be addressed first in a panel discussion, followed by an opportunity to connect with fellow geospatial women+ in a social event. The panel will also focus on the broadening diversity of technical leaders within the FOSS4G, GIS, and other STEM communities and how these shifts have been reframing these technical spaces and their impact. The social event will give the participants the occasion to meet, socialise and share individual experiences in an interactive manner. Authors and Affiliations – Kate Vavra-Musser Cristina Vrinceanu Laura Mugeha Rohini Swaminathan Track – Community / OSGeo Topic – Community & participatory FOSS4G Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57297 (DOI)
|
Hello everyone. This is Phosphor G 2021 and this is the Humakwa Custom. Next we will be having Paul Tratt and Dustin. Let me present you to them and they will be presenting a webinar in major visualization with open source development. So I guess that you can start. Okay. Thank you, Josie. Can you hear me okay? Good morning, everyone. My name is Paul Trout. I am the location 360 imagery lead for Bear Crop Science. Today I'll be joined by Dustin Samson from Spark Geo. Together we'll be presenting the evolving journey of image visualization with open source technologies at Bear Crop Science. The Bear Group has science at its core. We focus on many ways that science drives innovation and sustainability to help position Bear as a leading life science company. Bear was founded in 1863. It's more than 150 years old. A work colleague made this very nice timeline and I promised I would share but I also feel like they buried the lead. Not everyone gets to say their company founded a soccer club in one of the top five soccer leagues in the world. But Bear Crop Science is one of three divisions. Pharmaceuticals and consumer health are the other two. Day to day we're independent. We do periodically have, we cross over and work with the other groups but it's not normal. The company itself has a presence in 83 countries around the world. Bear Crop Science is organized into four global regions that we see here. I work out of the St. Louis headquarters for North America. But we work with all four regions every day in our job with location 360. Location 360 is the team that I work on as part of the Global Data Assets Group. As the name suggests, if there are coordinates or location data involved, location 360 will have a hand in the process. Location 360 is organized by the three verticals that we see here. As I mentioned, we're based in St. Louis. Our team is 100% remote, however, and we're bringing people on in different countries as well. We work almost entirely in the AWS cloud with Kubernetes and Argo-based workflows and CI CD. We also work with the Google Cloud regularly. Our daily work involves satellite, UAV, and non-spatial lab imagery. Other teams in location 360 work with IoT sensor data, macro and micro weather data, just to name a few. We manage more than 300 spatial layers and a GeoServer Postgres implementation. And in the spirit of open source here at FOS4G, I have to say I've been with the team for almost eight years and have seen location 360 grow from a four-person ESRI centric team to 100% open source geospatial centric and a team of more than 70 people today. Open source has greatly contributed to our growth and allowed us to consistently drive value and improve performance. Later this morning, I have a second presentation describing how our standardization on cloud optimized geotifs or COGS combined with the integration of spatial temporal assets catalog or the stack specification has provided an economy of scale to register, search, and access all spatial imagery for the crop science division. And this catalog based workflow has really helped our team bring new and important business value. But today we'll be focusing on the discovery, visualization, interaction part of that workflow that COGS combined with stacks has provided. We've been on a multi-year partnership with Spark Geo to optimize our visual products for the internal bear application framework, what we call velocity. 
This last project with Spark Geo has been an 18-month project where we've been standardizing on the stack model, stack specification with COGS. But it really is the catalog centric view of this workflow that has fueled this for us. And you'll hear Dustin refer to the diagram show the Imagine API or the Imagine platform. For the purposes of this presentation, Imagine API is synonymous with imagery platform or the Image catalog. While we capture many types of imagery, the content for what we're describing today is for Geotips only, which comes only from our satellite and UAS pipelines. Our legacy process, which Spark Geo also helped us with, and we learned a lot from, it set the stage for what we have today. But it was pre-rendered tiles at many different scales for every image. And it would use Amazon EMR. And we had to pre-render every tile just in case somebody might look at it one day. And so we really wanted to leverage what we learned from this implementation and evolve into something that was more performant, better search, had a standard process that we could repeat for all types of imagery. And the guidelines that really drove this was standardizing on the use of COGs for all of our Geotips. We swapped out the pre-rendered tiles in favor of dynamic on-the-fly rendering, which Spark Geo was very helpful in showing us how to implement that. We swapped EMR processing for Kubernetes with Argo workflows. And we have a search plugin that I'll show at the end after Dustin, which we've evolved from using C-CAN base catalog for resource URLs to using the stack specification with TMS. And now I'd like to hand it over to Dustin. Thanks, Paul. I just want to do a quick introduction. My name is Dustin Samson, and obviously I work for Spark Geo. If you're interested to find out more about Spark Geo, please visit the website at sparkgeo.com. And if you're looking for a change, Spark Geo is a great place to work, as well as Bear. But please check out our job website if you're interested. So as Paul mentioned, he talked a little bit about the image catalog and what it is. But what I wanted to dig into specifically is the global imagery pipeline. And this is a piece that we worked alongside the BearCrop science team in creating. Here's an overview diagram of the global imagery pipeline, which roughly represents the different services and how the direction of data flows within the pipeline. Some of the goals of the pipeline was to gather and store image metadata in a stack catalog, create cloud optimized geospatial images, or COGS. But ultimately, we wanted to make the images more discoverable and allow for efficiencies by other systems from such images. Now I want to talk through basically the different stages of the global imagery pipeline. So starting at stage one, the global imagery pipeline is fed images from existing data pipelines that are previously to this pipeline. Existing data pipelines are represented by this black box in the diagram. These data pipelines can vary in shape and size. Some fetch images from third party sources. They may be removing clouds from images or stitching drawing imagery together. But one of the common, a couple common tasks that each of these pipelines do is, one, destroy the output of those images into an F2 bucket, and also publish a new image message so that other systems are aware that there's a new image available. So the second stage is a service that is listening for these new messages and that are being produced by these existing data pipelines. 
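To make this stage concrete, here is a rough sketch of what such a listener could do, assuming a hypothetical message payload from the upstream pipelines and a placeholder catalog endpoint (the real Imagine API URL and message fields are internal to Bayer and are not shown in the talk); it uses pystac to build the item:

import datetime as dt
import pystac
import requests

IMAGINE_API = "https://imagine.example.com/collections/{collection}/items"  # hypothetical endpoint

def handle_new_image_message(msg: dict) -> None:
    # msg is an assumed payload published by an upstream pipeline, e.g.:
    # {"id": "...", "s3_href": "s3://bucket/key.tif", "bbox": [...], "geometry": {...},
    #  "datetime": "2021-06-01T00:00:00Z", "collection": "sentinel2-source"}
    item = pystac.Item(
        id=msg["id"],
        geometry=msg["geometry"],
        bbox=msg["bbox"],
        datetime=dt.datetime.fromisoformat(msg["datetime"].replace("Z", "+00:00")),
        properties={},
    )
    # Register the image file itself as a COG asset on the item.
    item.add_asset(
        "image",
        pystac.Asset(href=msg["s3_href"], media_type=pystac.MediaType.COG, roles=["data"]),
    )
    # Push the STAC item to the catalog API so downstream services see a create-item action.
    requests.post(IMAGINE_API.format(collection=msg["collection"]), json=item.to_dict())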
Each message that the service receives, it will create a stack item and then send that stack item to Imagine API so that that item can be added to one of the stack collections. For this project, we ended up creating three custom stack extensions that were needed by various parts of the pipeline to help with image processing, image rendering, and searchability. But I'll talk about each of those three extensions as they come up with in the pipeline. So the first extension comes in here. Yeah, images are generated from the existing image pipelines and are stored in source image stack collections. These are identified by having this first stack extension. As the name suggests, source image stack collections refer to the geospacial images that can potentially be processed by the gold energy pipeline. So next, I want to talk about what happens after the stack item is added to the Imagine API. So when a new stack item is added by the Imagine API, a create item action message is published out by the API, allowing other systems to listen to these changes that there's been a change to a stack collection or a stack item. Other specific action messages that are important to the gold energy pipeline are item updates and item deletes as well. OK, now that we know what's happening here after an item is added to the Imagine API, now I want to go on to the next stage. So the next stage in the pipeline is the global image service, which basically is listening for these action messages that are published by the Imagine API. The service only cares about the action messages related specifically to source image collections. So when one of these messages are detected by the service, the service will then query the Imagine API and to find any related product image collections. And from the attributes in the product image collection, the service is able to create what we call an ingest job definition. This job definition is then created and then added to one of the processing cues to be later processed by the ingest service. In the next slide, I'll talk about what a product image collection is and talk a bit about the second stack extension you created. So product image collections are collections of items related to images that I put in quotes derived from source images. We're not actually creating an image on disk, but we are processing the image in order to, I shouldn't say that. Yeah, it's images that are how the images are being rendered from source images, not an actual product image on start on disk. So this type of collection also contains the second stack extension, which helps identify a collection as being a product image collection. And as well, it includes a tie back to the source image collection that basically helps with a bit of the ancestry of going from this product image was created from this particular source image. So maybe to help clarify this a bit, an example of what this would look like. So a source image collection may contain a series of sentinel images. And products that can be derived from sentinel images may be an RGB or an NDVI product. And then those would be stored in the product image collection. This second extension also includes details needed to process the source image. Things such as S3 location where the image is stored, the product algorithm to apply to the source image, and the expected output data type. The final pipeline stage is the ingest service. And this service has five tasks. 
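Before walking through those five tasks, here is a rough sketch of what one of those queued ingest job definitions might contain. The field names (and the "bcs:" extension prefix) are invented for illustration; the talk does not spell out the actual custom extension schema:

# Hypothetical product-image collection fragment; field names are illustrative only.
product_collection = {
    "id": "sentinel2-ndvi",
    "bcs:source_collection": "sentinel2-source",   # tie-back to the source image collection
    "bcs:product_algorithm": "ndvi",                # formula to apply when rendering
    "bcs:output_dtype": "float32",
    "bcs:bands": {"red": 4, "nir": 8},
}

def build_ingest_job(source_item: dict, product_collection: dict) -> dict:
    """Combine a source STAC item and a product collection into a queueable job definition."""
    return {
        "source_href": source_item["assets"]["image"]["href"],  # S3 location of the source image
        "product_collection": product_collection["id"],
        "algorithm": product_collection["bcs:product_algorithm"],
        "output_dtype": product_collection["bcs:output_dtype"],
    }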
The first is obviously pulling the job definition from the cues, the different processing cues. Based on that job definition, the service will then fetch the source image from S3. And if the image isn't already a cloud optimized geotip, it will convert it into one. So from that, we'll go ahead and start to convert it into one. So from that point, it needs to do some calculations. So first, it'll calculate a bounding box of the product image. The one thing different about this bounding box, which we call an active footprint, is it excludes any no data values. So in some cases, the polygon will look like Swiss cheese. It also calculates statistics on this product image. So we'll go ahead and do some calculations, histograms, percentiles, second information. And then the last calculation will be generating a thumbnail of what the derived product image will look like. From here, this new cog image is then uploaded to S3. And then finally, a new product image stack item is created and sent to the imagery pipeline. So the item can be added to the appropriate product image one thing to note here is that regardless of how many derived products that you create from a source image, there's only ever one cog image being stored in S3. So let's next talk about the last stack extension. This last extension is used with the product image items. And it has two uses. One first, it's used by other systems to help refine image searches. And secondly, it stores the details on how the image should be rendered. So last but not least, the global image survey API. It's not really part of the global image pipeline, but it's part of the overall imagery platform. And so the way it works is a system or user will make a request for a particular image tile. The product stack item is included on every tile request. The server then fetches that particular product stack item from Imagine. And then once it receives all those details on that particular image, such as the product algorithm, it will then apply it to the source image, that algorithm to the source image. In the case of the NDVI, it'll know the band it needs to do in the formula and needs to apply to it. And also with the NDVI example, it will have a default color ramp to apply to that particular image. And then lastly, while rendering the tile, it uses the statistics that are stored within the item to determine the min and max pixel value range. So whether it's using the min and max that were calculated earlier on, the mean standard deviation, histogram information, even percentiles. So lastly, I just want to do a quick shout out to some of the projects that helped us create the global image pipeline. I wanted to say thank you to all the people involved in creating and maintaining these projects. This is a small list. Obviously, we used a lot more libraries and applications for this. Yeah, so I want to just turn that back over to Paul. OK, thank you very much, Dustin. This is a different view for a non-technical audience of some of the things that Dustin just described. And the next part we'll be focusing on is the interplay between the search tool, the global image research tool, and the application with the catalog and TMS. This is another view of the same thing that Dustin just described. Our global image research tool is a plug-in for any of the JavaScript applications within the framework. And we're able to search by sources or products. And we have a time filter, as well as the map itself acts as a spatial filter, which you can enable. 
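As a concrete illustration of the band math and statistics just described (not Bayer's actual code), here is a short rasterio/numpy sketch of deriving NDVI from a COG and computing the min/max and percentile values that get stored on the product item; the band indices are assumptions and depend on the source product:

import numpy as np
import rasterio

def ndvi_and_stats(cog_path: str, red_band: int = 4, nir_band: int = 8) -> dict:
    """Compute an NDVI array plus the per-product statistics used for tile rescaling."""
    with rasterio.open(cog_path) as src:
        red = src.read(red_band, masked=True).astype("float32")
        nir = src.read(nir_band, masked=True).astype("float32")
    ndvi = (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero
    valid = ndvi.compressed()                 # drop masked (nodata) pixels
    return {
        "min": float(valid.min()),
        "max": float(valid.max()),
        "mean": float(valid.mean()),
        "percentiles": {p: float(np.percentile(valid, p)) for p in (2, 50, 98)},
    }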
Show me everything, or just show me things that fit within the map frame. Notice here our count is 12,000 sentinel for a year within this map frame. When we change that from sources to products, and we say just show me RGB, now my number is 6,100. Because it's basically cut in half because the other 6,000 are NDVI's for sentinel. But these tiles that load in the search, you haven't added any of these to the map yet. These are just available to the user. And if the user needs more info, we can right-click and expose a link to go back into our general UI in front of our stack catalog of imagery. And you can see all of the metadata associated with that image, including some of the info that ties it back to the source that Dustin was describing. These are the assets that belong to, this is essentially an item in the stack catalog. And as far as the, I can now zoom, and Dustin mentioned Swiss cheese, this is that active data footprint of this image. And if I were to hover over another one, one of these other footprints would highlight to show me where it is. But when I zoomed to that image, to that footprint, I can then click the, essentially this is a map control. We can have up to four different concurrent map controls. So you would see four different map squares in here. And I can choose which map control to send this image to. But this is within the item in the stack catalog, this Swiss cheese active data footprint that Dustin described. It's stored as a multi polygon here in the item, which is available through the catalog. And then just to, as I mentioned, to add the image to the map control, it's a simple click. And now what that actually does when we click that blue square to activate the image that we've selected, it actually hits the TMS endpoint, which then does the range lookup to the image in S3. And what is pulling that we, this is actually the product item in the stack catalog. It's just a URL template for this resource that has our rendering baked into that resource with the various extensions in the URL. So this asset, all it really is is this URL, which exposes it to a suite of applications through this search tool that SparkTO has developed for us. So the goal is business value, and we really lean into machine learning. But just this visualization that we've shown today has already provided some real business value for our users regarding all of our imagery and R&D pipelines. And I'd like to thank all the, these are the team members on both sides for Bear and SparkTO that contributed to this effort. And that concludes our presentation. Okay, thank you very much Paul, for testing for your presentation. It was amazing. We have a few questions. So the first one is, how do you serve the stack data itself? I know it's that data store. Is it something like a post-GIAs backing on a stack server? So I'm going to go ahead and show you how to do that. GIAs backing on a stack server? For images, we're using real compiler to serve up images and it's pulling the images from an S3 bucket. All right. The next one is, could you talk a bit more about the imagine server? Is that something you created yourself or an open source component? It was a legacy system that was one to one. And if you notice, we had many assets. So we leveraged the staccato Java server implementation of the stack specification, which has elastic search on the back end. And that is our core service. But then we rolled our own API in front of that in Python to connect to the stack server, a staccato on the back end. 
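For reference, the kind of request the search plugin and SDKs issue against the catalog is a standard STAC API item search; a generic sketch with a placeholder endpoint is below (the real Imagine API host and any custom extension filters are not shown in the talk):

import requests

STAC_SEARCH = "https://imagine.example.com/search"   # hypothetical STAC API endpoint

def search_products(bbox, start, end, collection="sentinel2-ndvi", limit=50):
    """Standard STAC API item search over a bounding box and time interval."""
    body = {
        "collections": [collection],
        "bbox": bbox,                      # [minx, miny, maxx, maxy] in lon/lat
        "datetime": f"{start}/{end}",      # RFC 3339 interval
        "limit": limit,
    }
    resp = requests.post(STAC_SEARCH, json=body, timeout=30)
    resp.raise_for_status()
    return resp.json()["features"]          # STAC items as GeoJSON features

# Example: all NDVI products over a small AOI for 2021
# items = search_products([-90.4, 38.5, -90.1, 38.8],
#                         "2021-01-01T00:00:00Z", "2021-12-31T23:59:59Z")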
But so the Imagine API is a homegrown system, yes. Okay. Thank you very much. And we have time for another question. Sorry, I meant the actual stack JSON stuff. I think this is related to the first question. But maybe it's not a huge JSON file somewhere. It comes dynamically from something, I guess. Yeah. So our API, we have a Python API that sits in front of the staccato server, but it's the full, it's just a stack specification that is returned to any client that requests it. We can access it via Postman. We can access it via... We built SDKs. We have a command line interface. And they all interact with that same stack specification, which is that JSON return. I hope that answers your question. Okay. Thank you. We still have time for another question then. What products are there that NDVI do you provide to users? So we've had EVI in the past, a different type of index. We do single-band crop height from LIDAR. We serve that up as well. And we apply a color map to single-band things like that. So it's any type of band calculation that can be performed on a single image at this time is what can be rendered. All right. Thank you so much, Paul. I think that's all the questions so far. So we will be finishing here. Thank you very much for coming to Foss4G and for your talk. And we will be seeing you around. Thank you all very much. Thanks, Dustin. Thank you. Thanks, Joshi. Bye-bye. Bye-bye.
|
Bayer Crop Science has engaged in a multi-year collaboration with Sparkgeo Consulting to deliver an evolving set of spatial imagery search, discovery, and visualization capabilities built on top of open source geospatial software. Initial solutions integrated CKAN, Geoserver, and Geotrellis to pre-render custom tilesets for derived analytic outputs. This process proved difficult to scale with increasing ingest rates and led to standardizing imagery pipeline outputs on Cloud Optimized GeoTIFFs(COGs) with rio-tiler, pyproj, GDAL , Shapely, and Rasterio for processing to define dynamic rendering visualization products in a newly developed STAC-compliant catalog. The Sparkgeo team has written a custom Global Imagery Search tool for our corporate OpenLayers-enabled application framework which combines event-based per-scene visualization processing with STAC search results and TMS-to-COG range/column searches. The Global Imagery Search tool also allows client/application side dynamic color map rendering. This presentation will describe the evolution from tilesets to dynamic rendered tiles and the customizations within STAC-collections needed to achieve this. This presentation will describe the implementation challenges of scaling inputs and processing pre-rendered tilesets for application visualization and how the decision to re-direct to a COGs + STAC cloud implementation has met our scaling objectives. Authors and Affiliations – Martin Mendez-Costabel – Bayer Crop Science Paul Trudt – Bayer Crop Science Will Cadell – SparkGeo Consulting Dustin Sampson – Sparkgeo Consulting Joe Burkinshaw – Sparkgeo Consulting Angelo Arboleda – Sparkgeo Consulting Track – Use cases & applications Topic – Business powered by FOSS4G Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57298 (DOI)
|
I'm going to talk about the enhancement for the QGIS project that I worked on in the past few months with my mentors, whom I really thank and who are listed here: Martin Dobias and Peter Petrik. I will touch on the following points, and at the end of this presentation I will open QGIS and offer a little demonstration of how the feature that I developed works and how it can be used. The person in the photo, by the way, is me, of course. Before diving into the more technical part and describing the new feature that I developed for the QGIS project, I want to mention that this period has been really challenging for me, but I had the opportunity to learn a lot, to work with experienced developers as mentors, and to get in touch with the QGIS and OSGeo community, and it was amazing. Some of the things I worked with are listed here. QGIS is mostly written in C++ and partly in Python, with the help of the Qt framework, and it is a desktop application used to perform analysis in the GIS field. For my GSoC I had to improve my Git and GitHub workflow, and I worked on the raster calculator, which is an analysis tool available in QGIS that allows the user to perform calculations on the basis of existing raster pixel values. For example, if I have two single-band raster layers in QGIS and I want an output layer whose pixels are, let's say, the sum of the two initial rasters, I can use this raster calculator. Before this work, so before my GSoC, it was possible to use the raster calculator, but the output was written to a file, and this can be a problem because in a usual raster analysis workflow it is possible to use this tool, the raster calculator, multiple times, and therefore to accumulate a lot of undesired intermediate files saved on disk. In order to avoid this, I started to work on a data provider that is able to perform the computation, so the task of the raster calculator, and to show the result in the QGIS application without the need of touching disk space. Concerning my contribution, this feature is in fact a data provider for raster data, and it will hopefully be seen in the next version of QGIS. For the user, in the raster calculator dialog it is a simple flag with a text field where you can, if you want, set the name of this, let's say, on-the-fly computed raster. As I mentioned, this feature allows the user to perform the same tasks as the existing raster calculator but without the need of saving a file on disk. In this sense, I called it a virtual raster provider. It is possible to take advantage of this functionality also via the Python console in QGIS. And yes, as we'll see in the final demonstration, the output is a raster layer with all the usual raster layer properties, and if it is needed in future analysis it can itself be saved to a file on disk. Since I had some time left, I also improved the existing raster calculator and added the if function that you can see in the right rectangle, which allows the user to write and compute the expression shown in the other rectangle. So it is now possible to write conditional statements in the raster calculator, and to develop this enhancement I had to work with the parser and lexer written by the original developer of this tool. Finally, I also had the time to think about some future improvements of my work, which I will come to right after the short scripting sketch below.
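As a point of reference for the scripting remark above, the pre-existing, file-writing raster calculator has long been accessible from the QGIS Python console through QgsRasterCalculator; the sketch below uses that classic API with placeholder paths. The new virtual raster provider performs the same kind of computation without the output file, but its exact provider URI syntax is not shown in the talk, so only the established API is sketched here:

from qgis.core import QgsRasterLayer
from qgis.analysis import QgsRasterCalculator, QgsRasterCalculatorEntry

layer = QgsRasterLayer('/path/to/landsat.tif', 'landsat')   # placeholder path

entry1 = QgsRasterCalculatorEntry()
entry1.ref = 'landsat@1'        # "layer@band" reference used inside the formula
entry1.raster = layer
entry1.bandNumber = 1

entry2 = QgsRasterCalculatorEntry()
entry2.ref = 'landsat@2'
entry2.raster = layer
entry2.bandNumber = 2

calc = QgsRasterCalculator(
    '"landsat@1" + "landsat@2"',   # the same expression typed in the dialog
    '/tmp/sum.tif',                # the classic path still writes a file to disk
    'GTiff',
    layer.extent(), layer.width(), layer.height(),
    [entry1, entry2],
)
calc.processCalculation()          # 0 means success

In QGIS versions that include the enhancement described in the talk, the conditional form demonstrated later, e.g. if ( "landsat@1" > 126 , 100 , 10 ), can be used inside the same kind of formula string.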
One possibility is to take advantage of the OpenCL integration for better performance of the feature I developed, since OpenCL is already used by the existing raster calculator. And another possible enhancement, which concerns more the raster calculator itself and not my feature, is the possibility of outputting a raster with multiple bands, with the declaration of course of multiple formulas, since right now the output of the raster calculator is only a single-band raster. So before saying goodbye, I would like to show you how this new feature works in QGIS. So I will open QGIS, which is already open. I added a raster layer from the test data. I can inspect it; it's a multi-band raster, so it has nine bands. And for example, you see that at this point, whose coordinates are here, the value is 127 for band 1, and so on. So I can open the raster calculator dialog and I can flag this checkbox, "create on-the-fly raster instead of writing layer to disk", and I can choose a name or take the name from the expression. So let's say that we want to auto-generate the name from the expression. The expression can be really simple, like Landsat band 1 plus Landsat band 2, and yeah, that's all. So this is the output: the first band plus the second band gives this. And if we inspect the values of this new raster, which by the way has the name of the formula, we can see we got the sum of band 1 and band 2. So at any point the result will have the sum of these two values; if I take these exact coordinates, we can see that this value is the sum of band 1 and band 2. Yes, I can also show you the if function, which you can see here; there's this button. And I can write an expression like "Landsat band 1 is greater than 126". I will explain what this means: I will inspect every pixel of the first band, and if the pixel of the band is greater than 126, the output raster pixel will be 100; otherwise it will be 10. I'll use the raster calculator as it was used before my GSoC, so saving a file, "test raster", and I will press save. So now it is possible to perform the computation, and I will add another layer on top in QGIS. And we can see now that I have a raster layer with two values, 10 and 100, and if I inspect this raster, in some places it will be 10 and in other places it will be 100. So okay, I finished my presentation. I really thank you for your time, I hope this feature will be used by QGIS users, and I hope to remain in this amazing community for a long time and to develop some other features. Thank you for your time. So thanks a lot, Francesco. Yeah, thank you. Thank you, Luca. Really good job. There are no questions from the venue. But everyone, is your work already in the QGIS source code? Yes. Did you do a pull request and was it accepted? Yes, yes, it was accepted. In fact I've done more than one pull request, and they were all accepted: one for the main feature and a second for the conditional statement in the raster calculator. And I think that in QGIS 3.22 it will be possible to use this tool. Okay, good. Thanks a lot. So apply again for Google Summer of Code if you can next year. Yeah, I won't be a student, but I will try to do something anyway. Okay, thanks a lot. Bye. Bye, bye, bye. So now we have another video, and it's related to the PostGIS project, and the student will not be online for the questions. So we will see the video and after that there will be another one. Hello everyone. I am Han Wang from Peking University.
It is my pleasure to present my GSoC work with PostGIS here. First, thanks to my mentors and the community for the help. My project name is "Implement sorting methods for the PostGIS data types before building a GiST index", and the contents are as follows. The first part is preliminaries. Here are the basic concepts of GiST. GiST, also known as the generalized search tree, is a generalization of a variety of disk-based, height-balanced search trees, and it is essentially a balanced tree of variable fanout between kM and M. Its non-leaf nodes and leaf nodes consist of a predicate p and a pointer ptr to the tuples. In a non-leaf node, p is true when instantiated with the values of any tuple reachable from ptr, and in a leaf node, p is true when instantiated with the values from the indicated tuple. We should know that GiST is a high-level abstract definition: with these key tree methods implemented, it becomes some actual index data structure. As you can see, p is a predicate, q is a query predicate, E is an entry and P is a set of entries. The most important key methods are Consistent, Union, Penalty and PickSplit. Consistent returns false if "p intersects q" can be guaranteed unsatisfiable. Union returns some predicate r that holds for all tuples stored. Penalty returns a domain-specific penalty for inserting E2 into the subtree rooted at E1. And PickSplit, given a set P of M plus 1 entries, splits P into two sets of entries P1 and P2. There are also two important tree methods, Search and Insert: Search finds all the tuples that satisfy q starting from root R, and Insert returns the new GiST resulting from inserting E at level l starting from root R. GiST can be implemented as a B-tree, B+-tree, R-tree, hB-tree, RD-tree and so on. So we can say that with well-defined Consistent and Penalty functions we can specify a new structure to index our own data type. That's very important in multi-dimensional geometry cases. Next is the implementation. In Postgres 14 and later, there are two major GiST index building strategies. The first is to start with an empty index and insert all tuples one by one, and the second is to sort all input tuples, pack them into GiST leaf pages in the sorted order, and create downlinks and internal pages as we go. This builds the index from the bottom up, similar to how B-tree indexes are built, using the sort support API provided by PostgreSQL. From the description above, it is obvious that we have to define an order for the tuples to sort them in advance. In PostGIS, the index stores geometry keys as a BOX2DF structure, which remembers only the bounding box of a geometry instead of all its details. For instance, two-dimensional geometry uses an operator class like gist_geometry_ops_2d to build a GiST index. In the SQL file, it is necessary to declare a sort support function signature, like geometry_gist_sortsupport_2d with support function number 11, bound to the function definition in the C file, which is named something like gserialized_gist_sortsupport_2d. And in the C definition, geometries are handled as a GSERIALIZED structure, but in the index, as we mentioned before, they are stored as a BOX2DF structure. As you can see, if we want to activate the pre-sorted index building method, we have to define a sort support function like this: we provide the comparator, abbrev_converter, abbrev_abort and abbrev_full_comparator callbacks on the sort support structure we get from the database system. The most important function here is abbrev_converter.
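As a toy illustration of the roles of these key methods, here is a pure-Python sketch over bounding boxes. This is not PostgreSQL's GiST C API, only the same idea in miniature, showing how Union, Penalty and Consistent specialize the generic tree into an R-tree-like index.

```python
# Toy illustration (not PostgreSQL code): bounding-box versions of the GiST
# key methods. Union merges keys, Penalty measures how much a key must grow,
# Consistent prunes subtrees that cannot possibly match a query box.
from dataclasses import dataclass

@dataclass
class Box:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def area(self) -> float:
        return max(0.0, self.xmax - self.xmin) * max(0.0, self.ymax - self.ymin)

def union(a: Box, b: Box) -> Box:
    return Box(min(a.xmin, b.xmin), min(a.ymin, b.ymin),
               max(a.xmax, b.xmax), max(a.ymax, b.ymax))

def penalty(existing: Box, new: Box) -> float:
    # growth in area if `new` is inserted under `existing`
    return union(existing, new).area() - existing.area()

def consistent(key: Box, query: Box) -> bool:
    # False only when "key intersects query" is guaranteed unsatisfiable
    return not (key.xmax < query.xmin or query.xmax < key.xmin or
                key.ymax < query.ymin or query.ymax < key.ymin)
```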
The abbreviated-key converter, abbrev_converter, converts a data type, be it BOX2DF or another, into a 32- or 64-bit code, and the order of these codes indicates the order of the geometries on the page. With this user-defined order we can compare data values and sort them, packing them into pages, which is faster than creating an empty tree and inserting elements one by one. In this case, we use the center point of the BOX2DF of a geometry to generate a 32- or 64-bit code for the tuple, with the trick of using the floating point representation of its location directly for bit-wise computation. We all know that machine pages are, at a high level, one-dimensional arrays, so it is necessary to transform multi-dimensional geometry into a one-dimensional order. Let's focus on this. There are two major space-filling curves here, the Morton (Z-order) curve and the Hilbert curve. They define an order between high-dimensional elements simply by passing over them one by one, and they can fill a space to any precision you want; like the left figure shows, the Z-order curve can be refined as required. What's most important is that they preserve, in one dimension, the proximity that exists in higher dimensions. That is to say, objects that are adjacent in two or more dimensions will tend to be adjacent in the order of the one-dimensional curve. This property means the space-filling curve works like a spatial pre-ordering, giving good spatial locality in the page layout before building the tree index, which accelerates the process. And finally, here is a fast Hilbert implementation with bit-wise computation, from the link below. Given a d times n-bit number describing an index on the d-dimensional Hilbert curve of order n, split the index into n groups i_j of d bits each, starting from the most significant bits; each of these tuples i_0 through i_{n-1} describes both an orthant to recurse into, determining one bit for each of the coordinate axes, which are grouped together as x_j of the point on the curve, as well as a transformation that is to be applied to the next recursion level of the curve. In this equation, q is a function mapping d index bits to an orthant, T is a function mapping d index bits to an element of the transformation group of the Hilbert curve, and star is the operator of that group. If q, T and star are known for a particular dimension, this yields an algorithm for mapping an offset on the curve to the corresponding coordinates and vice versa, by successively computing the product of transformations at each recursion level. What's more, almost all the operations are implemented in a bit-wise way, which makes the method fast enough to satisfy the runtime performance requirements. I did two major tests on this implementation. The first is to search a small patch in the data area, and the second is to traverse the data area with the small patch sequentially, and here is the result. In the first and second experiments, the building time, plan time, buffer hit number and execution time were measured. From the tests, we can see that the no-index case spends no time on creating an index but suffers a lot in the query process. GiST with the sort method spends less time on building the index than the default GiST build, about one third to one fifth of the default time. But in buffer hit number and execution time, which reflect the query process, the GiST built with the presorting method seems to be a little worse than the original.
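Before interpreting those numbers, here is a toy Python illustration of the bit-interleaving idea behind the Z-order key described above: interleave the bits of the bounding-box center coordinates so that sorting the one-dimensional codes roughly preserves two-dimensional proximity. The fixed extent and 16-bit quantization are assumptions for the sketch; the real PostGIS C implementation differs in detail.

```python
# Toy Z-order (Morton) abbreviated key: quantize a box centre to 16 bits per
# axis and interleave the bits into one sortable 32-bit code. Illustrative only.
def part1by1(n: int) -> int:
    """Spread the low 16 bits of n so there is a zero bit between each bit."""
    n &= 0xFFFF
    n = (n | (n << 8)) & 0x00FF00FF
    n = (n | (n << 4)) & 0x0F0F0F0F
    n = (n | (n << 2)) & 0x33333333
    n = (n | (n << 1)) & 0x55555555
    return n

def morton_key(cx: float, cy: float, extent=(-180.0, -90.0, 180.0, 90.0)) -> int:
    """Map a box centre to a 32-bit Z-order code within a fixed (assumed) extent."""
    xmin, ymin, xmax, ymax = extent
    ix = int((cx - xmin) / (xmax - xmin) * 0xFFFF)   # quantize x to 16 bits
    iy = int((cy - ymin) / (ymax - ymin) * 0xFFFF)   # quantize y to 16 bits
    return (part1by1(iy) << 1) | part1by1(ix)

# Sorting geometries by this key packs spatially close boxes into nearby pages.
keys = sorted([morton_key(10.2, 45.1), morton_key(10.3, 45.0), morton_key(-70.0, -30.0)])
```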
In fact, these results are within expectation, because the dimension reduction in the index building causes information loss, which can accelerate the building process but may cause a penalty in the query process. Last is the conclusion. The conclusion is that the space-filling-curve hash function does improve index building performance, but a presorted index built with hash functions may lead to some query performance loss. Next, we will try to improve the query performance with an optimized hash function that does better than the plain Hilbert or Z-order functions, and we will implement an n-dimensional hash function with this bit-wise computation or other such tricks. And that's all. Thanks to my mentors, the community and the GSoC staff again. I am going to present my project, VRP with VROOM on the database with vrpRouting, which I did during the Google Summer of Code 2021 program with pgRouting under the OSGeo organization. So, about myself: I am a final year student at IIT BHU in India. I participated in Google Summer of Code both last year and this year as a student developer for pgRouting. You can contact me at this email address. So this is the agenda for today's presentation: I'll start with an introduction to vrpRouting, move on to VROOM, and then I'll explain my contributions and how to use the functions, followed by the conclusion. Starting with what vrpRouting is: vrpRouting is basically VRP plus pgRouting. We had several VRP-category functions in pgRouting, and we created a separate repository called vrpRouting extracting those functions. It's like solving VRP problems over Postgres. These are the available functions in vrpRouting: solving pickup and delivery problems, solving the problem with one depot, and many more. And these are the three functions that I added during the Google Summer of Code program. So first, moving on to VRP and VROOM. VRPs are the vehicle routing problems. They are NP-hard problems, meaning that the required solution time increases exponentially with the size. These are optimization problems: given some vehicles, depots and jobs, the task of VRP is to find an optimal route for the vehicles satisfying any constraints that we give, such as time window constraints, capacity constraints, and so on. Moving on to VROOM: VROOM is an open source optimization engine written in C++, and it provides very good solutions to vehicle routing problems such as pickup and delivery problems, VRP with time windows, capacitated VRP problems, or any mix of these problems. We give an input JSON to VROOM containing our problem, VROOM solves it and gives back a JSON containing the solution. Let's move on to my contributions. I added the code, the documentation with the doc queries, and the pgTAP tests for the three VROOM-category functions corresponding to the use cases of the user. So basically I ported the VROOM functionality to vrpRouting. These are the three functions that I created; we will look into them in the later part of the slides. As for the benefits to the community: it is always easier to work with databases than with JSON, so one can easily update or show the data and route over it in the database, which is much easier than creating a JSON, modifying it and working with it. Also, we wanted a standard library for solving VRP problems, similar to how pgRouting uses Boost, and that was made available through these functions that I created.
Through this, the number of VROOM users will also increase, and it will be easier to track the bugs and the issues, so it is definitely a benefit to the community. Let's move on to how to use the functions. First, the terminologies: there are two types of tasks, the jobs and the shipments. Jobs are single-location pickup and/or delivery tasks. These are the three kinds of jobs that are possible: they have a single location, and a job is either a delivery, in which case the depot is the pickup and the job is the delivery location, or only a pickup, in which case it needs to be delivered to the depot, or it can be both a pickup and a delivery at the same location. Then we have shipments: shipments are same-route pickup and delivery tasks, so a shipment must have a pickup and a delivery, which can be at the same location or at different locations. And these are some of the properties of these tasks: they can have a service time, amount, skills, priority or time windows. Then we have the vehicles. Vehicles are the resources that either pick up or deliver the tasks, and they can have other constraints like capacity, skills, time windows, speed factor and so on. Lastly, we have the time matrix, which is the travel time between all the locations. Say we have four locations, and these values represent the time to travel from location id 6 to location id 8; with these we can form a time matrix. Now looking at the functions, these are the three functions that I created. All the arguments are in the form of text. So say we have the function vrp_vroom: we can pass the jobs SQL, the jobs time windows SQL, then the shipments SQL, the shipments time windows SQL, the vehicles SQL, the breaks of the vehicles, and the matrix SQL, and it will return a sequence of rows that is the solution to the problem. We have this function for only the jobs, and this one for only the shipments. Now let us solve a sample garbage collection problem using the functions that I created. Consider this is the dump site where the two vehicles start and end their journey. Each vehicle can hold up to 50 kg of garbage. They need to pick up the garbage from these three locations. These are the distances and the times taken for traveling, and each shipment has a service time of five minutes as well as a time window. So these are the shipments that we create: we need to pick up the shipment from here and deliver it to the dump site, so we create these shipments with their id, the location, the service time and so on. Similarly, we create the time windows for the shipments; a time window can contain time in any unit, say seconds or minutes, and you can choose a base time considered as zero if you wish, or you can use absolute time. Then these are the vehicles: we create the two vehicles with the start index and end index the same, the capacity and the time windows. And lastly, we create the matrix: these are the four coordinates, and we create the matrix of the four coordinates containing the cost to travel. Then we execute the SQL query, SELECT * FROM vrp_vroomShipments, and we pass all these parameters, the shipments, the vehicles, the time windows and the matrix, and then we give it to VROOM. The data that we pass in the SQL query goes to VROOM, VROOM solves it, and it returns the data to the user. So these are the two vehicles, these are the steps, whether each is a start, an end, a pickup or a delivery, and these are the IDs of the tasks.
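To make that call shape concrete, here is a hedged sketch of the kind of query described above, run from Python with psycopg2. The function name vrp_vroomShipments, the table names and the result columns follow the talk; the released vrprouting signatures may differ in detail.

```python
# Hedged sketch of calling the VROOM-backed shipments solver from Python.
# Table names, database name and the exact function signature are assumptions.
import psycopg2

QUERY = """
SELECT *
FROM vrp_vroomShipments(
    'SELECT * FROM shipments',
    'SELECT * FROM shipments_time_windows',
    'SELECT * FROM vehicles',
    'SELECT * FROM time_matrix'
);
"""

with psycopg2.connect("dbname=garbage_collection") as conn:
    with conn.cursor() as cur:
        cur.execute(QUERY)
        for row in cur.fetchall():
            # each row is one step: vehicle id, step type (start/pickup/
            # delivery/end), task id, and the associated times and load
            print(row)
```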
The result also includes several times for each step, like the arrival time, the travel time, the service time, the waiting time, and any load. So let's look at the visualization of the result. This is the final result that we get. The blue vehicle starts at 10 am; at 10:30 it reaches this point with coordinate 3, which takes 5 minutes, and then it goes to coordinate 8 at 10:54, picks it up, and then goes back to the dump site. Similarly, the red vehicle starts here; after 17 minutes it reaches here, spends 5 minutes to pick this up, and then goes back to the dump site. So basically this is the visual representation of the result that we got here. That was all that I did. Let's end with the future scope of the work. We can use these functions to create specific functions based on different use cases of the user; there can be many more in the future based on other use cases. That's all from my side. It's an open source project; you can look at the code here if you want to contribute, or you can even star the repository if you find it helpful. For more information you can refer to these links for the website, the GitHub repository, or the documentation if you want to have a specific look at the functions. If you want to develop, you can look at the developer documentation, and this is the repository for VROOM. Also, for any further help you can reach out to me at this email address. Thank you. So thanks a lot, Ashish. Thanks everyone. So he's a student from IIT BHU, Varanasi, India, and thanks a lot for your work. Do you think you will apply again for GSoC in the future? Well, I can't apply; actually we are allowed to participate only twice, and I participated last year as well as this year. Okay. Okay. So you can move to mentor. Yeah. Sure. Okay. Perfect. So thanks a lot for your work and see you around. The next one is Vinit Kumar, a student of NSEC in Kolkata, India, and he is going to present his work. Please, can you share your screen? Vinit, do you have any problem? Yeah. I can't hear you. Vinit, I cannot hear you anymore. There is some problem. So I'll remove this one. Okay, so there was a problem before. Okay. Now I can see. Yeah. Now it's visible. Thanks a lot. Okay. So you can start, take it away. Okay. Hi everyone. Today I'm going to present my project named "Implement the edge coloring algorithm for pgRouting via the Boost Graph Library", which I did during Google Summer of Code 2021 with pgRouting under the OSGeo organization. First of all, I want to thank my mentors, because without them it wouldn't have been possible to complete the project on time. Next, a little bit about me. I mean. Sorry. We cannot see this, right? It's black. We see a black screen. It seems that it's loading. Can you see this? Right. Yeah. Try to share the entire window, not only the screen, not only the window. Is it visible now? Yeah. Now it's visible. Okay. Now it's perfect. Thanks a lot. Okay. So a little bit about me. I'm Vinit Kumar. Currently I am pursuing my Bachelor of Technology in computer science and engineering at Netaji Subhash Engineering College, Kolkata, India. I participated this year in Google Summer of Code under pgRouting as a student developer. Moving further, this is today's agenda: I will talk about my contribution, about the project's uses and applications, about its future scope, and finally the conclusion. So moving on. Yeah. So this is my contribution: I have added the edge coloring algorithm, via the Boost Graph Library, to pgRouting.
I have created a function, pgr_edgeColoring, which is used to perform the query on the graph data provided. We'll be talking about this in detail in the coming slides. I have added documentation and doc queries, and tested my code with pgTAP unit tests. Yeah, talking about the benefits to the community: the function I have added brings more functionality to pgRouting, and it can help other developers to integrate it with other routing algorithms. Apart from that, it checks whether a graph is bipartite or not, and the condition for that is: if the number of colors used to color the edges is equal to the maximum degree of the graph, then it is bipartite; otherwise it's not. Talking about its applications, it has applications in traffic signals, as you can see in the images. Now we will talk about its usage, how anyone can use it, and its applications. So, the algorithm in detail: what it does is assign a color to every edge of the graph such that no two adjacent edges have the same color. It is applicable only on undirected and loop-free graphs; otherwise, if you perform the query on another kind of graph, for example one that is not undirected, it will give a message that the given graph is not edge-colorable, or similar. It has many real-world applications, such as in traffic signaling, in fiber optic communication, in scheduling problems like process scheduling, and so on. Talking about its time complexity, it is E times V, where E is the number of edges in the graph and V is the number of vertices. How can anyone perform a query with it? We create a table like this, having parameters like id, source, target and cost; we build a graph with these properties and insert the data into the table according to the parameters we have passed, and here, as you can see in the insert part, we have taken a graph with 12 edges as an example. Moving on, here is a 2D drawing of the graph; you can see what it looks like if you draw it on plain paper. What we did is assign a particular edge number to every edge of the graph and then apply the algorithm on it, and the algorithm returns the color of each edge, that is, what color each edge has got. We are using numbers to represent colors; we could say anything, like color one is blue, color two is pink or something, but we use numbers to denote the colors. Moving on, as you can see at the vertex, in the parentheses, two is the edge number and five is the color number, so edge two is colored with color number five. For coloring, you start with color number one, that is, the first color we choose is number one. Now we perform the query using the pgr_edgeColoring function that we created earlier, which returns the edge id and color id for the graph. We can see here the edge id and the corresponding color id in each row: edge one is colored with color one, edge two is colored with five, and so on. There are 12 edges in the graph, as you saw earlier, so all the edges are colored with their particular colors. Moving on, yeah, future scope. What could be the future scope of my project? More functions can be implemented for different use cases, like coloring a distributed graph.
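Before the future-scope discussion continues, here is a hedged sketch of that usage from Python. The table layout (id, source, target, cost, reverse_cost), the pgr_edgeColoring call and the returned edge_id/color_id columns follow the talk; an undirected, loop-free graph is assumed, and exact released signatures may differ.

```python
# Hedged sketch of the edge-coloring query described above, run via psycopg2.
# Database name and table contents are placeholders.
import psycopg2

SETUP = """
CREATE TABLE IF NOT EXISTS edges (
    id BIGINT PRIMARY KEY,
    source BIGINT,
    target BIGINT,
    cost FLOAT,
    reverse_cost FLOAT
);
"""

COLORING = """
SELECT edge_id, color_id
FROM pgr_edgeColoring(
    'SELECT id, source, target, cost, reverse_cost FROM edges'
);
"""

with psycopg2.connect("dbname=routing") as conn:
    with conn.cursor() as cur:
        cur.execute(SETUP)
        cur.execute(COLORING)
        for edge_id, color_id in cur.fetchall():
            print(f"edge {edge_id} -> color {color_id}")
```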
What is a distributed graph? It is a graph where the data is not on one system; it is distributed over more than one system. So we can apply the algorithm to that in the future, and we can also bring in some parallel algorithms for pgRouting, which would help with computation over large-scale graphs. So moving on, okay, these are the details about my project; I will share the links later, after my speech finishes. You can see the source code, the documentation and the wiki page about my project; you can visit them for more details. Here is pgRouting; you can explore it and contribute to it. I am sure you will enjoy it, because during this Google Summer of Code I have enjoyed contributing to the pgRouting project a lot: the mentors are very awesome, helped me at every step, and bore with me through all my silly mistakes. Yeah, for more about pgRouting I will share the links after the speech; you can visit and explore, or you can contact me at this email id. That's all for today, thanks. Thanks a lot, Vinit, a good project, and do you think you will apply again for GSoC? Yeah, yeah, this was my first time and I will still be a student in 2022, so I am looking forward to it. Again with pgRouting? Yeah, yeah, pgRouting. So see you around. Okay, thank you. So the next speaker is Harian Kenchapa Gull, I hope I am saying it correctly; he is from Pune University in India, and he is going to present his work on MapMint. Right, right. Okay. Cool, is the screen visible? Yes, can you start in presentation mode? Sure, sure. Okay, perfect, yeah, perfect, the stage is yours. Yeah, thank you. So hello everyone, I am Harian Kenchapa Gull. Currently I am a third year computer science and engineering undergraduate at the International Institute of Information Technology, Pune, India. My proposal to implement 3D scene visualization support in MapMint was accepted to Google Summer of Code this year under the OSGeo organization. So today we will have a look at the project details, including an overview of MapMint and the ZOO-Project along with the technology stacks involved, and a few details about the formats and their appropriate terms will be given wherever necessary. This will be followed by getting to know the Cheetah templating system and preparing the necessary documentation involved in the project.
So MapMint is a web-based GIS system which is designed to process GIS data online. In order to make use of this spatial information in an efficient and flexible way, the SDI infrastructure, geographic data, metadata, tools and the users are connected through a framework in an interactive manner. Talking about the dashboard of MapMint, it consists of three major sections: the overview, the users and the settings. Talking about MapMint, the overview is the section whose content is visible on page load; the users section is about user management, so it allows managing users, seeing the various functions present in the dashboard and accessing the various data present in the data store; then we have the parameters, which allow the management of application settings. Speaking about the functionalities, it allows various tasks related to the implementation of an SDI from a modular and user-friendly administration interface. Talking about the features, a MapMint instance allows you to import and store raster and vector GIS data, compose and save maps in the form of projects, and, you know, configure and run a cartographic portal. When we speak about the spatial data, MapMint processes various types of GIS formats; it converts and queries vector and raster data, as explained in the previous slide. Then one more thing I would like to talk about is the client side: on the client side, users experience an interface built with Bootstrap, HTML and CSS, which we see here in the map and dashboard. Moving towards the ZOO-Project, it is the open WPS platform and developer community behind MapMint. Now talking about the tech stack involved in the project: we have Three.js to generate and render 3D scenes. Three.js, as the name suggests, is built on JavaScript; it is a multi-purpose, cross-browser 3D library for rendering models. So basically users obtain dynamic visualizations of models in different file formats, which we will see in the upcoming slides in detail. Since the scenes are dynamic in nature, a Docker instance or a dynamic live server would be beneficial for this purpose. Next we have the Cesium library, whose generated template code is involved in rendering files of various formats such as 3D Tiles, CZML, KML, GeoJSON and so on. Again, Cesium is an open source, cross-platform tool for developing various 3D geospatial applications, which we use in the project for template code. Within a few lines of code, Cesium generates a globe along with beautiful terrain and imagery, and as per our choices we can make changes in the code. One more feature to be mentioned about Cesium is the REST API, which focuses on conversion of different data types into 3D Tiles, which is supported by Cesium, and using this we can add that data in the asset dashboard of Cesium to obtain the visualization. Now talking about the data formats involved, we can classify them basically into vector, raster, tabular data and documents. We have different types such as 3D Tiles, the glTF format, imagery, quantized mesh, CZML, KML and so on. These are the data types dealt with by Cesium as the back end of the project for implementing 3D visualizations; they can be seen on screen or in the web browser. Every dataset has a different extension, which indeed determines the quality and the data points involved, along with the time taken to render such data for dynamic visualizations.
Sounds cool, and yeah, that was about the back-end part involved, along with the ZOO services, to create the functionalities. Then we have the Cheetah templating system, which is somewhat similar, with a few modifications like adding hash signs before keywords. Cheetah is an open source templating and code generation tool written in Python; it can be used by itself or combined with other languages like JavaScript, CSS, HTML and so on. Talking about the loaders used for rendering the models, they are taken from Three.js; the most famous ones involved are the glTF loaders, the point, line and point cloud loaders, and the OBJ loaders along with MTL. MTL is basically one more format which is generally used with OBJ for adding quantized mesh and more detail to the models. Then again, not exactly a loader, but a generalized Cesium template can be a suitable option, because there is a globe involved, and the user experience for that is quite a bit better than Three.js where the globe is concerned. So yeah, that was about the back-end technologies involved in the project. There is one more part, which is an important part of the software development process: writing documentation to support the work and the technologies involved. The documentation covers the work done, for example setting up the development environments, installing specific modules, or, you know, how a given technology works, and so on. One more thing I really liked, and would like to add to the presentation, is the weekly reports culture which we followed during the coding period. It was very beneficial for me as a student, because it kept us punctual all the time, and we had discussions with the mentors regarding the documentation and about project-related tools, which tools to use, whether a specific library for Python or whether it was about creating services such as WPS, WMS or WFS. So that was about documentation, reports and the technologies involved in the back end and the front end. I would like to acknowledge my mentors for giving me the opportunity to contribute to the project, and also the OSGeo organization, the community members and my friends for being supportive and kind throughout the journey. So yeah, that's that, and for more information: everything involved, MapMint, Three.js, CesiumJS, all these tools are open source, and you can have a look at those tools and technologies; the links are over here. So that's that, and thank you so much for giving me the opportunity to speak here at FOSS4G, that's really cool. Thank you, and I'd be happy to answer any questions. Thanks a lot, Harian, an interesting presentation; is it your first time at GSoC? Yeah, it was my first time. And what do you think, will you apply again with the same project or a different project? So I'll continue contributing to this one, and I will look out for different repositories too, because open source is good, I mean, it's really amazing. Okay, good, thanks a lot and see you around. The next student is Aniket, from the Indian Institute of Technology Bombay, and he is also going to present a project with MapMint, so, hi Aniket.
Can you share... can you see my screen? Yeah, perfect. Can you... I know, sorry, yeah, this one; can you put it in, okay, slide mode? Okay, yeah, perfect, so enjoy the speech. Hello to all the community members present here; I am Aniket. First of all, I would like to thank you for providing me the opportunity to present the work which I accomplished during my two-month GSoC period. I would like to start with my project title, which is "Implement 3D visualization support using CesiumJS and integrate it with MapMint". First, I would like to describe myself: I am a graduate student in Geoinformatics at CSRE in IIT Bombay. I did my B.Tech in computer science, and currently I am a member of the MapMint community. The contents which I'll be covering during the course of this presentation are: an introduction, how MapMint was before GSoC 2021, and what updates I have implemented in MapMint during GSoC 2021; and I would also like to share my experience working with the community and the future opportunities that can be built on top of the updates I implemented during GSoC 2021. So I would like to start my introduction with MapMint. As you all know, MapMint is a geographic information system, software running on a web server. It is made to provide software for publishing cartographic portals and dynamic applications. It also provides numeric GIS capabilities and lets the users accomplish various tasks such as importing and storing GIS data, both vector and raster, configuring and generating web GIS applications, configuring and using GIS portals, and accessing and sharing maps. So this is the general overview page of the MapMint dashboard. As you can see, there are various tabs available in the MapMint dashboard where users can interact, upload their own maps, visualize their own maps and add new layers to the maps. The basic idea of this project, which will benefit the community, is nothing but 3D scene visualization: as we all know, it allows us to view data in three dimensions, which provides the user a new perspective. For example, instead of inferring the shape of a valley from the configuration of contour lines, with 3D scene visualization the user can see the valley and perceive the difference in height between the valley floor and the ridge. So we can see that 3D viewing can provide insights that would not really be possible from the same data as a georeferenced 2D map. That is the major advantage compared to 2D maps. So, how was MapMint before this GSoC? MapMint is a complete web mapping platform, as we all know, and has support for various features. One of the most important parts of MapMint is the ZOO-Project, which is an open web processing service platform that supports MapMint in running various applications by acting as a software and data infrastructure for MapMint. Earlier, MapMint was capable of processing georeferenced imagery out of the box, but there was no feature provided to the user for viewing the data in three dimensions which, as we all know, provides the user a new way of analyzing the data; viewing 3D data provides insightful details through better visualizations.
So what are the updates that I implemented during my GSoC 2021 period? I have implemented a tab in the existing MapMint UI, as we have already seen the tabs in the MapMint UI, which allows the user to upload their own 3D data and visualize it, so that they can get insights from the 3D data. This is the tab that I implemented in the existing MapMint UI. Here, the user can upload their own 3D data and then visualize it with the help of the feature that I implemented. Currently, as a sample, I have already uploaded a sample .gltf file, and this is the visualization output after clicking on the visualize button; this is the output that we see for the sample .gltf file. For viewing, I have used the CesiumJS API, an open source JavaScript library which allows viewing 3D visualizations on the web. The reason we used the CesiumJS API is that it performs very well, it provides precision and visual quality, it is easy to use, and it also allows us to visualize and analyze data on a high-precision WGS84 globe. Now I would like to share my experience working with the community. Since it was my first time working with such a huge organization, it helped me to increase my skills and gave me a lot of experience about how an open source community works, how they contribute, and how pull requests are created and merged. Working with the community also taught me about various open source technologies and the data, how the data is stored, and how geospatial data is generated and stored. As for the future possibilities: generally there are endless possibilities for what we can do with visualizing 3D data. By adding more support based on the requirements of the user, we can improve the task of visually interacting with the 3D data. Presently I have just implemented the visualization support, but functionality such as styling and filtering the 3D Tiles would allow users to highlight essential features of the dataset, which would give a better perspective and help understand the data better. I would like to thank all my mentors for providing such a good opportunity and helping me during my GSoC period. And lastly, I would also like to thank all the community members for allowing me to be part of such a welcoming and interesting community. I have provided the links for more information, and I have also provided the link to my GitHub work. With that, I would like to share one demo that I created for the task I implemented. This is the tab I created here. When I click on this tab, I upload a simple 3D object file, and when I click on the visualize button, I am able to see the 3D data in the CesiumJS viewer that I implemented on the client side. And as an add-on, we could implement various features on this 3D data, where we can select a particular feature on the map, or a particular piece of 3D data on the map, so that it will be helpful for the user for further purposes. So thank you. So thanks a lot, Aniket, a really good job. And how was your experience? Was it the first time? Yes, it was my first time; it was a very good experience for me.
Okay good. And do you think to apply again for the next year? Yes, sure, sure. For the same project as well. Okay, good. So thanks a lot and see you around. So the next speaker is Sandip Saurav, he's from Haiti in Bombay, India and he's going also to present a project related to map meet. Can you share your screen, Sandip? Okay, perfect. Yeah. Okay, sticking in a while. Okay, perfect. So good evening everyone. So I am Sandip Saurav, I'm currently pursuing my MTech from Haiti Bombay and my proposal for the ZSOC is aim to integrate Thereseen Builder as a WPS service within map meet user interface. So this is a brief intro about me as I'm currently pursuing my masters from Haiti Bombay and I did my graduation in electronics and communication engineering. These are my MLIDs and the contact thinking sites. So you can contact me. So these are the contents that I'll be covering coming to this. So let's talk about my motivation first. So earlier in the map meet, the user can visualize the 2D maps and can even generate or share or more things and do much more things with the 2D data. But when it comes to visualizer 3D models and to generate, it fails sometimes. So my main purpose was to generate a 3D model, generate a user interface to create a 3D model as well as to visualize it. So nowadays the 3D technologies in maps is an explanatory illustration that represents a scale of the real world objects. So with this motivation only, I came up with this idea to incorporate 3D technology in the map meet. So these are coming to map meet. So map meet is a GIS software on the internet that is designed to facilitate the development of spatial data infrastructures. And map meet is also for the individuals as well as the organization that wish to manage and optimize the SDS establishment and deploy their dynamic mapping applications. They are the various features and functionality that are supported by the map meet. So that you can import and store the vector and raster data, vary the database, publish the object data in the form of WMS, WFS and WMTS service. You can edit your data, you can compose and save maps, you can share your data. So these are the various features that are supported by the map meet right now. And this is a brief overview of this architecture, map meet architecture that it uses. It uses the service, zoo project, zoo, kernel and various other services and the servers. So what updates that I bring during this software in the map meet UI. So I created a widget on the map meet UI for users to easily access the 3D point cloud generation service. I created a WPS service to run 3D point cloud generation using the zoo service. And I created some volume to run the services and form and generate a point cloud smoothly. So I designed and 3D point cloud generation UI from where the user can directly load the image, run the service and also user can also download those point cloud and visualize some other software if needed. So what are the technologies used over here? So basically when it comes to technology, there are the various technologies that are available. So there were two technologies to generate the point cloud. First one was laser scanner, other one was photogrammetry. And since the laser is scanner is expensive and is not readily available to everyone. So we choose another option going with the photogrammetry. And we have implemented our project based on the photogrammetry. One of the technique that comes under the photogrammetry is structure from motion. 
Structure from Motion is a method of estimating the motion of the camera and reconstructing the three-dimensional structure of the photographed scene from images taken from multiple viewpoints. It's a pipeline algorithm with sub-tasks that process each image sequentially, and more specifically we can call it incremental Structure from Motion, because the camera moves incrementally in order. In this we used various steps that led to the final generation of the point cloud. This is the flow chart of what I implemented: we take multiple input images, I extract features, do image matching, estimate camera poses, triangulate points and then do bundle adjustment, which leads to the final reconstruction of the scene. For the feature descriptor, there are various descriptors available, like SIFT, SURF, KAZE, AKAZE, ORB and BRISK, and each has advantages as well as disadvantages. I used the SIFT, SURF, KAZE, ORB and BRISK feature descriptors while implementing this 3D point cloud generation. Coming to feature matching: matching is the process by which the features generated and described earlier are matched with the corresponding points in the other images. This is the matching that we did using RANSAC. Nowadays ORB is also famously used, as it does both feature extraction and description at a low computational cost. So we used RANSAC as well as ORB, and it depends on the user which one to use. Next, triangulating the 3D points: triangulation refers to determining a point in 3D space from its projections onto one or more images. For this problem it is necessary to know the parameters of the camera projection from 3D to 2D, and several cameras are involved. Given the camera matrices, triangulation is sometimes also referred to as reconstruction or intersection, because we take the features from the 2D images, project them into 3D space, and then triangulate, or connect, the points. This is the iterative, or incremental, reconstruction process of Structure from Motion: we move the camera, generate the features, then project and triangulate the features in order to build a 3D model. This is the point cloud generated from the multiple images. This is a sample of my work; this is the MapMint user interface after I completed my GSoC, and this is a brief overview of what the dashboard looks like now. And what are the applications? There are various applications that use 3D point cloud generation. For example, it can be used in the construction industry for 3D model reconstruction or inspection, and it is also used to measure heights and scales used in various measurements. It is also used for building information models, and nowadays point clouds are used in 3D game development as well. So this is a small demonstration of my work. We select some files over here, which are the image files; I am selecting 10 images right now and submitting all of them to process and generate the 3D point cloud. Now the point cloud has been generated successfully, and I have downloaded the point cloud.
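Before the MeshLab step in the demo, here is an illustrative OpenCV sketch of the feature-detection and matching stage just described, using ORB keypoints, brute-force matching and RANSAC filtering. The image paths and thresholds are placeholders, and the actual MapMint service may use different detectors and parameters.

```python
# Illustrative sketch of the detection + matching stage of an SfM pipeline.
# Paths and parameters are assumptions, not the project's actual code.
import cv2
import numpy as np

img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# RANSAC rejects outlier correspondences while estimating the fundamental matrix
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
inliers = int(inlier_mask.sum()) if inlier_mask is not None else 0
print(f"{inliers} inlier matches out of {len(matches)}")
```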
Now I am visualizing it using MeshLab: I have opened MeshLab and loaded the PLY file, that is, the point cloud file. Now this is the 3D point cloud that has been generated; I am adjusting it to look at its views from different angles. And this is a point cloud generated with only 10 images, so you can generate one from many more images, 50, 100 or thousands. The time it takes is the main obstruction here right now, and I am currently working on it. So what was the learning experience? It was my very first experience of contributing to an open source community. I got to learn so much from this; time management was also an issue, but I somehow managed, and I also learned how to manage time between institutional activities and other work, and I did a good job while implementing my GSoC work as well. It was a very exciting and unique journey for me, and if given a chance I would like to work further on this project. I would like to acknowledge my mentors Gerald Fenoy, Rajat Shinde, Venkatesh Raghavan, Sthisai and Samuel for giving me this opportunity, the OSGeo organization and the MapMint development community as well, and FOSS4G for letting me show and present my work. This is the link to my project, "3D Scene Builder as a WPS service"; you can either go to this link or scan the barcode present over here. I would also like to give a brief demo of one of my friend's projects; he wanted to join us, but due to some health issues he couldn't. His project was integrating a 3D module to 3D-scan a house within a MapMint application. The project allows a minimalist 3D scan: taking multiple pictures, recording the camera positions, and using OpenDroneMap to rebuild the 3D scene with the house faces, then loading the model in 3D as well as exporting the data back to MapMint for 3D viewing. This is the link to his project; you can scan the barcode to get to it. And this is a brief overview of the project, what he did in the first phase, the second phase and the third phase. Thank you. I'm open to questions now. Sandip, thanks a lot for your presentation, and also for bringing the project of your friend who could not be here. And so it was also the first time for you. And I read that you will try again to join the community and you would like to contribute more. So thanks a lot. Yeah, there is some work that is left; I would like to finish it, and once it's out, I'd like to complete it all. Okay, good. Thanks a lot. Thank you. So the next speaker is Saurav Singh from the Indian Institute of Information Technology in Nagpur, and he is also going to present a project related to MapMint. Hi, am I audible? Yeah, yeah, you are online, and can you share your screen? Yeah. Yeah, perfect. No, you closed it. Can you open it again? Yeah, yeah, just give me a second. Yeah, sure. Is it visible? Yes. Okay, fine. Can you move to presentation mode? Yeah, yeah, yeah. Thanks. So the stage is yours. You can start. Okay. Hi everyone. So this is Saurav Singh; I am a third-year undergraduate student at the Indian Institute of Information Technology in Nagpur, Maharashtra, India. And this summer I contributed to Google Summer of Code 2021 in the OSGeo organization, integrating Unity 3D with the present MapMint4ME app. So what is MapMint4ME? MapMint4ME means MapMint for Measurement and Evaluation.
MapMint4ME is an Android application for the MapMint web services, basically, built on top of the ZOO-Project. The idea of the project was basically about enhancing the augmented reality experience which was already present in MapMint4ME, which a previous year's GSoC student had built. At that time, the AR experience was built in the native Android application and written in Java. So when I was searching for projects, I saw that this AR experience was written in Java, and the first question that struck my mind was: why in Java only? Why isn't a 3D engine like Unity 3D, or, say, Unreal Engine or any of the many other 3D engines out there, used for this? So I contacted the mentors and briefed them about my idea of converting this project to Unity 3D. Why Unity 3D? It has more extensions, more things to play with, more ways to grow your project and more things to add on. So what did I do in this period? I built an AR draw plus measure experience in Unity 3D and added that Unity project as a library to the MapMint4ME Android project, which is written in Java. So, basically, what is AR draw? It is an AR experience which lets you draw simple lines using the line renderer in 3D space: by touching your phone screen, if you are using a mobile phone, it lets you draw lines in 3D space. What we did differently in this GSoC project: previously AR draw was known for drawing lines in 3D space only. We got the idea from the MapMint4ME app: why not use the AR draw feature for drawing on planes, for building plans, like drawing lines on horizontal and vertical planes to measure the length of any object, or, as a future idea, to draw building plans, store them on the servers, share them and use them later. So what did we do differently? In AR draw, we draw over the recognized space on the horizontal and vertical planes. When we launch the AR experience, it recognizes the horizontal and vertical planes in front of the device and you, and it lets you draw lines over that space. There are multiple features which we enabled for this experience. One of them is the line width adjuster: here you can adjust the width of the lines which you want to draw, for example 0.2 px or 0.4 px, and it can be set as you wish. Another is video recording while drawing: you can make a screen recording in the app while you are drawing the lines in AR. Another option is the multi-color option, where we can draw lines in multiple colors; it can be thousands of colors, since we added a line color dropper in the app. One more thing is that we can draw with the multi-finger touch ability: we can draw not only with one finger, but also with two fingers and three fingers. All these are the capabilities which we added in this project. Let me show you the demo. This is the UX, where it recognizes the plane and tells you to touch and draw. Here, if you see, where I have drawn one line on the horizontal surface, it tells me the length of the line, which, I don't think it's easy to see, is 0.68 m. I have calculated and double-checked it; it was fairly accurate, around a 0.02 m difference from the actual length. I was quite fascinated by the results. If you see, right now I have changed the line width to 0.6 px and you can see the difference in the line width.
Here is the color dropper feature, where we can select the color of the line which we want to draw on the horizontal and vertical surfaces. This clip was recorded with the in-app recording feature. This was the work which I did in my GSoC period. This project has many opportunities for the future; I am thinking of more ideas for it, keeping in touch with my mentors and trying to get good ideas to make it even better. Hi, Saurav. Thanks a lot for your talk. It was your first time at GSoC; was it a good experience for you? It was a wonderful experience for me. The weekly reports and the talks we had with the mentors guided us a lot and gave us great experience as software developers; we talked about the software development environment, which we needed to explore, and we got a lot out of the GSoC period this time. Do you think you will apply next year? Yes, I think so. See you around and have a nice day. Thank you. Have a nice day too. The next speaker is Caitlin Haedrich. Hi, Caitlin. She is from North Carolina State University; it's a university that we know quite well. She is in the GeoForAll lab, and there are some people there that are OSGeo charter members and also on the board. Can you share your screen? Thanks for your work. I am even happier because, although I haven't been developing so much lately, I'm part of the GRASS community, and I'm really happy to see several GRASS projects now. Hopefully you can hear me all right. My name is Caitlin, and hello from North Carolina, USA. I'm a second-year PhD student at NC State University, and this is my first year doing Google Summer of Code. My project was on improving the integration of GRASS GIS and Jupyter notebooks. GRASS GIS is a software that has been under continuous development since 1982, so it has been around for a while now. There are lots of ways you can use GRASS GIS. Probably the most common way is through the graphical user interface. You can use it through the command line. There is a bridge to QGIS: you can call GRASS modules from QGIS. You can also script in GRASS using Python, and there is even an R API add-on. So there are lots of different ways to use it. Jupyter notebooks we are all, as programmers, probably familiar with: an open source web application that lets you create and share documents that contain live code and equations; you can do inline visualizations and narrate your code with markdown. There are a lot of reasons why we want to integrate GRASS and Jupyter, because Jupyter notebooks are so popular and such a great communication tool, but there are several things that don't feel super smooth when you go to use GRASS in Jupyter notebooks. The first is that the session handling isn't that great: there are a lot of environment variables that you have to set every time. The bigger one is that there are limited rendering options. When you use GIS software, you would probably expect to visualize your data and interact with it: zoom, pan, toggle between layers. Before my Google Summer of Code project, the way that you would view your map was to first erase any display, then erase any legend files associated with it, then call the display modules, write the result as a PNG image, and then display the PNG image inline. For my Google Summer of Code project, I wrote a sub-package for GRASS GIS called grass.jupyter. It improves the launch and session handling, and it also provides two new rendering classes.
It's currently merged in the main GrassGIS repository. If you install the dev version of Grass, you're also installing GrassJupyter. It will be officially released with Grass8. That will be an experimental release because development is ongoing and there are lots of areas that we're still working on improving. The first, the small thing, you can now shorten the launch of Grass, of your Grass session in Jupyter with gj.init. Then the first class I have for displaying non-interactive renderings is the GrassRenderer. The GrassRenderer class uses an API that's very similar to the display library modules. Instead of calling d.rast to put a raster on your map, you call it d-underscore-rast. In addition to being more intuitive, one of the advantages of using this is that you can have multiple renderings going at the same time. In the previous integration, you had to be very deliberate or change how you wrote the.png file if you wanted to have multiple, start an image, make another rendering, and then go back and modify the first one. In this case, all the renderings are written to a unique temporary file. You can have multiple instances or multiple renderings going at the same time. The second and more exciting and complicated rendering option that I worked on this summer is Interactive Map. Interactive Map uses Foliom, a leaflet-based library for Python. Foliom lets you zoom, pan, and toggle between layers. It also comes with these nice tiled background maps. As it's written now, you can add rasters, vectors, and a layer control option. You can even export as HTML if you wanted to put your map on a website or share it with others. Interactive Map took me the longest because Foliom only takes data in WGS84 or in WGS84 pseudomarcator projections. In order to move things from the current GraspMap set, which probably has a different projection, into Foliom, first we had to create a temporary map set and reproject any data that you add to the map. Then we could export the vector data as a temporary GeoJSON file and import that to Foliom. For rasters, Foliom supports PNG overlay images. It doesn't directly support Geotifs. We had to provide Foliom with the bounds in WGS84 and a PNG image written in the pseudomarcator. That requires another temporary map set. We reproject the raster into a second temporary map set that's in pseudomarcator. We can export that as a PNG image and then finally import it into Foliom. This took me the longest of the summer to figure out how to do. The final thing that we worked on this summer was adding binder support to the main Grasp repository. Now if you go to the ReadMe on the Grasp GitHub page, you'll see there's a Launch and Binder button. Binder is a cloud-based computing environment that is shareable. If you click the Launch Binder button, it will bring you to the latest build of Grasp.js operating in the cloud. You can try new functions or sub-packages like Grasp.Jupiter. If your friend pushed a new module and you want to check it out without installing it on your own computer, you could check it out through Binder. There's lots of future work that I'm still working on with Grasp.Jupiter and hope to continue working on. Even this fall already, my mentor has written Grasp 3D renderer which creates images like I've shown here. We're still working on the session handling within it, having it end the session without needing to call finish. 
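A sketch of the same task with the new grass.jupyter classes described above (GrassRenderer for static images, InteractiveMap for the folium-based view). The exact init arguments and method names changed a little across releases, and the paths and map names below are illustrative (North Carolina sample dataset).

```python
# New grass.jupyter workflow: start the session, then render with the two classes.
import grass.jupyter as gj

gj.init("~/grassdata/nc_basic_spm_grass7/user1")   # replaces the manual session setup

# Non-interactive rendering: d.* modules become d_* methods on the renderer.
r = gj.GrassRenderer()
r.d_rast(map="elevation")
r.d_vect(map="streams", color="blue")
r.d_legend(raster="elevation")
r.show()                        # displays the rendered image inline

# Interactive rendering: folium-based map with zoom/pan and layer control.
m = gj.InteractiveMap()
m.add_raster("elevation")       # reprojected and overlaid as a PNG behind the scenes
m.add_vector("streams")         # exported to GeoJSON behind the scenes
m.add_layer_control()
m.show()
```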
Then, Foleum, as I mentioned earlier, I think there's still a lot of room for improvement there, adding options to click on vectors and access attribute data to access more of the Foleum library, do more things besides just simply adding a vector or a raster. Grasp is known for its time series visualizations, so it would be neat to have some time series visualization techniques specifically for Jupyter Notebooks. Finally, integration with other libraries like Geopandas so that you could display your vector attribute as a table or something. I plan to continue working on that and I want to thank my mentors and all the program administrators. This was a wonderful experience and I learned so much. A huge call out to my primary mentor who spent a lot of time getting me up to speed and teaching me. So thank you all. I'd be happy to take any questions. Thanks a lot, Kathleen, really good project and good improvement for Grasp. So can you tell us something more about the experience that you had and if you think to apply again next year and what else? Yeah, I might apply again. I am definitely planning to continue being a part of the Grasp Dev community and I'm excited to work more on Grasp Jupyter. My lab group at NC State is involved in Grasp development so it fits well with my dissertation work and research. So you will be seeing more of me there. Okay, good. So thanks a lot again and see you around. See ya. Next speaker is again a student for the Grasp genius project. He is from National University of Singapore and he's a hard on so his project was to parallelize the module for Grasp. Yes, so please, Aaron, can you share your screen? All right. I don't see the screen. Sorry, we can hear you not well. Do you have several tabs open on the browser? Can you hear me? Not really. Can you hear me now? Not really well. Sorry. Can you turn on? I can put the slide, I can see them, but I cannot hear you really well. Do you want to try to switch the browser, maybe? Maybe Aaron, maybe we will switch with Linda that she just arrived and we will do later after Linda. Okay. So sometimes the connection is not good. So we will move to another speaker. She just arrived and she is Linda from University of Prague. Sorry, Linda, if I put you on the stage without any advice before, you are online. So there was some problem with Aaron and you are ready for the presentation. So just one moment, we will try to get some, to get one of the two students back. Okay, so Linda, are you ready now? Okay, perfect. Thank you. Sorry, but there was some problem with Aaron and so since you are here, we will start with you and after we will move to Aaron. Okay, so I'm in the studio. Can you share your screen? Sorry, I need to share. Sorry, sorry. Yeah, yeah, perfect. Great. Okay, so the stage is for you. Yes. So I'm a PhD student from Prague from the Czech Technical University. And I would like to tell you something about my Google Summer of Code project called First Steps Towards New Grass GIS Single Window GUI. So first of all, we will talk about the state of art before Google Summer of Code. Then we mentioned some project goals, the state of art after Google Summer of Code and also the next steps for further development. So how did it look like before the Google Summer of Code? It looks actually the same still, but I worked on this in the parallel environment and it's not ready so far. But we will talk now about the state of art before. So the basic, the grass GIS has actually two windows. It's the window GUI. 
And the first window is the control window and you then can have additional separate map display windows. The control window contains a notebook with five tabs in the standard to the map view. It's data, display modules, console and Python, and you can add also 3D view tab. So it is state in the version 8.0. So our project goals, the first goal and probably the main goal of the project was to do the necessary factoring to prepare grass GIS for single window GUI. So to make necessary changes in the code, in the WX Python code, which is used for gooding. And then the second task or goal was to make a really very simple single window GUI with really very simple base functionality. So the state of art after Google Summer of Code is as follows. We have the single window GUI prototype coded in a parallel script and it's not merged yet into the development version because we still waiting for grass 8. And we will then we need to do this thing and then we will make the thing related to the single window GUI. So I think that, well, it's functioning. I have some screenshots I will show you. But it's not like it has just the simple functionality. So the things that are also that also works for multi window GUI, you can expect that it will work also in the single window GUI. But we also want to have some other things, some special thing that are not in the multi window GUI. So most of the basic functions are functioning. But to really provide a really user friendly environment, we need to make many other things. And also it's not possible to try the single window mode yet. But I think it will be possible soon. So here you can see some screenshots. So it's everything based on the Google paints. You have those five or six paints and then there's the center map display notebook. You can minimize paints. You can move paints. Also, you can split the notebook into two separate windows or whatever number of map displays you have. And I think in this moment it's very important to say the next steps, the future development because it will meet many functions. So there are some general things. We are planning, but it's very important thing, the first one, because we are planning to have the multi window GUI inside the single window GUI basically. So everyone who is used to use the multi window GUI can make, should be able to make it somehow from the single window GUI. So the map display notebook tabs will allow user to be undocked into the separate window. So then the user could can move the display to the second monitor, for example. So it's the first very important thing. And then there are some things related to checking work spaces or there are also the things about console pain. It's rather the things related to better widget organization or nicer appearance. So for example, we would like to have a nicer appearance for dark mode because some parts are ugly. And also we need to change the organization of 3D panel pain and also on the console tab probably. And also there's problem with the status bar as you can see here. It's not visible properly. So it's also something needed to be repair. And so basically mainly the widget organization and nicer appearance. Then we have one thing which is also very interesting. And it's that the user will be allowed to choose a convenient layout of widgets or to create their own layout. And it should be possible through the perspectives. So it's something that could be part of the new menu called view. So that's everything. Now, thank you very much for your attention. 
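The single-window prototype Linda describes is built on wxPython's dockable-pane machinery. The snippet below is not the GRASS GUI code itself, just a minimal, self-contained illustration of the pane-plus-central-notebook layout idea using wx.aui; the pane captions and contents are made up.

```python
# Minimal wx.aui illustration of a "single window" layout:
# dockable side panes around a central notebook of map displays.
import wx
import wx.aui


class SingleWindow(wx.Frame):
    def __init__(self):
        super().__init__(None, title="Single-window layout sketch", size=(900, 600))
        self.mgr = wx.aui.AuiManager(self)

        # Central notebook: each tab stands in for one map display.
        notebook = wx.aui.AuiNotebook(self)
        notebook.AddPage(wx.Panel(notebook), "Map Display 1")
        notebook.AddPage(wx.Panel(notebook), "Map Display 2")
        self.mgr.AddPane(notebook, wx.aui.AuiPaneInfo().CenterPane())

        # Dockable side panes, roughly mirroring the tabs of the old control window.
        for caption in ("Data", "Display", "Modules", "Console", "Python"):
            self.mgr.AddPane(
                wx.Panel(self),
                wx.aui.AuiPaneInfo().Left().Caption(caption).BestSize((220, -1)),
            )

        self.mgr.Update()   # commit the layout


if __name__ == "__main__":
    app = wx.App()
    SingleWindow().Show()
    app.MainLoop()
```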
So Linda, thanks a lot for your presentation and the hard work that you did. Yes, thank you. This is the second year for you. So you cannot apply again if I understood correctly. Yeah, I cannot. Do you think you became a mentor or something like that? I will see. But I'm planning to continue to work on this topic within my school project. And so I would like to continue on single-video GUI. Makes sense. I'm not sure if I would like to be a mentor, but probably yes. I'm not sure if I'm a good teacher. You need to help the people to finish their work. Yeah, but now it's better if you finish your work and it's more important right now. Okay. Okay, so thanks a lot. Now we will try again with Aaron. Can you hear me better now? Yeah, perfect. So I will share you and the stage is all for you. Thanks, Luca. So my name is Aaron and I'm currently a junior student in National University of Singapore. And my project title is, parallelizing raster competition in grass with the open MP framework. And before we start, I would like to give my thanks to my mentors, Hwede, Wysheck, Maris and Anna for the guidance and also the OSGO and grass community for being very welcoming. So first of all, just a very brief intro, what are rasters? So rasters are actually just any files that are sort of pixelized and each pixel or cell contains certain value and this kind of files, usually we call it rasters. So an image file like PNG is a raster. And raster competition is essentially we have some input raster map and we sort of have some function F that transform this input raster map to either like some statistics mean median or we output another raster file taking in this input file. And so for example, like one of the module called RUnivar is like the name suggests is on univariate statistics. It generates some crucial statistics on the input raster files. And also for the output type that for example, we have our neighbors, which for each cell in the output file, it actually takes the surrounding neighbor, neighboring cells and sort of take the mean and sample it to the output, new output file. And so grass has many, many such modules that does raster computation. And the inspiration for this project is like, if you are familiar with Morse law, essentially what he says is that computational power, which correlates heavily with the number of transistors you can feed on the microchip, it actually doubles every two years. And as you can see, it's a linear graph and on the Y axis is a logarithmic scale. So just fun fact is like this past May IBM actually claimed that they have managed to make a two nanometer chip. And for context two nanometer is like the size of a DNA molecule. But there's a certain limitation on how far we can go by simply reducing the transistor size. So what else can we do to improve the computation? Perhaps you can explore different computational models, which is the current trend right now. For example, the neuromorphic computing and stuff, or we can explore like different material designs. But today we want to talk about parallel programming to speed up the computation. So the framework that we choose is called OpenMP and it's a very lightweight framework to enable parallelism. And the alternative to this is standard for joint model, which actually create a separate process to do the computation. And a better comparison to OpenMP would be something like a POSIX native library, a thread library called P thread. And another framework which is called MPI. 
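As a conceptual aside on what a module like r.neighbors computes (each output cell is a statistic, here the mean, of the window of input cells around it), a tiny NumPy sketch follows. The real module streams rows from disk in C rather than holding an array in memory, so this is only meant to make the computation concrete.

```python
# Toy version of an r.neighbors-style moving-window mean (window size 3):
# every output cell is the average of the 3x3 neighbourhood around the input cell.
import numpy as np

def neighborhood_mean(raster: np.ndarray, size: int = 3) -> np.ndarray:
    pad = size // 2
    padded = np.pad(raster, pad, mode="edge")          # handle the borders
    out = np.empty_like(raster, dtype=float)
    rows, cols = raster.shape
    for i in range(rows):
        for j in range(cols):
            window = padded[i:i + size, j:j + size]    # the surrounding cells
            out[i, j] = window.mean()
    return out

if __name__ == "__main__":
    toy = np.arange(25, dtype=float).reshape(5, 5)
    print(neighborhood_mean(toy))
```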
But there is more for distributed architectures and it focuses on like computation between computers to segregate the work done. So as you can see on the right, it's actually quite simple to like parallelize a certain region. For example, if we have some for loop that does some very computational intensive work, just by plugging the one line, the product line, you can actually essentially enable parallelism. And on the diagram below, you can see that this is the model on how OpenMP usually work. You have a master thread and when you enter a parallel region, it creates like multiple threads to run certain work. And also the parallel region can be nested as well. So what's the goal of the project? It's really very simple. Like we just simply want to speed up the computation. Imagine like working on an input raster like 16 billion cells, like it may take up to two plus hours. Essentially, it will be good if we can reduce it to like simply 20 minutes. Right? I'm sure that's good. But we don't want to do that without sacrificing the correctness of the algorithm. And also we want to maintain the previous behavior. So we still want to keep the memory usage and the disk usage in check. So a very simple look at how the module generally does the computation. Imagine if we have an input raster file of r rows and c columns, r times c. And essentially the workflow is like this. The master thread will actually read some rows into the sum buffer in the memory. And it will do some computation on this buffer and put the result on the output row buffer. And after which it will write from the memory to the disk. And it will repeat this sequence like r times. On the other hand, there are some modules which completely transfer all the input raster into the memory. After which it will dust the computation and after which it will transfer the result of the computation back to the disk. As you can see on the right, there are some sequential bottlenecks. In the sense that when we write to the disk, if we are writing on the 50th row, we need to make sure that the first to the 49th row is actually already written before we can move to the 50th row. And this actually brings some consequences. Basically for the first type, it's actually not easy to parallelize. Because the thread needs to wait for each other's work to complete before they can move on to pick up the next row. But on the second type, it's actually very easy to parallelize because we can just parallelize the read section and the computational section. And just leave the write section to be sequential, totally sequential. But the key difference is the first type will incur very low memory footprint, almost negligible. But the second type, essentially you need to transfer the, if you have an uncompressed 16-git raster file, you need about 30-plus gig on memory, which is a lot. Most of us probably have 16-git of memory or RAM. And of course the last type will be just some statistical output, which is actually quite similar with the first one, which has very low memory footprint for the module R-Univar. So I tried two different attempts. And the first attempt is for the in-between the sequential write, I actually put some sort of a temporary file buffer on the disk. So every thread will write on this temporary file before transferring to the final output map. But the thing with this is that it is very dependent on the user's like disk write speed. For example, if you have HDD versus SSD or NVMe SSD, then the write speed will differ. 
And also, even though we can have very low memory footprint, there are very high disk overhead. And this, essentially you need to use up a lot of disk space. The second attempt is to split the work done into chunk. So on the output file, you can see that it's split into like five chunks. And we will essentially open a new parallel region per chunk and allow and essentially increase the output buffer from a row to a chunk. And the threads will independently write on this chunk altogether. And after the buffer is filled, we will write sequentially back to the rest of the file. And we will repeat this loop for every chunk. And the good thing about this is we can maintain the previous behavior simply by setting the chunk size as a row. And it's very flexible as the users can specify exactly how much RAM they want to allocate for this process. And yeah, so this is our final choice of algorithm. And the result is actually quite promising. Like on the left, you can see that the R-Nebel module with the window size of 15, it takes about two hours and a plus on a 16 billion cells for a single thread run. But if you use about 12, 13 threads, you can actually reduce it down to like 20 minutes, which is good. And what this graph tells us is actually by using larger chunk size, it actually doesn't lead to better performance, which makes sense. Because as long as you allocate reasonably large buffer and there's the overhead of opening new parallel region is kept reasonably low, it won't actually affect performance because most of the work done is actually on the competition. And also, the performance is very dependent on the competition to IO ratio, which makes sense as well. So also, actually it turns out that the first approach and the second approach have very marginal difference in performance. But we still prefer the second approach because the first approach requires extra disk space, which is not elegant. So these are all the modules that have been parallelized and will be merged into graphs on release 8.2. So what's next? We can parallelize more RESTor modules and more popular one like RMET Calculate. And we can start working on like 3D RESTor modules, which is like just slightly more complicated. And also one thing that I didn't mention is that for the statistics output type modules, right? Actually some floating point discrepancies because the summation order of the floating point is different now that we implemented the parallelization. The output is actually different. So this can be solved by implementing some floating point discrepancies reduction algorithm like Khan summation algorithm. I'll be working on this after this. And thank you for your time and I will open it up to any questions and you can check out the project right out here. That's all from me. Thanks a lot, Aaron. Impressive work, what you did. And yeah, no question from the audience and there are some comments that someone is already testing in a real live application and they are really happy. So it's a good answer. It was your first time in Google Summer Code. Yeah, in fact, it's my first open source project as well. And how was the feeling and it was amazing because like I get to work with very great mentors and they give me quite a lot of good advice and I've learned a lot. Okay, this is the most important part. Do you think to apply again? Yeah, I might consider our but I would definitely continue to contribute to the same project. And maybe work on more modules, but I would think to apply it again. Okay, good. 
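Stepping back to Aaron's chunked parallelisation: the real implementation is C with OpenMP inside the GRASS modules, but the chosen strategy (read a chunk of rows, let workers fill the chunk's output buffer in parallel, then write the chunk out sequentially) can be sketched in plain Python. Everything below is illustrative, not GRASS API; processes are used instead of OpenMP threads because CPython threads would not give a comparable speed-up.

```python
# Conceptual Python analogue of the chosen strategy: process the raster in chunks,
# parallelise the computation inside each chunk, keep the disk writes sequential.
# read_row / write_rows stand in for the real GRASS raster I/O (C library calls).
from multiprocessing import Pool
import numpy as np

ROWS, COLS, CHUNK = 1000, 500, 100   # toy sizes; CHUNK bounds the memory footprint

def read_row(i: int) -> np.ndarray:
    # placeholder for "read one input row from disk"
    return np.random.default_rng(i).random(COLS)

def compute_row(row: np.ndarray) -> np.ndarray:
    # placeholder for the per-row computation (e.g. a neighbourhood statistic)
    return np.sqrt(row) * 2.0

def write_rows(first_row: int, rows: np.ndarray) -> None:
    # placeholder for the strictly sequential write of a finished chunk
    pass

if __name__ == "__main__":
    with Pool() as pool:
        for start in range(0, ROWS, CHUNK):
            indices = range(start, min(start + CHUNK, ROWS))
            chunk_in = [read_row(i) for i in indices]        # fill the input buffer
            chunk_out = pool.map(compute_row, chunk_in)      # parallel region
            write_rows(start, np.vstack(chunk_out))          # sequential write
```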
Thanks a lot and see you around. So, now the speaker of GSOC program are finished. And we have more to two speakers that are coming from the UN challenge. The first one is Patrick Hupp is an engineering PhD and now is working at the Pontifical Catholic University of Rio de Janeiro. Hi Patrick. Hello. So, this slide is already there and the stage is all for you. Thank you for this. So, good morning everyone. Today I will talk about the UN OSU educational challenge. That's about the training on satellite data analysis and machine learning with QGIS. So this is a high level presentation, a very quick one. So if you have any questions later, you can contact me and so on. So Luca already introduced me, so I don't need much more than that. So just say that I also remember of the two sites ready to remote sensing. IASPRS and the IEEE GRSS. And my main research topics are about image analysis, computer vision, remote sensing, machine learning, deep learning and cloud computing. So basically the idea of this challenge was that satellite imagery is becoming a trend, especially to generate geospatial information due to many open data source available nowadays. So the M was to prepare a tutorial about the functionalities in the QGIS platform and also plugins for processing satellite data. So this is just the title of the tutorial, it's machine learning with Earth observation data, cases studies with cement segmentation and regression. It's basically a hand-drawn approach based on two exercises that address different applications related to machine learning using satellite data and also the QGIS plugins. So both applications are related to climate change with, I believe, very important topics. And since I'm from Brazil, I also choose plugins related to South America. So the tutorial was developed in Sphinx, so it can be easily generate HTML files, PDF files and so on. And here is how it looks. So if you're assessed by a browser, for instance, you can navigate easily to the chapters. It was developed and tested for the current long-term release, at least it was the current long-term release when we started. And it was also tested on different operation systems. And all necessary data is also available on an original repository. So the first exercise is a supervised change detection. And the application that I chose was monitoring the evolution of the snow-capped peak of the Huascara mountain in Peru during the last five years. So the methodology goes from downloading the email to creating the dataset, so we have to download, click, preprocessing and also collect samples for training. Then we apply cement segmentation generating a classification map for each date. So in this case, we are using random forest classification, but we could use a different machine learning algorithm for that. And in the end, we also compare the classification maps to create a change detection result. So basically we use two plugins for that, the SCP that have a lot of tools. So it basically enables us to download products like Sentinel data, Landsat data. And we can also, in preprocessing, post-processing, make some reports. So it's a very nice tool. And also the Citzaka classification tool that we are using in order to train and perform the classification. So here are some results, a summary that we are calculating and measuring how many pixels that are related to the ice for each year. And here we can also see the emits and the generated mask for each year. 
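In the tutorial this whole step is driven through QGIS plugins (SCP for download and preprocessing, dzetsaka for the classification), but the underlying idea, fitting a random forest on labelled pixel spectra, classifying each date, then comparing the two maps, can be sketched with scikit-learn. Everything below is synthetic placeholder data, not the tutorial's actual Sentinel scenes or samples.

```python
# Sketch of the exercise-1 idea: supervised classification of two dates,
# then a change map from the difference of the two classifications.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

H, W, BANDS = 100, 100, 4
rng = np.random.default_rng(0)

def fake_scene(seed):                        # stand-in for a preprocessed scene
    return np.random.default_rng(seed).random((H, W, BANDS))

scene_2016, scene_2021 = fake_scene(1), fake_scene(2)

# Training samples: pixel spectra + class labels (0 = snow/ice, 1 = other).
X_train = rng.random((200, BANDS))
y_train = (X_train[:, 0] > 0.5).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

def classify(scene):
    flat = scene.reshape(-1, BANDS)           # one row per pixel
    return clf.predict(flat).reshape(H, W)

map_2016, map_2021 = classify(scene_2016), classify(scene_2021)

# Change detection: where the class label differs between the two dates.
change = map_2016 != map_2021
print("changed pixels:", int(change.sum()))
```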
So here are some, just two examples, and how we can do the change detection. So the red dots here, the red polygons here are when the snow cover are diminishing and the green ones are when snow cover are expanding. The second exercise is based on linear regression, and the application is monitoring the deforestation trends for some phallic sloshing glue in the state of Para in Brazil during the last four years. So again, we start creating the data set, we have filtered bounds, we also add some bands like NDVI, and then we apply the regression in order to create an NDVI trend. The plugins used here are the GE time series explorers, to make analysis, and also the Google Earth Engine and PyQ GIS to work with the data and also applying the linear regression. And here are the results of this part. So the producer trend map is on the left, and we can also compare with some map, for instance, the one generated by the global forest watch map, and actually it's better than I was expecting since this is a simple example. And the red dots on the maps correspond to the deforestation increasements. So, the user will be able to understand and work with some machine learning and satellite data processing tools, so they can be able to download, pre-process, clip, and satellite the image. They can create a data set for supervised machine learning, performance, implementation, linear regression, or different machine learning necessities, and also highlight the results, measure, quantify, change from different dates. It will be published soon on the OSGU website. And just a note, the results that I show you, and they are part of the tutorial, are just preliminary results. So we focus more on how to use the tools than on the accuracy. I would like to thank the mentors that are Maria Brovelli, Cristina Brinciano, Koon Fan, and Zongzi Chen, and also some contributors that helped me to test the tutorial, like Pedro Diaz and Jorge Parides. That's it. Thank you for your attention, and if you need something, you can also contact my email. Patrick, thanks a lot for your presentation, and there are no questions from the chat, but I have a question. Is it online? Is it a documentation? Yeah, it will be online soon. It's not yet. The mentors have to check everything. Maybe we have some chance, and it will be online, I believe, in the next month, probably. Okay. And it was a good experience for you. Did you learn something? Did you add some... Yeah, it was a very interesting experience. I already did some workshops and some tutorials, but this was a different one. I never did one for QGs, and it was very nice to see also the plugins I have to learn, some of them that were not so in-chewed. So, yeah, it was a very nice experience. I hope I can collaborate with the community with that. Another question. Do you use Sphinx because also the QJS documentation is done by Sphinx, or it was your own choice? No, no. Actually, it was something that was already on their plans. Actually, it was my first time using Sphinx, and I thought it was a very nice one, too. And it's very easy to use, and you can generate a very simple page, for instance, and it's easy to change, it's easy to maintain. I really like to use that. Okay, good. So, thanks a lot, and now there will be the last speaker that is Swapin Yoshi. It's a student of in-jewel formatics at the Internet, the EAT of Bombay in India, and his keen interest is in open source tools, GIS, and artificial intelligence. 
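The core numerical step of Patrick's second exercise, the NDVI trend, is a least-squares line fitted to each pixel's yearly NDVI values. A minimal NumPy sketch follows, with synthetic data standing in for the Earth Engine composites used in the tutorial.

```python
# Sketch of exercise 2: per-pixel linear trend of NDVI over several years.
import numpy as np

years = np.array([2017, 2018, 2019, 2020])
H, W = 50, 50
rng = np.random.default_rng(0)

# Fake NDVI stack, one layer per year; deforestation would show a negative slope.
ndvi = (0.8
        - 0.05 * (years - years[0])[:, None, None]
        + 0.02 * rng.random((len(years), H, W)))

flat = ndvi.reshape(len(years), -1)                # shape (years, pixels)
slope, intercept = np.polyfit(years, flat, deg=1)  # least-squares fit per pixel
trend = slope.reshape(H, W)                        # negative = NDVI decreasing

print("mean NDVI slope per year:", float(trend.mean()))
```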
He's also part of UN Challenge, and he will tell us something more about his project. Hi Swapin, can you switch on? Hello, Jorga. Alright, and can you share your screen, please? Sure. So, I hope my presentation is visible. Yeah, it's visible, and it's really good. So, the stage is for you. Yeah, thank you so much, Luca. So, hello everyone, this is Swapin Yoshi, and today I will be presenting before you the work I had done in the OSGO UN Committee Educational Challenge 2021. So, the title of today's presentation will be Achieving Sustainable Development Goals with PC Routing. So, before I start, I would like to give a very special thanks to all the mentors, that is Vicky, Rajat, Timur, and Serena, for their continuous guidance and support. So, yeah, let us begin with the presentation. So, first of all, who am I? So, as Luca mentioned, I'm a grad student in geoinformatics at the Center for Studies in Resources Engineering in IIT Bombay, India. I'm also an urban planner and a member of PG Routing Community. Recently, I was also a winner of OSGO UN Educational Challenge. So, that's all about me. Now, let's go ahead with the details of the challenge. So, this challenge was organized by UN Open GIS Initiative. So, we will understand a bit about UN and Open GIS. So, UN stands for United Nations, and it is an intergovernmental organization which was founded in 1945, that aims to maintain international peace and security, and develop friendly relations among the nations to achieve international peace. The UN Open GIS Initiative aims to identify and develop an open source GIS bundle that meets the requirements of the UN operations of both peace building and peacekeeping. So, this workshop is for PG Routing, and as many of us know, PG Routing is not only useful for routing vehicles or cars on the road, but it can also be used for several other purposes, such as analyzing the river flows, analyzing the water flows. And analyzing the connectivity of the electricity network, etc. We'll be delving into more details as we go ahead. So, in the above context, the challenge here was to create workshop material for PG Routing with exercises that would help achieve the targets of the UN SDGs. So, what is SDG? SDG stands for Sustainable Development Goals. So, the sustainable development goals were envisioned by the United Nations. These are the 17 interconnected goals that were adopted in 2015. This was a universal call to all the nations to take the action to end poverty, hunger, and to protect the planet from over exploitation. The 17 goals are integrated in such a way that they recognize that action in one domain will affect the action in others, will affect the outcomes in others. So, the aim of this challenge is to expand PG Routing workshop to cover three of the UN Sustainable Development Goals. So, the three sustainable development goals here are. First is the third one, good health and well-being. In this sustainable development goal, the exercise which was done was estimation of population served by the hospitals. The second goal was SDG 7, that is affordable and clean energy. And the exercise which was done under this was optimizing the electricity distribution network. And the final one is the SDG 11, that is sustainable cities and communities. And the exercise done here was, is the city getting affected by rain or not? So, in this short presentation, we will be looking at the third exercise. So, as you all know that the world is increasingly urbanizing. 
More than half of the world resides in the cities. So, this makes it very important for us to take care of the cities and health of the cities when there are disasters. So, this sustainable development goal aspires to make the cities inclusive, safe, resilient and sustainable. There are several targets of this goal out of which the two targets are 11.5, that is reduce the adverse effects of natural disasters and 11.B. That is implement policies for inclusion, resource efficiency and disaster risk. So, this exercise will focus on these targets. So, the flooding may happen in a city because rain happens at some another place. So, the rain may happen at a certain distance point and water may flow through that river or any waterway connecting to the city and the city may get flooded. This makes it very important for the cities to remain alert when there is a chance of disaster like floods or flash floods. The local administration should know if the city is going to get affected by the rains which happened at some other place. So, that they can raise a flood alert among the citizens. So, this problem, I mean this exercise will solve one of such problems. So, the problem statement we have at our hand is to determine the area where if it rains, will the city be affected or not. The core idea behind this is that if it rains in the vicinity of a river which connects the city, then that city is getting affected. So, now let's see the methodology which we are going to follow. First, we will be choosing a city. Second, we will be getting the data of rivers. Third one would be creating the connected components of the rivers. The fourth one would be creating a buffer around the city to get the idea of the proximity. The fifth one would be intersecting the city buffers with the river components. And sixth one would be creating a buffer around those river components. So, we will go deeper into each step. First one is choosing a city. So, for this step, we are choosing the city named Murshigand from Bangladesh. So, why have we chosen this city? So, this city has multiple rivers in its proximity which makes it an app location to demonstrate this exercise. To define its location, we use PG routing and we use its latitude and longitude to store it as a point in a table. The next step is to prepare the data. So, to prepare the data, first we have downloaded the data from OSM. We choose the area and we download the data using Overpass API or directly using export. After that, we use OSM to PG routing converter which is a command line tool that inserts the data into the post-data base. So, now once we have the data, now we can go ahead and work on this data using PG routing. So, this is the visualization of the data we have. But the problem here is that the rivers are made up of multiple edges. So, we need to find that each river, we need to find all the edges which belong to a river. So, how can we solve this problem? So, this problem can be solved by using PGR connected components. So, the next step here would be creating river components. And we use PGR connected components which is a tool which gives the components of an undirected graph using a depth-first search-based approach. So, when we use the tool, so we get the output like this. So, the different colors over here signify the corresponding components. So, each component may contain multiple edges. Now, let us proceed to the next step. So, now we have our components as well as our city. So, next step would be creating a buffer around the city. 
So, we use the post-data function as the buffer to create a buffer around the city. So, to visualize it more clearly, so this is how it will look like in your GIS. So, after this we will be finding the intersecting reverse in the buffer to take to find the rain zones ultimately. So, we use SP intersects with the reverse and the buffer function to get the intersection. So, as we can see over here, these three edges or reverse are intersecting with the buffer. Now, to get the rain zones, we use SC buffer function and this is the output. But as you can see, this output seems a bit messed up because as I told earlier, this edge is made up of multiple edges which is causing these multiple polygons to appear. So, what we can do is we can take the union of all those polygons and get a combined rain zone. So, we have achieved the objective for this exercise that was to get a rain zone where if it rains, the city will be affected. So, if the local administration gets the news that it is raining heavily anywhere over this area, it can raise a flood alert and alarm the citizens to stay away from the water banks. So, that was the end of the exercise. Now, I would like to tell some learning experiences which I experienced during this UN educational challenge. First was to contribute to the United Nations Sustainable Development Goals. So, contributing to such a big goal which is envisioned globally was a big pressure for me. Second was creating workshops with globally reproducible exercises. So, just by changing the bounding box or by changing the area, the same exercise can be reproduced for any area. It can be used by many users. Then, solving real-world problems using open source tools, of course, and then graph algorithms like CRUSCAL, FRAME, TRIVING DISTANCE, etc. Also, the main learning was that easy routing can be used for the exercises or the problems other than routing vehicles. So, yeah, once again, I would like to thank my mentors and also the OSHO UN Committee for giving me this opportunity to take part in this challenge. Also, I learned a lot from the very lively PZRouting community and I would love to be its part in the next challenge also. So, thank you everyone and you can find the whole work of this UN challenge at this QR code. So, just before saying bye-bye, I would like to present the work which I had done in a very less time. So, yeah, this is the website where the work is published and these are the five chapters. So, we saw the fourth chapter right now. So, this is the third chapter which talks about estimation of populations served by the hospital. So, this chapter contains detailed exercises which are very nicely explained and which can be done very easily by anyone who knows very basics of PZRouting. So, I will quickly just show the outputs. So, here we see this is the hospital and this is the service area or the roads served by the hospital. If you see, this is the generalized service area and ultimately what we are doing is we are estimating the population in each building. We are storing it into roads and then we are taking the sum of all these roads, population from all these roads and using it as a dependent population for the hospital. Yeah, and the final chapter here is to find is optimizing the electricity network. So, this is the sample network which I had taken. So, when the electricity distribution lines are laid, it is not laid on every road. So, the network needs to be optimized to bring the cost effectiveness. 
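Taken together, the rain-zone workflow just described (connected components of the waterway graph, a buffer around the city, ST_Intersects to pick the relevant rivers, then ST_Buffer plus ST_Union for the final zone) boils down to a couple of SQL statements. The sketch below runs them from Python and assumes data loaded with osm2pgrouting into its default ways table; the city table, geometry column names and buffer distances are illustrative only.

```python
# Sketch of the rain-zone exercise as SQL run from Python.
# Table/column names follow the osm2pgrouting defaults; buffer sizes are arbitrary.
import psycopg2

SQL = """
-- 1. Label every waterway edge with the river component it belongs to.
DROP TABLE IF EXISTS river_components;
CREATE TABLE river_components AS
SELECT c.component, w.gid, w.the_geom
FROM pgr_connectedComponents(
       'SELECT gid AS id, source, target, cost FROM ways'
     ) AS c
JOIN ways w ON w.source = c.node;

-- 2. Buffer around the city point, 3. keep the components whose rivers
-- intersect it, 4. dissolve a buffer around those rivers: the "rain zone".
DROP TABLE IF EXISTS rain_zone;
CREATE TABLE rain_zone AS
SELECT ST_Union(ST_Buffer(r.the_geom, 0.05)) AS geom
FROM river_components r
WHERE r.component IN (
    SELECT DISTINCT r2.component
    FROM river_components r2, city c
    WHERE ST_Intersects(r2.the_geom, ST_Buffer(c.geom, 0.1))
);
"""

with psycopg2.connect("dbname=rainzone") as conn:
    with conn.cursor() as cur:
        cur.execute(SQL)
```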
So, this aligns with the affordable energy goal of the UN and when we use a minimum strengthly algorithm on the road network, it gives us the shortest or the shortest path, the optimal path where the electricity network can be laid and which reaches every locality of the city. So, this will increase the cost effectiveness and which will make the energy affordable. So, yeah, that's all from my side and you can of course find the whole work at this, at this website. Thank you so much. Supply is appeared. Oh, I had some questions for him. I hope that he will join again. Let's see. Yeah, so the session is finished. It was really interesting to see all these young students to work on open source geographical software and also in documentation. And I hope that you like this session. And I think that it was a really good idea to give the opportunity to the student to present their work. Okay, he's back. So, I'm sorry, you remove yourself. Okay, so actually the table. No problem. No problem. And I have a question. Short question. Okay, so one is coming from the from the audience. How is scalable in this workshop for other areas of study with respect to data usage for analysis. So the data used for this challenge was a sample data, which was taken from OSM. So, the data can be downloaded by changing the bounding boxes. And also, basically it takes into the OSM files. So you can use any bounding box of your required area. And this can be easily reproduced by following those steps. Okay. And, okay, you already asked for two one question. So the data are coming from open stream up. Yeah. And did you already know PG routing or it was the first time that you get in touch with the software. I had done a small project. It was for routing of emergency vehicles. And my grad studies only. So that I had my introduction to PG routing before. This was my second kind of project, but very detailed one. Yeah, we saw a lot of example and command. Yeah. Okay, so there are no other questions and the session is ended. So, I will close the live session and there will be a keynote speech now on the Malena Libman room. See you there. Okay.
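For the electricity-network exercise shown at the end, the optimisation Swapnil mentions is a minimum spanning tree over the road graph, which pgRouting exposes directly (pgr_kruskal, pgr_prim). A short hedged sketch, again assuming osm2pgrouting's default ways table; the exact output columns can vary between pgRouting versions.

```python
# Sketch of the SDG-7 exercise: minimum spanning tree of the road network,
# i.e. the cheapest set of edges that still reaches every intersection.
import psycopg2

MST_SQL = """
SELECT edge, cost
FROM pgr_kruskal('SELECT gid AS id, source, target, cost FROM ways');
"""

with psycopg2.connect("dbname=energy") as conn:
    with conn.cursor() as cur:
        cur.execute(MST_SQL)
        rows = cur.fetchall()

print(f"{len(rows)} edges in the spanning tree, "
      f"total cost {sum(cost for _, cost in rows):.1f}")
```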
|
The participants of this year's Google Summer of Code on OSGeo projects and the winners of the United Nations-OSGeo Education Challenge 2021 will present their work and their experiences contributing to OSGeo projects. This year, 12 students were assigned an OSGeo project to contribute to this Summer. Browse the projects at https://summerofcode.withgoogle.com/organizations/5336634351943680/ Two winners were selected for the United Nations-OSGeo Education Challenge 2021. Browse more details: https://www.osgeo.org/foundation-news/2021-osgeo-un-committee-educational-challenge/
Authors and Affiliations – Shinde, Rajat (1); Chauhan, Rahul (1); (1) Coordinator of OSGeo in the GSoC (OSGeo GSoC Organization Administration Team)
Speakers:
Francesco Bursi - francesco.bursi@hotmail.it
Linda Kladivova - lindakladivova@gmail.com, L.Kladivova@seznam.cz
Caitlin Haedrich - caitlin.haedrich@gmail.com
Aaron Saw Min Sern - aaronsms@u.nus.edu
Aniket Giri - aniketgiri770@gmail.com
Aryan Kenchappagol - aryan.kenchappagol@gmail.com
Sandeep Saurav - sandeep.saurav97@gmail.com
Sourav Singh - srvsingh1962@gmail.com
Ayoub Fatihi - ayoubfatihi1999@gmail.com
Ashish Kumar - ashishkr23438@gmail.com
Veenit Kumar - 123sveenit@gmail.com
Han WANG - hanwgeek@gmail.com
OSGeo-UN 2021 Challenge Winners:
Patrick Happ - patrick@ele.puc-rio.br
Swapnil Joshi - swapniljoshi@iitb.ac.in
Track – Community / OSGeo
Topic – Software/Project development
Level – 1 - Beginners. No specific prior knowledge is needed.
Language of the Presentation – English

|
10.5446/57299 (DOI)
|
Hello. We are back. Sorry for the inconvenience. We had a presenter not showing up. So we were not able to broadcast for the past two or three minutes, but we are back. And we are back for track hit and still found here. I was adding, welcome, Silvan. Thank you. Hello, everybody. Thank you. Thank you and welcome all for being here with me, with us for that presentation. Thank you. So Silvan is an engineer working at Oslandia, a French company offering services in GIS, open source, QJS, PostJS, Web3D, and he's been working in geographical systems development since 1998. So Silvan, the stage is yours. Thank you. Thank you, June. Thank you. First of all, I'm sorry for my English, which is not perfect far from that, but I would try my best for 20 minutes. A few more words to begin. I'm Silvan. I'm 46 years old. I've been working in the GIS software for 20 years now. I've seen many, many technologies and many problematics I had to solve with many tools over the years. But in a way, I'm considering myself sometimes like always a beginner because I know I've always got more to learn and discover. So as June said, I'm currently working at Oslandia. We work with open source GIS software, QJS, PostJS, 3D, data artificial intelligence and more. And so, as we said in French. Okay, enough about me. Let's talk about TeriStory, the story of the territory. Maybe we could interpret it like this. TeriStory is a website, a public website. You can go after the speech if you want to check out on teristory.fr. It's located in France and it deals with French data for now. It's an online tool for tracking and directs the energy and ecological transition of territories. The website is public. Anyone can go and check the data. But it is more appropriate for people who are in charge with that kind of data. Even the decision makers, the territory planners, maybe the politicians, some kind of people. The platform is built with open source technologies, of course, but the application itself is not yet open source. Soon there will be a release so anyone can deploy its own version with its own data. We talk about energy, not only energy, but in the main part. The earth is changing, the climate is changing, the natural resources are important for us and for the future. We have built societies around energy. We need energy. Maybe one of the most important things we have to care about. Energy data is geographic, where it is, how it evolves, what can we do with those data? We can observe, we can study, we can localize, we can manage, we can plan for the future. Before talking about the functionalities and techniques, let's talk about people around this project. Our energy environment is a French association acting on account of the Overn-Ronalp region to promote sustainable energy. ORAE, as we said in French, started the Territory project a couple years ago in order to put their data on the web. They are based in France, near Lyon. Many people work there and are our customers. Me, my colleagues, we built the whole project for the technical part. But we have also supported in a modern way the development of the platform with modern methods and I will show you that in a few minutes. Today, the project is running almost without us and we are very, very proud of that. About the project, the project on evolution, at the beginning, a few years ago, the website was a simple demo, a single page app with functionalities, a map, a legend, a small widget. It was a proof of concept. Question where will it work? 
What data can we display and manage? What are the perspectives for the future? In a small amount of time, we built the first version within 15 days. Now, years later, the proof of concept has been fully functional. We have still a single page app but with much more functionalities and much more people involved. It's a complete platform for data observation. We got dynamic maps, dynamic charts, simulation, scenarios, impact on local employment, dashboard and much more. At the beginning, there was only one actor, one region. Today, there is a consortium that has been mounted with many people, many regions to give you more specifics. There are 18 regions in France which are subdivisions of the country. France is about 1,000 km by 1,000. The project has become a national project. It becomes a reference platform from that kind of data observation. We can make an analogy with open source because the project is really similar or coherent with an open source project. Why is that? Because a simple project to a full platform, a single developer to multiples from various regions, a single founder to multiples, organized, a single actor for the roadmap definition to mutualize one. Of course, there is a lot more to do but we like to wait for the project to evolve in the time and we will continue that way. What can we do with Territory? First of all, we can access a map. I will show you some screen later. We can access a map of the region we choose. We can realize administrative limits. Once it is done, you can choose to run analysis on different themes around energy but also mobility or recycling for example. The analysis consists of a map representation with charts. The charts are pieced, bars, bars. You can apply many filters to allow to go far in the direction of the data and get precise and to get precise and useful information for your work. Sometimes data are confidential so the platform can easily deal with that and blur the data if needed. You may build dashboards for reports. You may build predictive data charts, very useful and powerful with your mission. If your mission is to observe a particular kind of data for a couple of years. This has been a long and complex part of the platform, the predictive parts. A quick overview of the architecture. As you can see, there are two main ways. The first one is the OpenStreetMap background processing. We take some data from a database OpenStreetMap. Import what we need and we have the vector type server, which is post-type, to process the vector types. Below, we have the main data processing. Many sources of data are imported in the PostgreSQL database. The import processes are custom made in Titan and also use a classic importer like SHP to SQL or simple copy from PostgreSQL. Many data are CSV file, Excel file. From the main database, data are processed to become also vector types and JSON or GSM for the front application. This is not a very complex architecture. It's a classic but we stick to the keep it simple and it works well. Let's talk about technology, the front size. As I said earlier, it's a single web app. We wanted the application to stay light, very light, clear and very intuitive. So people coming to visit the website are sometimes unaware of that kind of specific website. The front has been built with React.js. We used to use Angular at the time but that day we decided how we choose React. Sometimes when you choose a technology, it's just a matter of fashion as soon as the tech is open source and persistent, why not to try it? 
React is the main framework for the front. To display the map, we use open layers, I've been using that library for a long time and what I've got to choose between others, the flat for example, I still stick to open layers because I like it a lot. The charts are built with a chart.js, a simple chart library but very efficient. Then there are many, many frameworks for that and all is a matter of preferences. And of course, behind all this, there is a lot of all-media code to rely on. Here is a view of the application, a map, full page, the widget on the left side and on the right side, the charts on the bottom, very classic, very efficient. We have time at the end. I can show you live on the application. What you see there is an indicator about energy consumption. The big circle is Lyon here. Lyon is the third or second crowded town in France, so you can easily understand why the circle is bigger. A lot of people, a lot of energy consumption. The bias at the bottom are shown by categories, the repetition, the division of that consumption. We see that home and transport are the main sources of consumption. You see a yellow brown there. So you got a lot of analysis. You can choose to display data and get very practical information. The backside has been built with Python, which is a language we cherish. I think I was off for a little bit of time. I'm sorry. Python is the language for the backside. Sonic is used to build the main API. A simple framework, but again, very efficient. Besides we got utility libraries like Alambic, Pandas, PyTas, and of course again, a lot of custom code to serve the data to the front. For the data, the data is stored in Postgres with Poges to manage the special data. There are two ways to populate the data in the platform. First, an administrator can do it and load the most important parts. We've got local administrators. If they've got enough rights on the platform, they can load a small amount of data specific to their administrative zone. First is one of the most important things in the platform and the special care has been done for that. We've got a lot of check control to ensure the data is correct and there are many, many people checking that before the data goes on the production website. We've got three servers, development, test, and production. Front and back are all disconnected. There is back, it's just on rest API serving the data and the front gets the data by calling the API routes and receiving JSON or geochrism for the most part. Of course, we have some specialized routes for serving PDF files or images. But front and back could be interchangeable if one day a technology becomes obsolete. The application has been built in a very generic way and today we deal with energy, but tomorrow when the code is released, anyone can take it and manage any kind of data. It's very, very generic. It needs a conception. The background layer, we deal with maps, so we need background layers. Of course, open street map data with custom look, very light with a few colors, labels, in order to leave the efficient data, the main part of the map. We use vector types for the layer, very light and fast. Easy to use with post-style. We should use a PG tile server very soon in replacements. No Raster, no WMS, no WFS for the application because no need to, maybe in the future if we want to connect external services. So that's it for the technical part. Now for the project management, because it's quite interesting, we manage all with GitLab. At Sousia, this is our main tool. 
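For the back end, the pattern Sylvain describes (a small Sanic REST API reading PostGIS and returning JSON or GeoJSON to the React front end) looks roughly like the sketch below. Only Sanic, asyncpg and ST_AsGeoJSON are the real building blocks here; the route, table and column names are invented for illustration.

```python
# Minimal sketch of a Sanic route serving GeoJSON features out of PostGIS,
# in the spirit of the TerriSTORY backend (names and schema are illustrative).
import json
import asyncpg
from sanic import Sanic
from sanic.response import json as json_response

app = Sanic("terristory_sketch")

@app.before_server_start
async def setup_db(app, _):
    app.ctx.pool = await asyncpg.create_pool(dsn="postgresql://localhost/terristory")

@app.get("/api/consumption/<zone_id:int>")
async def consumption(request, zone_id: int):
    # One proportional-circle feature per commune of the requested zone.
    rows = await app.ctx.pool.fetch(
        """
        SELECT name, ST_AsGeoJSON(centroid) AS geometry, consumption_gwh
        FROM energy_consumption
        WHERE zone_id = $1
        """,
        zone_id,
    )
    features = [
        {
            "type": "Feature",
            "geometry": json.loads(r["geometry"]),
            "properties": {"name": r["name"], "consumption_gwh": r["consumption_gwh"]},
        }
        for r in rows
    ]
    return json_response({"type": "FeatureCollection", "features": features})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```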
We use it for our internal management and, of course, for our project. For the code versioning, we use tickets to communicate with the customers. We have made a lot of teaching with our customers about that. It was communicating a lot by email at the beginning. It was resulting in a very, very confused management. We spent many, many hours to teach the modern ways to follow a project. And today, a few years later, the customer used GitLab as much as we, we are developers, to do. And sometimes in a better way than us. We are very, very proud of that evolution. It makes the project development even more, more enthusiastic. So our tickets are now the exclusive way to share information. Of course, we talk on the phone to see each other if needed. We use the broad view. In France, we say Camban a lot. And of course, the continuous integration used at its full potential. We can focus on the code, on the functionalities, and leave the admin system be automatic. This is very, very useful for me because I'm very allergic to system operation like deployment and so on. I like to code the functionality, but the deployment, if it's automatic, it's okay to me. We've got three servers and we save a lot of time by using those tools. Now and tomorrow, the next big step is to release the source code. That won't be an easy task. Why that? Because as the project becomes bigger and many people are involved and make decisions and give money, they want their functionality as soon as possible. There is a new release about each month. And sometimes we have to hurry in order to stick to the schedule, decide the consortium. If you are web dev, you know that hurry may be not good for the quality of the code. So the consequences are we must sometimes do the code refactoring in order to keep a level of quality. So for the open source release, a big review of the code will be made and we need some time for that. The people in the consortium don't always see or understand the difficulties we can have building the application. Again we communicate a lot with them to explain why building a new functionality may take a week to code. What to think about this project? I think it's very unique because we deal with a lot of projects at Auslandia, a lot of different customers, a lot of different people, each one is different. Sometimes it goes well, sometimes it goes a bit wrong, sometimes it's great. And with TeriStory, it has been great from the beginning till now. That's why I'm talking of that project today in front of you. Of course we went through problems and very painful days while developing the platform but I would say it's price to pay for a good result. We are very proud when a project goes that way and becomes a really successful one. We can't wait to have the code release open source so we can use it for other data values projects. I could say much more about the project but I think the time is gone and I will be there for any question. So I thank you for being here with us. Quickly when you ask some questions, if you have some questions, I can show you the application live and thank you. Thank you. Thank you, Sylvain Merci beaucoup. So there are a couple of questions there in the venue so we can start with them. The audience may have other questions. Thank you very much for the update and the background. So the question is, I know this project has been started before it was first released but do you think PG Feature Serve could have been used to replace part of the back end with as well functions? 
Yes, I was saying a few minutes ago that when we chose the technology behind the project, it was a matter of preferences or knowledges. So at that time, if PG Feature Serve has been ready, maybe we could have chosen this one to start with. And now we are not close to get a new replacement framework. So tomorrow if we don't want to work with Sanyq anymore, why not use PG Feature Serve to do this? I was talking about POSTile. POSTile will be replaced by another modern framework to serve the type. So yes. Thank you. There are other questions regarding the functionality of the interface. Some of them may be there, some of them maybe we're planning to implement. I don't know. But how can reports be shared? So can we export it through PDF or can we embed the map in another website, etc.? Is it possible to link other data sources, other online data sources, use other online data sources within the system and also multi-language support? So are there something that's planned? So for the first question, yes, we can, the report can be shared. We can export as PDF, as images and the program all is built in the platform. You can also prepare a map like the one you see on the screen and share the URL and the people receiving the link will have that map and can include it in another website. So sharing data is the most important thing in that platform. We cannot yet link online data sources, but we will in the future. And multi-language is not yet because it's a French project. But when the code will be released, open source, I guess we will implement that possibility. So anyone can connect and use the platform for any kind of data in any kind of, in any languages. So yes, we will do that in the future, I guess. Okay, thank you. It looks very interesting and useful. So thanks a lot for the work and thanks a lot for the efforts and thanks a lot for sharing your studies and your work here. So as quoting Bolivia, it's very inspiring. So if you have any additional comments, you can go ahead because we have some time. So if you have additional comments, you can do so. Okay. I don't have particularly, but I can navigate in the website. You can just watch me. Okay, so we have two or three minutes for that. Let's go ahead. Okay. As you can see, we are in the southeast of France. Lyon is there. I don't know if you know Marseille over here, the Mediterranean Sea here. And the first region was that one. And you can see a subdivision of the region. You have to choose an administrative zone. So you can take a region with department, region with commune. I don't know in English how we say commune. Never mind. Towns. So you choose a division and then you go in the analysis menu and choose an analysis to launch. Most of the analysis are, sorry. Connection is not so good when I do some screen sharing. So sometimes I've got some problems. Okay. Most of the analysis will show you some circles, you know, proportional circles like this. Because it's the most efficient way to show quickly some data. So for the most part, you will see a map like this with a pie like that on the bottom. But we've got on other analysis some donuts or some bars or some Instagrams. We talk about energy, but there is also data about mobility, pollution, climate, and very, very a lot of themes to use the data. I say again, the platform is very generic. So when it would be a release open source, you can take it and inject your own data of any kind and it will work. 
So go to check terristory.fr if you want to have a look and you can manipulate and see. Of course there is a connected mode, but it's reserved for the administrators. I think it's maybe time. So I thank you again. Thank you, Sylvain. Thank you. Thank you. You're welcome. Thanks a lot for that presentation. And if you have any further questions or if you want to follow up with Sylvain, just feel free to find him in the venue, this platform, and send a message. So I think he will be able to answer any further questions coming from you even after this session. So thanks a lot. Thanks a lot for that great presentation, Sylvain. Thanks. Have a good afternoon. In France it is afternoon. Yes. Bye-bye. Goodbye. See you. Thanks for the rest of the conference.
|
Terristory is a web platform providing a sustainable energy observatory oriented towards decision-makers and territory planners. AURA-Energie Environnement is a French association acting on account of the Auvergne-Rhône-Alpes region to promote sustainable energy. Aura-EE started the Terristory project a couple of years ago, in order to put their data on the web. Energy data is geographic by nature, and one of the main aspect of managing energy is being able to observe its characteristics on a given territory. From a simple data viewer, Terristory evolved into a full platform for data observation. Dynamic graphs have been added, and some advanced features like : - create scenarii on Energy equipment ( e.g. build a methanizer ) - impact of decisions on local employment Terristory is based on OpenSource software : PostGIS, Python, OpenLayers, Vector tiles… The full code for the Terristory platform itself is opensource and will be published publicly in 2021. Terristory was initially funded by a single actor and deployed in a single region. In 2020, the project accelerated : it evolved into a consortium to support the platform and deploy it in other regions. This evolution made Terristory a national project, and a reference platform for energy data visualization. This mutation is interesting on multiple levels, as it is totally coherent with an opensource project : - from a simple project to a full platform - from a single developer from a single company to multiple developers from various origins - from a single funder to multiple funders organized as a consortium - from a single actor for roadmap definition to a mutualized roadmap This transformation makes the project's history and experience unique. The battle for climate is open, and platforms such as Terristory have a strong role to play. It should be an inspiration for any project oriented towards opensource, opendata and resource mutualization. Authors and Affiliations – Sylvain Beorchia, Oslandia Vincent Picavet, Oslandia Pierrick Yalamas, AURA-EE Track – Use cases & applications Topic – Data collection, data sharing, data science, open data, big data, data exploitation platforms Level – 2 - Basic. General basic knowledge is required. Language of the Presentation – English
|
10.5446/57300 (DOI)
|
Okay, so hello Bay 1, welcome to First 4G 2021. Today is Wednesday and this is the Concava Throne. For ending the day, we will be having a couple presenting eco-ecovaluator, basic ecosystem service valuation for custom landscapes. Is that right? Yeah, everything is set now. Okay. Is it time to start? Yes, sorry. Go on. Okay. Thank you. Hi everybody, thanks for being here today. I'm happy to be here presenting at the Phosphor G conference. So thank you all for attending my presentation and I'll give a quick hello in Spanish too. Hola a todos. Estoy muy contento de estar aquí con ustedes y muchas gracias para atender mi presentación. So my presentation is titled eco-evaluator, basic ecosystem service valuation for custom landscapes. And my name is Eric Perper. So just a really quick introduction to me. So, like I said, my name is Eric and I live in Virginia, Charlottesville, Virginia in the United States. So I'm on the east coast of the United States. And just to give you an idea of where I am. So Charlottesville, Virginia is the yellow dot here. And we're about two and a half hours drive from Washington, the capital, Washington, DC. So anyway, I work at the University of Virginia, but this product, this presentation is really not related to my day job at all. So we've got a long GIS history, then the GIS user for about 15 years now and gotten more into the FOS 4G community over time. And right now I love it. And like I said, I'm happy to be here with all of you. And when I'm not at work or doing a side project, you can find me rock climbing or skiing. So I don't work for Key Log Economics, but my friends and colleague is the principal at Key Log Economics. And I suppose I was a consultant in this role in this project. But anyway, Key Log Economics is the sponsoring organization of this plug-in for QGIS, which I'll talk about next. Just to talk about Key Log a little bit, there are an environmental economics consulting firm which basically tries to prove that saving the environment makes economic sense. So they're working on projects around the world really. And like I said, trying to prove to stakeholders that it is in their best economic interest to save the world or save, save nature, I should say. So EcoValuator is basically a Python plug-in for QGIS. I won't be really talking about the code or the programming because I don't think that's really the interesting part of this project. But this is freely downloadable for any QGIS user. So I'll provide a link to that later. And this project was funded by the National Fish and Wildlife Foundation, which is a United States-based governmental organization. So basically, the EcoValuator plug-in is a simple means of estimating the dollar value of ecosystem services in a study area. So that might beg the question, what are ecosystem services? So basically, ecosystem services are the benefits that people obtain from the natural environment. And I should say these are the economic benefits that people obtain from nature. And there are quite a few of them. They are broken into several categories, such as provisioning services, regulating services, supporting services, and cultural services. So as of right now, the extent of our, I guess the geographic extent of our plug-in is limited to North America. So I'll explain more about how to use this, but basically we start with one of two land-use, land-covered data sets. 
So at the moment, those are the National Land Cover Data Set, or NLCD, which is basically land-use/land-cover classification for the continental United States. And that's here on the left in the white background. And the other land-cover data set is the North American Land Change Monitoring System. And that includes the United States, Canada, and Mexico. So each one of these data sets basically breaks their area of interest into individual pixels, and each pixel is classified with a land-use or land-cover type. And there are about 20 land-cover types in each one of these data sets. And some of those, like you can see here on the right, urban areas are classified in red, in this case. Forest areas are in green. There's desert areas, you know, Arctic tundra, and everything in between. So the EcoValuator plug-in is a three-step process. So the first step of the process does this, which is it estimates the ecosystem service values for your study area or for your study region. So you start with, let me back up for a second here, you start with a raster data set, which is either the National Land Cover Data Set or the NALCMS. And then also just a vector layer, a vector polygon of your study area, which could be as large or as small as you like, as long as it's in North America at the moment. So a few things that step one does: first of all, it clips the input land-cover raster to the extent of your study area. It also calculates how much area of each type of land cover is present in the study area. And then it also multiplies those areas by each of the associated per-hectare ecosystem service values. So let me show you that in action. So in my little example here, our study area will be Albemarle County, Virginia, which is the county that I live in. So just to give you some frame of reference here, this is the extent of the study area. And the city of Charlottesville is in the middle of it. So as I said, in step one, there are a few results. So first of all, we start with the National Land Cover Data Set or the North American Land Change Monitoring System data set. And it clips that to the extent of your study area. So in the event that my study area is Albemarle County, you can see that it is clipped there. So in this study area, you can see that there are some urban areas in the red, some forest land cover types in the green. The light green, or light green yellow, is like farmland. And then there's a few others in there. So that is one output from step one. Another output from step one is a table, as you see here. So basically this, so as I said, this clips the raster area to your study area. So this classifies, first of all, the study area, the, sorry, the land cover types that are present in your study area. And it gives you a size or, you know, the amount of land in hectares of each one of these land cover types. So for the grassland slash herbaceous land cover type, there are 4,166 hectares in our study area. And then each one of the ecosystem service values or ecosystem services available in that study area is represented. And then it gives you a minimum, a maximum and an average estimated dollar value, which is the per-pixel worth of each one of these pixels by the ecosystem service. So I'll talk more about that next because you might wonder, where do these dollar values come from? Who decided on these? So full disclaimer, I was not a part of this part of the process.
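Before moving on to where those dollar values come from, here is a rough illustration of the Step 1 logic described above. This is not the plugin's actual code: it is a minimal Python/numpy sketch that assumes the land-cover raster has already been clipped to the study area, and the class codes, pixel size and per-hectare values are made up for the example.

    # Not the EcoValuator's real implementation -- a minimal sketch of Step 1,
    # assuming the land-cover raster is already clipped to the study area.
    import numpy as np

    # Hypothetical clipped raster of NLCD/NALCMS class codes (e.g. 41=forest,
    # 82=cropland, 11=open water) and the area of one pixel in hectares.
    clipped = np.array([[41, 41, 82],
                        [41, 82, 82],
                        [11, 11, 82]])
    pixel_area_ha = (30 * 30) / 10_000        # 30 m pixels -> 0.09 ha each

    # Illustrative per-hectare values (USD/ha/yr) for one ecosystem service;
    # the real plugin reads these from its benefit-transfer research table.
    value_per_ha = {41: 305.0, 82: 120.0, 11: 980.0}

    classes, counts = np.unique(clipped, return_counts=True)
    for cls, n in zip(classes, counts):
        area_ha = n * pixel_area_ha
        total = area_ha * value_per_ha.get(int(cls), 0.0)
        print(f"class {int(cls)}: {area_ha:.2f} ha -> about ${total:,.2f} per year")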
This is the domain of economists and I'm not an economist, but this information was gathered or created using the benefit transfer method. So the benefit transfer method provides an accessible way of estimating the value of an ecosystem service flows based on the values estimated in another similar setting. So what the people at Keylog Economics did was that they referenced governmental documents, scholarly publications, etc. That performed ecosystem service valuation in similar regions and similar land cover types around the world and they use those values to extrapolate the values of any given ecosystem when you're using the eco-valuator plugin. So step two of the process is mapping the value of individual ecosystem services. So as I said, you take the output from step one and create a new raster for which the value of your study area is represented per pixel for the user selected ecosystem service. So as I said, there are quite a few ecosystem services that you can choose from. There are, I think, 15 or 16 of them. So basically the output of step two looks something like this. So I know the one on the right here is a little bit hard to see with the colors. But anyway, if we're evaluating the per pixel value of the air quality ecosystem service, the output looks like this. So basically these are categorized from light to dark in a graduated color palette. So the white areas are low. Basically they're not valuable for the purposes of air quality in our study area. The darker values are more valuable and then the shades in the middle are somewhere in between. So if you're unfamiliar with the geography of this area, basically, like I said, the city of Charlottesville is the main city. That's sort of in the middle and there's obviously fewer trees and things like that. And these areas over here in the dark colors are less human developments. These are forests and things like that. Concerning the biodiversity value per pixel, it appears that basically rivers. So there's a river that runs through the study area right here, which you can see. There's also a river on the southern border and then a few lakes and stuff like that in the middle. Those are more valuable for biodiversity. And then step three of the process is that it creates a print layout and exports the layout as a PDF. So in short, you can make a nice map with just a few clicks. And obviously if you are a, you know, QGIS user or have some cartography skills, you can make your own output. But the purpose of this is that somebody who does not know how to use the print composer and QGIS can just input a little bit of data and outcomes, a nice looking map. So this is the template for what it looks like. As you can see, I took the average, the air quality results from step two of the process and created a map with the print composer. So you can see there's a, you know, basic things that you see on every map like a title and subtitle, a legend up here. So one thing that I maybe should have said earlier was that the range of values represented by the ecosystem service value you chose and then some other factors that is divided into quintiles and then each quintile is represented with color from the color palette. So as you can see, the dark colors are more valuable. The light colors are less valuable concerning air quality in this case. And then a little bit of credits tax there. So future development we'd like to do, this is an ongoing process. It's been ongoing since 2018, I believe. And I've been a part of it since then. 
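And, looping back to Steps 2 and 3 for a moment: the same lookup can be applied to every pixel to get the per-pixel value raster, and quintile breaks over it give the five classes used for the graduated legend. Again this is only a hedged sketch with invented numbers, not the plugin's code.

    # Rough sketch (not the plugin's code) of Steps 2-3: reclassify the clipped
    # land-cover raster into per-pixel dollar values, then derive quintile breaks.
    import numpy as np

    clipped = np.array([[41, 41, 82],
                        [41, 82, 82],
                        [11, 11, 82]])
    value_per_ha = {41: 305.0, 82: 120.0, 11: 980.0}   # illustrative $/ha values
    pixel_area_ha = 0.09                               # 30 m pixels

    # Per-pixel dollar value for the selected ecosystem service.
    value_raster = np.vectorize(lambda c: value_per_ha.get(int(c), 0.0))(clipped)
    value_raster = value_raster * pixel_area_ha

    # Quintile upper bounds over the valued pixels, as used for the map legend.
    breaks = np.percentile(value_raster[value_raster > 0], [20, 40, 60, 80, 100])
    print(value_raster)
    print("quintile upper bounds:", breaks)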
So at the moment, QG, or sorry, Key Log Economics has expanded into Vietnam. So they're working more in Southeast Asia in general, but particularly in Vietnam. And they recognized that Vietnam had unique opportunities in the environmental economics arena. So actually my colleague, the principal of the company, has moved there and set up shop in Hanoi and they're doing projects in and around Vietnam and Southeast Asia. So naturally we'd like to add other land use/land cover datasets that are relevant to Vietnam to this project so that basically somebody can get a quick estimate of the value of a study area if they're working in Vietnam or elsewhere in Southeast Asia. And then we'd also like to continue to make it more robust for future versions of QGIS. We are lagging a bit, and currently the project is developed for version 3.10 of QGIS. And I know that the current long term release version of QGIS is 3.16. And I forget the development cycle, but I'm sure that a new long term release version is coming out soon. So we'd like to make it relevant in the future. And I'd like to acknowledge my colleagues that I've worked with on this project. So first of all, at Key Log Economics, Spencer Phillips and Anna Perry in particular, and then also my fellow developers, Phil Ribbons and Elliot Kurtz. None of us were really skilled developers in the beginning and I'm not sure we can say that we are now. But we learned a lot about plugin development along the way. So it was a really fun learning opportunity for us. And then I'd also like to acknowledge the FOSS4G community. So all of you that are here today, I feel lucky to be a part of such a great community. And this presentation is my small contribution to the FOSS4G community. So thank you all for being here and for attending. So that is the end of the presentation. I've got a few links if you'd like to check out more about the project. So we've got a link to the Key Log Economics homepage and then also the GitHub repository. And if you'd like to contact me, you are free to send me an email. Thank you. Okay. Thank you very much for your talk, Erich. It was very interesting. And now we will have some time for questions. So the first question will be, what do you think about the state of the documentation? The state of documentation regarding the plugin or, like, Python for QGIS? I think the plugin. For the plugin, you said? One moment. If the, well, I'll give you a minute for the person to respond. If the question is regarding the state of the PyQGIS documentation, I will say that it has advanced greatly. So the timing was kind of bad when we started the first round of development on this plugin. Basically QGIS 3 had just come out and naturally in an open source project like QGIS, the PyQGIS documentation, which is the Python, the QGIS Python API for QGIS 3, it was not very developed at that point. And I will say that it has come a long way. So thank you very much to all the developers who have contributed to that. All right. Thank you. We have another question. When will the plugin be in the general plugins menu? It is in the general plugins menu. You might, so there, I don't have QGIS up at the moment, so this is, I guess, how they refer to it, an experimental plugin. So if you go to the plugin wizard, and there's a box in the settings, I think, that says show experimental plugins, that should allow you to find it. I'm sorry that I don't have QGIS up in front of me at the moment. No problem. Let's go with another question.
How does the benefit transfer, I think I should copy it. How does the benefit transfer build look like? How was it fed into your building? And how does it relate to that process a bit more? Thank you. Sure. I'll go back in my slides here. Okay. Let me. So this is something that I am not that knowledgeable about. So as I said in the presentation, this is an accepted method in these field of economics for estimating the value of ecosystem services. Actually, sorry, let me go back. Let me go back to my finishing slide. Take your time. We'll have enough time. Okay. So this first link here has more information about the benefit transfer method. So since we have a few minutes, I will go there right now. Let's go here. So we have a write-up page about this plugin, and there's a few paragraphs here about the benefit transfer method. So to be honest, as I said, I was not a part of the, basically, the process of creating those dollar values for each pixel in the ecosystem service values. But hopefully this will allow you to learn a little bit more about it. I just, I was just the messenger in this case, so I accepted the numbers that they gave me for each ecosystem service value, and I just use them. So if you'd like to talk more about that, I can. So first of all, this documentation page here will hopefully give a little more information. And then also, if you'd like, I can refer you to the people at Keylog Economics to talk more if you'd like. Okay. Thank you very much, Eik. I think there are no more questions. So I think that we can, well, there is another one. We still have time. Sure. First, well, there is two questions in one. First one is, to what extent do the economic estimates rely on local values based on regional values and based on national and international values? Right. So this is another question I don't feel like I can really answer. So as I said, let me scroll back here. So as I said, these, so if you run the first step of the plug-in, it gives you an estimated dollar value for each pixel for each ecosystem service. As I said, I was just given these values. I am not an economist, and I do not feel qualified to answer that question. So I will say again, what I just said about the last question, please feel free to read up a little more about the benefit transfer method at this site. And then also, I'm happy to put you in touch with the people that work at Keylog Economics, if you'd like to talk more about that. They have PhDs in the field of economics and are much more knowledgeable than I am. Alright, then that would be all. Thank you very much, Eik, for your talk and your answers. Okay, thank you very much.
|
The EcoValuator plugin provides a simple means of estimating the dollar value of recreation, water supply, food, and other key ecosystem services for a given study area. Once installed in QGIS, the tool combines satellite land use/land cover data with your own spatial data describing watersheds, conservation areas, or other areas of interest. The EcoValuator then does the work of estimating land area in each land cover type present in your region using the Benefits Transfer Method (BTM) to generate dollar value estimates of the value of the ecosystem services supported by the land use/land cover present in your region. Ecosystem services are the many benefits to humans provided by the natural environment and healthy ecosystems. This definition emphasizes that ecosystem services are the effects the environment has on people, but it is not just what those effects are that matters. It is also where the effects occur. The “where” is especially what drove us to create the EcoValuator tool in order to better understand ecosystem service effects in our area of interest. The EcoValuator tool is a QGIS plugin which uses publicly available land use/land cover data to predict the value of the user’s study area. Currently, the plugin supports datasets from the North American Land Change Monitoring System (NALCMS), which covers Canada, Mexico, and the United States, or the National Land Cover Dataset (NLCD) for the US only. EcoValuator does this using the Benefits Transfer Method (BTM) to generate dollar-value estimates of the ecosystems supported by the land use/land cover present in your study area. BTM provides an accessible way of estimating the value of ecosystem service flows in your study area based on the values estimated in another, similar, setting, called the “source” area. BTM is a practical policy analysis tool when time and resource constraints prevent more involved methods. In the EcoValuator, we employ a version of “unit value transfer” and apply estimates from source studies to the user-defined study area based on matching land cover in both areas. We began with an initial list of more than 1,200 specific estimates of the monetary value of specific ecosystem services arising from specific land cover types and have classified the source studies according to each ecosystem service and have adjusted the monetary values for inflation. The EcoValuator is a pair of algorithms the user runs in sequence. In Step 1, the input land cover data is clipped to the user-input study area. The amount of each land cover type is calculated and multiplies those areas by the associated ecosystem service value in our input table of research data. Step 2 creates a new raster for which the value is represented per-pixel of the user-selected ecosystem service. Step 3 is optional and creates a nice looking final output in .pdf format. Though this project currently focuses on North American ecosystems, we plan to expand the use to accommodate land use/land cover data from other regions in the world, principally southeast Asia. Authors and Affiliations – Erich Purpur - University of Virginia. Charlottesville, Virginia, United States Track – Use cases & applications Topic – FOSS4G implementations in strategic application domains: land management, crisis/disaster response, smart cities, population mapping, climate change, ocean and marine monitoring, etc. Level – 2 - Basic. General basic knowledge is required. Language of the Presentation – English
|
10.5446/57301 (DOI)
|
Well, as I said before, welcome everybody. We're going to start with the session. Today the first one is about 3D geo-applications with CesiumJS: data, possible uses, use cases and specifications. We have Till Adams here who is going to explain and speak about it. Till is founder of terrestris. He joined the OSGeo community right from the beginning. Actually, he works as a consultant for his company. And also, Michael Holthausen is a developer at terrestris. He works mainly in the field of 3D data visualization and runs several projects with Cesium as a core library. If you have any questions, you can just put them in the chat. Till is here. So after his presentation, he can answer any questions. Located in Zurich. We are doing software development for QJS. It deals with 3D geo-applications. The core we're going to talk about is the JavaScript library CesiumJS. We're going to talk about the data, some possible use cases and specifications. First of all, I want to talk a little bit about us, the presenters, and also have one slide about the company we're coming from. Then I give you a general overview about the CesiumJS library. I'm going to talk a little bit about standards and 3D tiles. And in the last part of my talk, I'm going to talk a little bit about what people did with CesiumJS and what we did with CesiumJS. First of all, probably some of you might know me. I had the honor to chair the global FOSS4G conference 2016 in Bonn. And in my job, I'm a shareholder of the company terrestris. I founded the company in 2002 with my colleague. And I work mainly as a consultant and also agile coach. Let me also present Michael Holthausen. The talk I'm going to present today is mainly based on the script that Michael used. He presented the CesiumJS library on the recently held FOSSGIS conference, which is the German-speaking OSGeo local chapter. I proposed him to talk about CesiumJS on FOSS4G as well. And then we figured out that probably my English is a little better than his. So I'm going to present the talk here. But so that you could see at least his face. Michael is a geographer. He works, I think, nearly for two years now for terrestris as an application developer. He's mainly involved in our 3D projects. So he's a real expert. A short slide about terrestris. As I said, I founded the company in 2002. And right from the beginning on we started to offer open source GIS. So we understand ourselves as a service provider. We come from Bonn in Germany. At the moment we have 23 people in the company and two shareholders who, like me, actually work in the company. Mainly we have developers. And our developers are also involved in other OSGeo projects or open source projects. We work on OpenLayers and GeoExt. You heard the talk about GeoStyler from a colleague, which is an open source library that formerly had its origin at terrestris. And also other open source projects, like deegree, we do some stuff in GeoServer and so on and so on. Okay. And of course, beside the normal 2D web mapping applications, we have quite a few projects in the 3D world. And that's what I'm going to talk about now. So first I give you a general overview. What is CesiumJS? First you can say it's a virtual globe. Or better said, it's a 3D software model for the representation of the Earth or even other planets. You can freely move in a virtual environment. You can navigate to any point of interest. You can change the viewpoint from where you look at the globe.
You can of course zoom in and out at different scales. So generally you can say Cesium offers a simplified model of the real world, like any map more or less does. But you have the possibility to display more precise details through zooming in. So the level of detail can increase if you zoom in. And we're going to see how to do that later in this talk. Of course you're able to display process data in form of map layers on top of each other, like you know. So in general you can say CesiumJS is for the 3D world a little bit comparable to what open layers is to the 2D world. Though of course the projects are kind of different from each other. I've loaded a CesiumJS application here just to give you an idea about what I'm talking about. So you can zoom in, zoom out, you can browse the globe with a mouse. You can even switch to a 2D map. Then you can switch back again to the 3D map. And if you look here, that's I think more or less a place where we all would like or love to be in the moment. And even just to show you that you can switch layers. So we are now in Buenos Aires and I changed the autofoto view to the open layers based layer. I got to show some more examples later on here, but this just to give you a short impression about that. CesiumJS is released under the Apache 2.0 license. It's published via GitHub, so you can download the source code or you can download NPM packages directly use it from there. Contributions are welcome and possible, although the project is mainly steered by one company. So this is a little bit different to normal OSDU projects because they are the core developers and there's quite good community support you will find on the website there. From technical point of view, CesiumJS is based on JavaScript, on WebGL and HTML5. And JavaScript and HTML5 you might have heard about. And WebGL is a standard. So I think the actual standard WebGL 2.0 was released around 2017. Released by a company called Chronos Group. They also released the other standards. I'm going to talk about them later a little bit more. And in general, WebGL allows a hardware accelerator 3D rendering via JavaScript. It's based on the proven graphics API OpenGL. And the good thing is it's supported by all browsers on the desktop and also on most mobile browsers. So let's talk a little bit about the browser requirements. As you can see, nearly every browser supports WebGL. And so they support WebGL. So they use Web standards. And then Cesium, because Cesium also is based on WebGL as I said before. So it could be used in all modern browsers. And also, CesiumJS supports screen operations like the panning, zooming with your fingers on the screen. And so you can use Cesium-based applications on mobile devices as well. Some words about the state of the development. I would say it's really a major library. We used it in several of our projects. And it's really working rock solid. It's well documented and all around an open source project. There's a community. I listed up some highlights. So there's a connection or a combination between Cesium and Unreal Engine, which is an engine that mainly is used for really displaying really detailed 3D worlds. It more comes from the gaming industry. But it's also an open source tool. It's really a well documented API that allows to change the state of the globe. So what I did before with my mouse, you also can do by scripting. 
There's a basic support for rendering in the underground, which is really feasible for us because most of the 3D projects we had in our company deal with the geological underground. So for instance, there's a feature that hinders Cesium from displaying the surface as a barrier so you can look through the surface into the underground when you look from above the ground. The back face culling is supported. That's really technical stuff. But I heard that it really helps to accelerate the load of the tiles and controls which tiles are loaded and which not needs to be loaded. And they migrated it from AMD to ES6 module in 2019, which really led to a far smaller package you have to load. JavaScript package you have to load in the browser. Another cool feature is CZML, which is kind of a Cesium language. It's a JSON based format. It's very similar to KML actually. And it allows to describe points and surfaces and models and other base elements. You can display them on the globe. And also you can describe a spatial, temporal, dynamic graphical scene. So for instance, you're a bike rider, a bicycler. And when you do your tour, you plot your points with your GPS device. And afterwards you have the coordinates with the GPS device and you have the time steps. And then you can display your tour live on the globe. And you can see yourself moving around your route. Cesium allows efficient streaming and it's really easy possible. Of course, it's JSON. And it's mainly made for display in the browser via Cesium.js. Another thing to mention here, there is a platform called Cesium Ion, which is a kind of cloud-based streaming service for Cesium.js. It delivers a terrain dataset. It allows to create your own assets. And very important, we're going to talk about that a little later. It allows you to convert data into 3D tiles. Which brings me to the part standards and 3D tiles in our talk. Of course, you're able to display any normal GIS data like WMS, WMTS. You can display GeoJSONs, shapefiles, KML, or CZML-based formats. The problem a little bit is with the 3D models. Because normally the formats in which you get the 3D models, they're more kind of exchange formats for further editing. They are not optimized for web display. And for that, the Kronos group that already created the WebGL 2.0 standard, they created a new standard which is called GLTF. It's a little like a kind of JPEG for 3D. That's what you could understand under that. The problem is with these GLTF files is that you put in all details into your tile. And if you would put that into your application, then on every zoom level, any detail should be loaded. And this is not what we really want to, because that really would slow down our application. And for that, there is a format called cesium 3D tiles. 3D tiles is an OJC community standard. So it's an open specification. And it's made for streaming comprehensive 3D data. So you're able to display 3D models like buildings, trees, point clouds. And 3D tiles have a hierarchical level of detail, which is really, really important. If you look at that picture, on the top left, you see a scene of a city somewhere. And of course, if you have this zoom level, it's not needed that you load any detail into your browser. That's the state. And if you look at the top right image, you can see that all the scenes in 3D tiles are separated into these red-layered blocks. And if you really zoom in, then cesium only loads the blocks that are really actually needed for display. 
And on the picture on the bottom, you see that many more details of the buildings are displayed than. So this is really an optimized thing to display 3D models in the web. You can see on that building here, which is a church somewhere in France, which consists of a point cloud with an additive refinement. You also can separate these into these red-labeled blocks. And the further you zoom in, the more points of the point cloud you will see. And that allows you really to fast display really detailed 3D objects via cesium.js. I'm going to talk a little bit more about data and examples at that stage. So first of all, I want to talk a little bit about the availability of data and talk a little bit about a pipeline. So most people who hear 3D data think about CDGML. That's the format where public administration holds their 3D building data and stuff like that. And the problem is you're not really able to display CDGML directly in cesium. So what you need is a kind of pipeline that converts the CDGML data to these 3D tiles I was talking about earlier. And unfortunately in the moment, there's just one pipeline based on the cesium ion server I was talking about a minute ago. So you can go there with your CDGML data, convert it there. The disadvantage is that you have to leave the data on the server and when you exceed a certain amount of storage, you have to pay for that. But we have another idea on that. I'm going to talk about that a little later. First of all, I want to give you some ideas about possible use of cesium.js. So there's an application where monitoring of radioactivity in a seawater is done by the ERAR. There's also another application for identification of vegetation overlaps around power poles. I think a really nice application is FlightRider24. Probably you've seen it. You see on the globe, you see live tracked air traffic flying around the globe, which is really impressive. There's also a 3D data portal in Switzerland, which displays the geological data in the web. And many more examples you can see on the user stories are linked here in the slides. And even more examples you can find at the sandboxes, where you really can find a lot of examples and code snippets and stuff like that. Finally, I want to talk a little bit about what did we do with cesium.js. And as mentioned earlier, we had a lot of projects where we were requested to display 3D underground data, geological data, surfaces, boreholes from drillings and stuff like that. We also used it to visualize some surrounding pictures and we used it also for a 3D display of WMS data. The problem we had in the recent project actually was also to convert our data into 3D details. And in that project, the user of the cesium client, base client, should be allowed to upload his own data. So they had X, Y, Z data from any project where they detected underground stuff or boreholes or whatever. We developed the possibility to upload the data. And from the upload, we put the data into a post-GS database where we converted it to the more feasible EPSG 4978. And then we used a library called Py3Dtiles, which converts the data into 3Dtiles. We exported the 3Dtiles. There we generated the tilesets, the blocks you've seen earlier, and also wrote some metadata and wire these 3Dtiles. We displayed the data, the user uploaded them in the cesium client. We developed this deploying chain. I know it was a lot of pain to develop that because I was actually not involved in that project, but I heard it was really hard to figure out how to do that. 
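To make one link of such a conversion pipeline a bit more tangible, here is a hedged sketch (not the project's actual code) of the reprojection step: points are brought into the Earth-centred EPSG:4978 frame that 3D Tiles expects before a tiler such as py3dtiles takes over. It assumes lon/lat/ellipsoidal-height input (EPSG:4979); a projected source CRS would need its own transformation, and the exact py3dtiles command line depends on the installed version.

    # Sketch of the reprojection step of an XYZ -> 3D Tiles pipeline.
    from pyproj import Transformer

    # WGS84 lon/lat/ellipsoidal height (EPSG:4979) -> Earth-centred XYZ (EPSG:4978).
    to_geocentric = Transformer.from_crs("EPSG:4979", "EPSG:4978", always_xy=True)

    points = [
        (6.57, 52.05, 12.3),   # lon, lat, height -- made-up sample values
        (6.58, 52.06, 15.1),
    ]

    for lon, lat, h in points:
        x, y, z = to_geocentric.transform(lon, lat, h)
        print(f"{x:.2f} {y:.2f} {z:.2f}")

    # The reprojected points can then be tiled, e.g. with py3dtiles' command-line
    # converter (subcommand and flags vary between versions -- check its docs).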
But it's working well and fine up now. Yeah, I want to show you some more examples here. This is just a switch from 2D through 3D view you've seen before. I think this is a regular globe and a map on it. Here is another application we have. These are European land cover changes. Of course, it's also possible to move around through time-based layers. So for instance, if you have satellite images and process them wire timescales, you can display that. You can see here the changes. This is a 3D model which works with the 3Dtiles. You see you can move in real fluidly in the browser here and display more and more details. Of course, it's also possible to label them a little bit more. We go from Germany where we are located here to zoom in a little bit. We have an example from Cologne. Probably you heard about the big church in Cologne. When you've ever been in Cologne, there's no chance not to see the so-called Könadum. We link the point here. This is a 3D scene which is linked behind that which could be then displayed and let you feel like being on the dome platform. This is just a screenshot from the CD underground model. It's not very impressive. I'm sorry. I didn't find a better example on that. Thank you very much. I hope you have some questions or some remarks. I hope you enjoyed my talk or our talk. Thank you very much.
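As a small illustration of the CZML format mentioned earlier in this talk: CZML is plain JSON, a list of packets, so something like the bike-tour example can be generated in a few lines. The structure below follows the public CZML specification, but the times, coordinates and properties are invented, and the result would be loaded in CesiumJS with a CzmlDataSource.

    # A minimal, made-up CZML document: a point that moves along three GPS fixes.
    import json

    czml = [
        {"id": "document", "name": "bike-tour", "version": "1.0"},
        {
            "id": "rider",
            "availability": "2021-09-29T10:00:00Z/2021-09-29T10:02:00Z",
            "position": {
                "epoch": "2021-09-29T10:00:00Z",
                # flat list of [seconds-since-epoch, lon, lat, height] samples
                "cartographicDegrees": [
                    0,   7.10, 50.73, 60.0,
                    60,  7.11, 50.74, 62.0,
                    120, 7.12, 50.74, 65.0,
                ],
            },
            "point": {"pixelSize": 10},
        },
    ]
    print(json.dumps(czml, indent=2))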
|
With the development of 3D applications related to geography, the standards and specifications for the provision of corresponding data are increasingly coming into focus. The presentation deals with the current development status of the CesiumJS library as well as the standards and possible uses of individual features and shows some examples from a recent project, in which we presented underground 3D geodata. Thus, this contribution can be seen as a renewal of our 2013 FOSS4G contribution entitled "Modelling 3D Underground Data In A Web-based 3D-Client". Not only are web-based open source 3D applications with a geographic reference constantly developing, but the development of standards and specifications for the presentation of 3D data on the web has also increasingly come into focus. A large number of libraries can be used for the representation on the web (e.g. x3dom, o3d, threejs, BabylonJS, Open GEE). Another library that has been growing steadily for several years is CesiumJS. This is used to process geographical questions in numerous areas. These include the real estate market, urban planning, sports or the various environmental sciences. In our talk we will present the current development status of the library and some possible use-cases of the features and data of CesiumJS will be briefly presented using projects as examples. A focus will also be placed on the requirements of the browser. In addition to the general availability and provision of data, the possible uses of individual selected features of the library will also be presented and discussed. When the world is represented digitally, corresponding data should also be placed there. Depending on the area of application, this can involve a relatively large amount of data, which is the case when dealing with underground data. Ideally this data should also be placed on the map in a simple way. There are already standards for the webbased-presentation of 2D-data in the web, new standards have been developed for the presentation of terrain, 3D models, buildings and point clouds as part of the development of CesiumJS. With 3D Tiles, an OGC community standard is now also available. Authors and Affiliations – Holthausen, Michael Adams, Till terrestris GmbH & Co KG Track – Software Topic – Data visualization: spatial analysis, manipulation and visualization Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57302 (DOI)
|
I'm gonna add the stream, studios, and your view. Hi everyone. So, very welcome, Stelios, and thanks. You can share your screen. We're already streaming. Is the... Yeah, that's me. Sorry. Is that okay now? Yes. Okay. Is your screen shared? Is it right? Yep, I think so. I cannot see it because I'm on the laptop so I don't know the screen. Okay, it's alright. We've got your screen. So, very welcome and it's all yours. Okay, good. Thank you. Welcome everyone from the Netherlands. So, I'm Stelios Vitalis, as was already said. I'm going to present, together with Jordy, how we deliver 10 million buildings. I was actually, that was my mistake, I thought it was 11, it's more like 10 million buildings for the whole of the Netherlands. This is something that was done as part of a project which is called 3D BAG. So, the BAG is the registration of all buildings in the Netherlands, those 10 million buildings I was talking about before. And it's an open data set by the Dutch Kadaster. And since it's provided for free, we also have some open data for the whole of the Netherlands, a very detailed point cloud. And we thought we could combine them and create some 3D buildings. So, the important concept that you need to keep from here is that in city models we have this notion of LODs. So, we have like LOD1 and LOD2 as a basic notion. LOD1 is prismatic buildings and LOD2 is buildings with roof shapes. And for 3D BAG we created three versions, two prismatic: one which is, like, for the whole footprint we just have one simple height, and LOD1.3 which is prismatic buildings but only, like, when you have big changes in height, then you have a jump. And LOD2.2 which is the buildings with the slopes of the roofs and stuff, so more detailed. And you have this data set, and tomorrow Balázs Dukai and Ravi Peters are going to talk about that, how they created this data set. But today we are going to talk about the 3D viewer part. So, this is actually the second iteration of the project. The first version was released three years ago more or less and it was mostly a 2.5D data set. So, it was basically just LOD1 and just one height value computed per footprint in the BAG data set. And so, there was no real need for a 3D viewer back then, it was just visualizing in 2D with Mapbox, as I mentioned, and some coloring for the values of the heights. But now that we have more detailed 3D models for the buildings, we needed something more complete. So, we decided that we had to have an actual 3D viewer this time. And we started thinking, okay, what do we need, what do we need? We should set some requirements because if you start thinking about ideas, well, it explodes. So, we decided to prioritize three things for the viewer that we wanted to build. The first one is that we wanted it to look good. So, something that would be relevant for the users and people who just want to really visit because it's fun, it looks nice. The second one is we wanted it to perform well because in our domain, 3D city models in general, we've seen many implementations that have a lot of data, they have a lot of features, but they're very slow. And I'm going to talk about that in a short while. The third one is something that's easy to use because we understand that there's like a barrier of like how complex 3D data are to perceive by certain people. So, and there's also a lot of detail in the data themselves. There's a lot of hierarchy, there's a lot of information for individual buildings in 3D BAG.
So, we wanted the viewer to be easy to use and to gradually introduce users to the data themselves. So, we saw what others did, and this is a typical example of the kind of platforms that you see for city models. This is the 3D city model of Rotterdam, one of the many municipalities that create their own 3D buildings or 3D models in general. Sometimes you can have roads and other features of the city. But most of the solutions, they are pretty similar to this one. This is a product based on Cesium. Cesium, as you probably know already, is a 3D globe platform for the web. This is a product that is enhanced a bit by Virtual City Systems. That's the company that built it for Rotterdam. And the problem that we identified here is, like, Cesium is very feature-rich. It has a lot of things, but it's not necessarily the easiest to use. The fans of your machine can go crazy sometimes handling this. The second thing that we looked at is Mapbox GL. This is an example from a Dutch company which is called Geodan. They also did something similar. They created 3D buildings for the whole of the Netherlands and they were experimenting with that as well. I'm going to talk about why we excluded Mapbox later on. But this performs better, but it still has its own issues. The third one, an example that I wanted to show, is a very nice thing. This is called F4 Map. It's a demo from a company. It's not open source software, but it's an implementation of 3D data from OpenStreetMap. They basically did everything with WebGL. I think it's a demonstration of the services that they can provide. They're working with graphics and stuff. It's very nice. It's very easy to use. It performs quite well. We thought this was a nice example. It's something similar to what we want. You just have the data, just a search, but there are no things to distract the user. Because it performs well, it welcomes the user. And because it's simple, it also doesn't push them away from the platform. To summarize, we looked at these options more or less. We had Cesium, as I said before, which is hard to customize. It's a bit bloated. That's a problem with Cesium. It's a general purpose thing. It's supposed to do a lot of things. Eventually, for what at least we needed, it wasn't probably the best solution. It wouldn't perform as well as we would like it to. The second one is Mapbox, as I showed again. The problem with Mapbox is, back then, we found the licensing a bit confusing. That was even before the whole drama of Mapbox actually closed-sourcing their software or whatever. But besides that, and the fact that we didn't want to lock ourselves down to technology that might change licensing in the future, which turned out to be a wise decision after all, it's not really truly 3D. I mean, its performance is great and it's very well made for 2D data, and sometimes 2.5D data. Like, if you just give some data a height value, it will extrude things and it will be very fast. But if you actually want to show your own 3D data, like ours, with shapes and stuff, then you have to deal with Three.js, which is a technology for Web 3D. And in that case, then you'll have to do all the things that we eventually came up to do anyways by creating our own solutions. So we also discarded that. And for similar reasons, also, hard to get, that was supposed to be hard. Sorry about the typo there. So we realized that a custom-made solution was probably the most promising way to go forward. Because we would only implement what we wanted for ourselves, and we could tailor it to our needs specifically.
So this is the basic architecture of what we built. Sorry for the graph. This is the best I can do by hand. But the architecture is basically based on 3D Tiles. So, you have heard about 3D Tiles a couple of times already: an OGC Community Standard, et cetera, et cetera, et cetera. So we decided to create, or to export, all our buildings in 3D Tiles. And Jordy is going to tell you details about that. And there is a library which is called 3DTilesRendererJS. And this is built by NASA. I think Jordy is going to talk about that. And this consumes the data. And then we built our own terrain renderer for Three.js because we couldn't find any. So that's how we provide WMTS. And everything is managed with Three.js. So the scene is managed by this library. And then we use Vue.js for the user interface. And that's how we built our 3D BAG viewer. And now I will ask Jordy to present you the 3D Tiles part. Thank you, Stelios. So let me take over the screen. Yes, you can just talk and I will just keep it there. Oh, OK. OK, let's do it like this. OK, so now after the introduction of the viewer, why we chose to create one ourselves and its basic architecture, we would like to go a bit more into 3D Tiles, which just wraps our massive 3D geospatial data so that it can be streamed and visualized in an efficient way. At least that's why it's important to us. While maintaining CRS support. And our data is really massive because, like Stelios said, we have approximately 10 million 3D buildings, which amounts to about 48 gigabytes of compressed data, 3D Tiles data. So it is a lot. And probably many of you already know about 3D Tiles, but maybe someone just got into the conference, so just a very, very quick overview of what 3D Tiles is. So basically it works like this. 3D Tiles is built upon the GLTF 3D graphics standard, which contains the data that's actually visualized in the viewer. And the GLTF is embedded in files called B3DM, which stands for batched 3D model. So basically you have a large amount of B3DM files stored on the server. Yeah, these files are individual tiles. And whether or not these should be downloaded and rendered by the viewer, so downloaded from the server, rendered in the viewer, when viewing a specific location, is basically determined by the 3D Tiles tileset. And that's for efficiency reasons. So the 3D Tiles tileset defines a spatial hierarchy. In our case, it's a quadtree. And as you can see in the figure here on the right, the tileset contains children, which denote the nodes of a quadtree, and ultimately the content, which are the leaves. And, you know, the renderer library from NASA, yeah, we use it to visualize the data directly with Three.js. And what the library does is, well, it traverses the tileset, so the quadtree, and only downloads and renders the tiles that are within view, thanks to the spatial hierarchy. So not all individual tiles have to be checked on being within view, which is what makes it so efficient. And in the next image, Stelios, yes, you can see kind of an example of how it works behind the scenes. So basically, this is the debug mode of the library. And, yeah, we are showing this for demonstration purposes. So individual tiles here are shown visualized with different colors, and they are visualized together with their bounding volumes, which are defined in the tileset, which is what's actually checked on being within view or not. So basically, yeah, due to the quadtree structure, there are many nodes that can be skipped and don't have to be checked on being in view or not. Next slide.
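To give a concrete feel for the tileset just described, here is a tiny hand-written tileset of the same shape, built as a Python dictionary. The numbers and file names are invented; the keys follow the 3D Tiles 1.0 specification, where region bounds are expressed in radians and only leaf nodes reference actual B3DM content.

    # A toy tileset.json: one root node of a quadtree with a single leaf filled in.
    import json, math

    def deg(v):
        # 3D Tiles regions are [west, south, east, north, minH, maxH] in radians
        return math.radians(v)

    tileset = {
        "asset": {"version": "1.0"},
        "geometricError": 500,
        "root": {
            "boundingVolume": {"region": [deg(4.0), deg(51.8), deg(4.2), deg(52.0), 0, 120]},
            "geometricError": 250,
            "refine": "ADD",
            "children": [
                {
                    "boundingVolume": {"region": [deg(4.0), deg(51.8), deg(4.1), deg(51.9), 0, 120]},
                    "geometricError": 0,
                    "content": {"uri": "tiles/0/0.b3dm"},   # a leaf: one batched 3D model tile
                },
                # ...the other three quadrants would follow in a real quadtree
            ],
        },
    }
    print(json.dumps(tileset, indent=2))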
So our quadtree is based on having approximately similarly sized tiles containing a similar amount of buildings. So in this image you see an example of our quadtree. It's just an area in the Netherlands. And we generated the 3D tilesets according to the same quadtree structure to keep things consistent. But, yeah, the current subdivision in the quadtree, however, is probably suboptimal. And this is always, like, I forgot the exact number, 2,000 buildings, I think, approximately in every leaf. And the value is basically chosen as a value that sounds good, that kind of works well for us. It's not thoroughly tested, could probably be improved as well, but it works very well for us. And on to the next slide. So our original buildings, they reside in a Postgres database. That is where they are created in. And from there, like, as a whole in 3D BAG, in our project, we serve the data through WMS and WFS. We also export the data to CityJSON, GeoPackage and OBJ for download. And then lastly, there's, yeah, the 3D Tiles for visualization. So for the export from the database to 3D Tiles, we use an exporter called pg2b3dm, which stands for Postgres to B3DM, 3D Tiles. And this is originally created by a company named Geodan, a Dutch company. And, yeah, it's been super useful to us. What it does is it takes triangulated 3D geometries from a database, generates a quadtree based on it, and converts it into GLTF slash B3DM tiles, and then, yeah, together with the 3D Tiles tileset. And then, yeah, of course, you can also use the data in Mapbox and Cesium if you like. And, yeah, we also forked this library just because... Stelios, do you want to... I'm sorry. We can't hear Jordy, so... You can't hear me? Yes, you can repeat some... Okay, the last part, please. We missed the last part here, about the exporting. Yeah, the forking, sorry. Okay, excuse me. So, yeah, as I said before, we already had a pre-generated quadtree, so we didn't need pg2b3dm to do this for us. That is why we forked this program, and we just read our own quadtree and basically export the 3D tiles based on that. Okay, next slide, please. I hope you can hear me well. Okay, so to further optimize the visualization performance, we tested how compression could improve it. So we tested GZIP, Draco, which is a library for the compression of 3D meshes, and the combination thereof. Are you... in this case, GZIP there is... GZIP was proven to be more efficient than Draco, and we're going to wait for Jordy to be back. Sorry, Jordy, we lost you again. So GZIP is more efficient than Draco, so... Yes, indeed. So with GZIP, it only takes about 5% of the time to download and render a tile, as opposed to an uncompressed tile. And we think the reason for that is that Draco compresses individual 3D meshes, and our buildings are geometrically relatively simple. I mean, they're complex for a building model, of course. But I mean, it's not as complex as a full digital terrain model, just because all these buildings are disconnected, and that is why we think that Draco works a little bit less well. And the next one. So we also created a WMTS renderer ourselves, just to serve as a base map for our building data, helping the users to orient themselves in the viewer. And the way it works is just we determine which tile is in the center of the camera, and using some kind of region-growing algorithm, we load the tiles around it that are in view. And now next there is a little demo by Stelios.
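Before the demo, to make the "centre tile plus neighbours" idea above a bit more concrete: the snippet below is not the viewer's actual code (the real base map uses a WMTS tile matrix in the Dutch CRS), but it shows the same logic with the standard Web Mercator tiling scheme.

    # Illustrative only: which tile sits under the camera, plus a ring of neighbours.
    import math

    def tile_index(lon, lat, zoom):
        """Column/row of the Web Mercator tile containing (lon, lat) at this zoom."""
        n = 2 ** zoom
        col = int((lon + 180.0) / 360.0 * n)
        row = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
        return col, row

    def tiles_around_camera(lon, lat, zoom, radius=1):
        """Centre tile plus its neighbours, i.e. what to request from the tile server."""
        c, r = tile_index(lon, lat, zoom)
        return [(c + dc, r + dr)
                for dc in range(-radius, radius + 1)
                for dr in range(-radius, radius + 1)]

    print(tiles_around_camera(4.37, 52.0, 14))   # e.g. a camera hovering over Delft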
He will show the viewer to you, and after that we will have a conclusion and a quick overview of future developments. I'm going to try to find a way to do it without breaking everything. So you can join me if you can just go to 3Dbuff.net. This is the viewer basically, this is the whole page basically. So we have all the information there. I think about last on the review, they are going to talk about this tomorrow. But this is pretty much what you get. I'm not sure if the performance is slightly worse now than it used to be, but I think it's already fast enough compared to other implementations. What you can just move around and... You are supposed to be able to move around. Same as with any other viewer and it's already faster. You can change between some basic base layers that we decide for you. Data that that's cadastric provides for instance, you can change that. As I said before, we produce three different LODs, so you can switch between them. And I think you can tell how fast these 3D tabs and their library basically works. By switching the LODs, we just download the whole 3D tabs again and we start over, maybe not the best way to do so, but it still works fine. We have also a way to just navigate to any place. So this is my house for instance, and I'll do them. And yet again, you can see that supposedly are fast. You can move from one place to another. And then you can also select an object and we see highlighted. And on the bottom, you have two things. On the bottom left, you can see, first of all, this little P, which is where the place where you just double click or you simply tap with your phone. You can use this here and you will see the height of this point and also the slope of the surface that you hit. Apologies if you hear my top barking in the meantime. So this was something that was asked for instance, following architects to be there. And you can also easily see the attributes of the specific building that I chose. Again, with respect to the easy to use that we said before requirement, we only saw some things that we think they are really important. And for the rest, you can just click here and download the whole file. And you can download the city json geo package, all the j format and you can load it to another viewer and see more details about the building itself. You can directly go to the documentation. Again, this is all with respect to how we want to more embrace the data and help the user identify how the understand the attributes and how the understand the data themselves. And you can also report building. So if you hang around and you see something weird, you can just report something and say, you know, this building is weird or you can also report sometimes features if you want for the viewer. And yeah, that's pretty much the whole thing. So for some conclusions and future work. So for the conclusions, we concluded that what they basically building our own things from from scratch is not really that hard. It took us like six months to a year and that includes a lot of optimization like looking for the data, how to optimize the data structure and stuff over which I'm going to talk like a minute. And yeah, understanding the libraries and stuff. And we were not like the most proficient people with 3JS. We had some experience with the most proficient one. So the second one is that 3JS helps. But if you want to go like if you want to do basic stuff, you can easily do them to 3JS. 
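The height-and-slope readout at a clicked point, as described in the demo, typically comes from the triangle the ray hits: the slope is the angle between that triangle's normal and the vertical. The viewer does this in JavaScript from the three.js raycast result; the following is only a minimal NumPy sketch of the geometry (z-up is assumed here, whereas three.js scenes are often y-up):

```python
import numpy as np

def surface_slope_deg(v0, v1, v2):
    """Slope of a triangle in degrees from horizontal, e.g. a roof face hit by a ray."""
    v0, v1, v2 = map(np.asarray, (v0, v1, v2))
    normal = np.cross(v1 - v0, v2 - v0)
    normal = normal / np.linalg.norm(normal)
    up = np.array([0.0, 0.0, 1.0])               # z-up convention assumed
    cos_angle = abs(normal @ up)                 # abs() so triangle winding order doesn't matter
    return float(np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0))))

# A 45-degree roof face:
print(surface_slope_deg((0, 0, 0), (1, 0, 1), (0, 1, 0)))   # prints 45.0
```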
But eventually, when you want performance, when you want to do things as fast as you can, then you will have to understand some basics of 3D graphics, like shaders and so on. We had to learn these things at some point as well. The last three points are about 3D Tiles. The first one is that 3D Tiles is not really as GIS as you might think, although it was built as a 3D GIS format. I mean, it's not a fault of the format itself; it is aimed at truly 3D data, while our data, by the nature of how it spreads across the whole country, is still essentially 2D. We mostly care about 2D tiles sitting next to each other, and 3D Tiles doesn't necessarily work this way. For instance, 3D Tiles will never stop showing things just because they are too far away; it will always load things as long as they are inside your 3D view. That means that if you look towards the horizon, in theory you would download the whole country up to that point, right? You can try to only download the lowest quality, but that is still massive data. So we had to find some hacks, like stopping at a certain distance from the camera and things like that, to make things work adequately for us, at least as long as you follow the format and the library that we used. The second thing about 3D Tiles is that the tiling matters. We first had a flat, grid-based hierarchy, and it was okay. But once we incorporated this quadtree structure and had a better hierarchy, so that the traversal of the tree would be faster, performance just boosted. So it's really important how you decide to form your 3D Tiles, even though we don't really embrace certain aspects of 3D Tiles, like its own notion of LODs, because our LODs are different from theirs. And the third one is, as Jordi said, compression can still make a big difference for B3DM, even though the data inside is already glTF. And then for future developments. Jordi, if I may, I'll say this part, because your connection keeps cutting out, so it's faster. Yeah, okay, just very quickly then. So we want to have a better download service, because now we are using WMS/WFS, and the website also has downloadable tiles, but that can still be improved. For the base maps, we use WMTS. We would like to embrace vector tiles, but that would really need a lot of work; the good part is that maybe we could release that as its own library for people to use with three.js. We would be interested to see how to implement 3D terrain, because now everything is basically flat at zero. Well, this is the Netherlands, not a big deal, but in other countries that could change a lot. We have experimented a bit with conditional formatting, so this is an example of conditional formatting, showing different colors per attribute value of the buildings. It doesn't work perfectly yet, but I think we can improve on that. We want to improve discoverability, SEO and so on. And yeah, I think there are still some aspects where we can improve the performance even further, especially with respect to the base layers, as I said before. And lastly, we would probably like to release what we built, the WMTS renderer, so you can add your own WMTS tileset and incorporate it into three.js for your own project, because now we have three.js and then we have the WMTS renderer, and anyone else could use that.
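The "stop at a certain distance from the camera" hack described above amounts to an extra culling test on top of the standard frustum check. A minimal sketch of that decision (the function name and the 5 km cut-off are illustrative, not the values used in 3D BAG, and the real check lives inside their configuration/fork of the JavaScript renderer):

```python
import math

MAX_DISTANCE = 5000.0  # metres -- illustrative cut-off, not the value used in 3D BAG

def should_load(tile_center, camera_position, in_frustum: bool) -> bool:
    """Standard 3D Tiles behaviour loads everything inside the frustum; the extra
    distance test is the 'hack' that keeps a country-sized, essentially 2D tileset manageable."""
    if not in_frustum:
        return False
    dist = math.dist(tile_center[:2], camera_position[:2])  # horizontal distance only
    return dist <= MAX_DISTANCE

print(should_load((1000.0, 2000.0, 0.0), (900.0, 1900.0, 300.0), in_frustum=True))    # True: nearby
print(should_load((99000.0, 2000.0, 0.0), (900.0, 1900.0, 300.0), in_frustum=True))   # False: near the horizon
```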
Yeah, that's pretty much it, and you can find the repository and the code there. Yes, if you notice any bugs or any improvements, then please let us know the issues if you just try it out the viewer. That would be very useful. Okay, Jordi and Stadios, thank you very much for your talk. And we have Daniels who says, runs great on my desktop PC, so also it's nice and simple interface. We have one question for you to answer, and I'm going to read it for you guys. It says, if buildings get updated, how do you import the updated buildings into the model? Well, as I said, tomorrow there's going to be a presentation about the generation of the data, this was more about the dissemination of the viewing. But basically, we are releasing iterations once in a while, so we released that in the first version in March, and today actually we released the second iteration, we just recreate the whole thing from the top, we then update one building per time. And if Jordi wants to add something? Yeah, no, that's true. Okay, thank you. It will be very good to join the presentation tomorrow as well by our colleagues. Okay, thank you. Thank you very much, Jordi and Estetio. We have a little, I'll just say it from time, so we're going to go with the next talk, but first I want to thank you guys and to say to everyone who is here with us that there is an ice break. So you can see and talk with the Stadios, Jordi, Pyramin over there, and ask some questions, maybe, because we don't have time here now. So thank you very much. Thank you very much. Bye.
|
Can we visualize a data set of millions of buildings smoothly even on mobile devices? Turns out we can! 3D BAG is a data set containing all buildings in the Netherlands in 3D and we built a viewer to allow users to see it through their browser. This is how we utilized 3D Tiles and three.js to build a viewer from scratch with the main focus on efficiency and the data itself. This is a presentation about the 3D BAG web viewer, which allows for the visualization of 11 million buildings in the Netherlands. We built the viewer from scratch, using three.js and 3DTilesRendererJS for the consumption of the data. During the process, we had to implement our own WMS/WMTS viewer for three.js and to optimize the creation of 3D Tiles. The main focus was to provide a smooth experience to the user, focusing mainly on the efficient streaming of the data. We also added some basic measuring tools for buildings (height and slope of surface). The source code of the viewer is available here. All software used in the process is FOSS. We hope to make this an independent platform for others to distribute similar data. This project has received funding from the European Research Council (ERC) under the European Unions Horizon2020 Research & Innovation Programme (grant agreement no. 677312 UMnD: Urban modelling in higher dimensions). Authors and Affiliations – Ravi Peters (1)(2) Stelios Vitalis (1) Jordi van Liempt (1) (1) 3D geoinformation research group, TU Delft, the Netherlands (2) 3DGI, the Netherlands Track – Software Topic – Software/Project development Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57303 (DOI)
|
I'm Brandon. Me too. Hi, nice to meet you and nice you met the right stream. Yeah, here. That's perfect. Sorry for the technical issues we had and we directly go on because time is going on. I'd like to introduce you shortly, Brandon Collins from makepass.com. And yeah, Brandon has really long term experience as a GIS developer and worked with the Nature Convergence, NASA and the Bill and Linda Gates Foundation. So quite a lot of experience, I think. And today is a core developer of the Data Shader and Spokane Open Source Libraries. And he's going to talk about a multidisciplinary exploration of phosphor G. So I have to get your, maybe you share your slides so I can get them on the screen and then I disappear. And it's your stage. Great, Taylor, are you able to see my first slide here? Just the introduction slide. We see it. Perfect. Awesome. This is great. So my name is Brandon Collins. Really happy to be here at Phosphor G, Argentina. We're going to be talking about multidisciplinary nature of GIS and a couple of caveats about this presentation just to prep folks. This is a somewhat Python-centric presentation. We focus a lot on open source Python for GEO at MakePath. And the point of this presentation is to introduce you to some tools which are adjacent to, you know, spatial data science and GIS, which can augment your toolbox that you should know about. These are somewhat Python-centric just to reiterate that. We work a lot on raster analysis tools. So I'm also going to be doing a deep dive into some of the dependencies that we use that aren't specifically in the geospaces. But you should know about them because they can help you in your analysis. And in general, I'm hoping that folks will leave with some new libraries to check out and fresh tools as you go back to your daily work next week. Really, you know, it's really great to be able to present virtually. I am sorry that I can't be there in person. I actually do have a personal connection to Buenos Aires and the Argentina area, which is that I graduated high school in the town of Necochea, which is in the Provincia de Buenos Aires, from Colegio Nacional. So it's really special for me to be putting together two things here, which is one presenting to folks in Argentina and also around the world, and then also talking about my passion, which is around open source geo with Python. So my name is Brennan Collins. You can find me on GitHub. I've been working in open source Python tools for about the last decade. And you may have used some of the tools that we contribute to at MakePath, in particular DataShader and Bokeh. Also X-Ray Spatial is the tool that I'm going to be going deep into to look at some of the dependencies here that aren't specifically geo libraries, but really help to achieve our goals. I'm also the co-founder and principal at MakePath, which is a spatial data science firm based in Austin, Texas. And we help partners leverage open source software to solve complex challenges, both in the geospatial space and in general data science and machine learning. We are hiring right now. And so if you go over to MakePath.com for slash careers, you can see some of the open positions that we have and we hire from the open source community and we're a fully remote team. So this is not just US specific. If you do see something that's interesting there, I'd encourage you to go over also to the MakePath blog and see some of the projects that we're currently working on and some of the open source work that we're doing. 
For instance, a recent Bokeh release, which is a interactive visualization tool for Python. And one of the blog posts you would see on the MakePath blog is actually our history of open source GIS infographic. This infographic lists out some of the really influential open source libraries for Geo over the past 30 years. And as I look through a lot of these libraries, I see that some are not specifically Geo, even though they've had major impacts on Geo. Some of those include HDF and net CDF, NumPy, PANDA, some of these libraries that have a big influence on Geo, but didn't necessarily come from the GIS community or the geospatial community. Our infographic is itself open source, so if you see something that you think is missing or a new library that you'd like to put on there, please go over to GitHub and you can actually submit a poll request to change the infographic to add in additional content there. Today I'm going to be talking about multidisciplinary GIS in the context of X-ray Spatial, which is a toolbox for raster analysis. So we're taking what we're doing here is we're looking at general computer science and Python and saying, how can we take tools and apply them to Geo and name them in ways that Geo professionals would recognize? This project fell out or grew out of the Data Shader project, which is a non-Geo library, but its main purpose is for fast rasterization. And the intent of X-ray Spatial is to provide an extensible but also performant open source library for raster analytics and Python. And what we're trying to do is balance the ability of an individual analyst to extend the tools while also not losing the performance of tools like GDAL and Geos, which form an amazing foundation for Geo processing within many programming languages, not just Python. But in this case, what we're doing is we're trying to stay within Python, so we have a common analysis language, but then be able to extend the tools to scale to potentially larger problems. So scaling is central to what we're doing in X-ray Spatial for raster processing, and we're pulling ideas from adjacent disciplines to enable scaling for Geo. So there's two components that we see as important when we think of scaling, when we're taking our existing tools and making them work on larger problems. So there's really two sides to this. There's one which is finding ways to make algorithms faster, and that's what we refer to as vertical scaling, and then also being able to take algorithms and run them in multi-thread, multi-core, and multi-machine or cluster environments. And that's our horizontal scaling. And to achieve this for Geo within the X-ray Spatial context of raster processing, we're looking outside the group think of open source Geo and finding tools that span different use cases, but then taking them and applying them to Geo. As I mentioned, X-ray Spatial came out of the DataShader project. DataShader, while it has some really great Geo applications, is not Geo-specific. What it is is a staged rasterization pipeline that allows you to deal with problems like over-saturation and over-plotting when trying to visualize large amounts of data. We're looking at a Geo example here of plotting 300 million points. This is one point per person in the United States. But there aren't things like projections, there aren't functions specifically named for Geo professionals inside of DataShader. 
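The staged rasterization pipeline mentioned here (aggregate points into a grid, then shade the aggregate) looks roughly like this in Datashader; the CSV path and the `x`/`y` column names are placeholders for whatever point table you have, for example one row per person as in the US population example:

```python
import pandas as pd
import datashader as ds
import datashader.transfer_functions as tf

# Placeholder input: any table of point coordinates with 'x' and 'y' columns.
df = pd.read_csv("points.csv")

canvas = ds.Canvas(plot_width=900, plot_height=600)   # stage 1: define the output raster grid
agg = canvas.points(df, "x", "y")                     # stage 2: aggregate (count per pixel)
img = tf.shade(agg, how="log")                        # stage 3: map counts to colors, avoiding over-plotting
img.to_pil().save("points.png")
```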
And that's really why X-ray Spatial was started, was to take concepts in scaling and rasterization and then add tools that Geo Spatial professionals would expect within a raster library. So to get started here in X-ray Spatial, you can scroll down, this is the GitHub page for X-ray Spatial, and you can see some of the areas that we want to address with four Geo Spatial professionals using tools that may not be within their normal purview. So classic 1D classification tools, vocal analytics, multi-spectral analysis and doing different band math on Landsat and Sentinel data, path finding and proximity tools, and then surface analytic tools, zonal tools and local tools. So this is a, as we think about the major areas of raster processing, we can map those onto different modules in X-ray Spatial, and we're building out different use cases, which are, in our opinion, niches that could be filled for Geo data analytics. So this includes being able to run on clusters and being able to run on modern hardware like NVIDIA GPUs. So as we look through some of the features inside of X-ray Spatial, we'll see that some tools are supported across clusters, which would be the Dask scenario, and we're going to talk a little bit about Dask, and then other tools are also supported on Kupi, which would be the GPU case. To be able to scale these Geo Spatial tools, we have tons of dependencies. We're building on top of some really interesting projects like NumPy and Numba, Dask and Pandas. These libraries are not Geo specific, but when we package them up together, we can make some really nice tools that speak to Geo professionals, and that's where X-ray Spatial comes in. Here in this graph, in yellow, we can see some other tools which aren't specific dependencies, but interoperate with X-ray Spatial via other dependencies. So primarily, I'm going to be talking about Numba, which was made for creating fast Python code, Dask for scaling out Python onto clusters, and then also Pandas for attribute management and a little history of NumPy. So NumPy came out of the biomedical imaging discipline and was created by Travis Oliphant at the Mayo Clinic, and what it gave us was a data structure for Python that was performant. So what we get out of NumPy is our multi-dimensional array, along with a set of universal functions on top of that array object. As we scroll down the page, we can see that NumPy comes into play in tons of different disciplines, and that means that by using NumPy as a dependency, we can pull a lot of different libraries in that may not, you know, you may not think of traditionally in the in the Geo realm, but we can learn from these tools and borrow from these tools and also gain, you know, being downstream from say NumPy enhancements that were added for signal processing, but we can be downstream of that and gain those benefits for Geo processing. There was a lot of different applications of NumPy, and SciPy grew out of NumPy to be domain-specific tools that use NumPy, and there are some spatial extensions here within the SciPy project, but it's another tool that is adjacent to Geo, which has impact on Geo. We have some really nice stats tools in here, and also there's plotting tools and things like even spatial indexing tools that get even closer to the Geo space. Folks from the finance domain came along and said, hey, I really like NumPy. What we need is to take these NumPy arrays and organize them into a data structure which data analysts are comfortable with. 
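As a small taste of the GIS-named functions xarray-spatial exposes on top of an xarray DataArray: the file name below is a placeholder, and using rioxarray to read a GeoTIFF into a DataArray is one convenient choice, not a requirement of the library.

```python
import rioxarray                      # reads a GeoTIFF into an xarray.DataArray
from xrspatial import slope, aspect, hillshade

dem = rioxarray.open_rasterio("elevation.tif").squeeze(drop=True)  # placeholder DEM file

slope_deg = slope(dem)        # surface tools: terrain slope in degrees
asp = aspect(dem)             # downslope direction
shaded = hillshade(dem)       # illumination layer for visualization

# Results are DataArrays too, so coordinates and labels are preserved
print(slope_deg.dims, float(slope_deg.max()))
```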
The classic data analysis environment would be something like Excel. So the Excel worksheet, which takes rows and columns of data and allows you to do analysis on them, is represented well within the PANDA's data frame. A lot was also borrowed from R. There's areas that Geo folks are good at and areas that finance people are good at. One area that the finance area is great at are date times. If you're a quantitative trader, you really care about getting date times, right? They can be surprisingly difficult. It's really nice to have a discipline like finance build a data structure with good date time support that can then be extended with a geometry field so that we can represent, say, something like a shape file or a Geo package in memory as a PANDA's data frame. So what it ends up being a PANDA's data frame is a set of NumPy arrays that are labeled as columns. And then Geo PANDAs does the great work of adding in a geometry column. So it's taking a tool that was originally made for biomedical imaging, NumPy, combining it with for business applications with PANDAs in the finance world, and then adding in a geometry column. And we have a really, really great project here, Geo PANDAs, which I'm sure that there's other presentations about and many of you may have used in the past. Within XrA space and within the larger Python ecosystem, NumBA has been a particularly helpful library because it can allow us to write performant Python code without the need to, say, delegate to, like, a C extension, or wrapping a C++ library. So NumBA is within XrA spatial as we think about raster operations like ViewShed and proximity analysis and zonal statistics, we can use a library like NumBA to speed up all of those operations, which would address the vertical scaling component of XrA spatial. So within XrA spatial, what we're doing is we're combining these tools like NumPy and PANDAs and NumBA to write functions specific for the Geo community. And similar to how PANDAs wrapped up NumPy arrays and added labels to them, XrA is a library which wraps NumPy arrays and gives us n-dimensional labeling. And this means we have a really nice container for storing Geo Spatial raster data in memory. So XrA allows us to have many different layers organized inside of an XrA dataset or an XrA data array, depending on what you're using. And it has a memory model that mirrors net CDF so that we can easily write this format to really, you know, open standards and interoperable standards like HDF and net CDF. So within XrA spatial to address scaling to clusters, we look at a tool called DASC. And DASC will allow us to stay within Python but be able to use data structures which can scale across multi-machines and multiple cores. So in the same way that PANDAs wrapped up NumPy arrays and labeled them for the finance industry, DASC provides a DASC array and also a DASC data frame which mirror NumPy APIs and PANDAs APIs but partitions those data structures so that work can be done on multiple partitions at the same time. And that work could happen in a cluster environment where we have say a thousand different computers coordinated to work on a single problem. But it can also work on a single machine scenario where if you are say wanting to use all the cores on your machine, then DASC is a really good library to look at to be able to scale your Python code. So this is not specifically a Geo library but is one that can be applied to Geo to solve horizontal scaling issues. 
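The point about Dask mirroring the NumPy API while partitioning the work is easiest to see side by side; the array sizes below are purely illustrative (the Dask one is deliberately larger than most machines' RAM to show the out-of-core case):

```python
import numpy as np
import dask.array as da

# NumPy: one in-memory array, computed eagerly
x_np = np.random.random((5_000, 5_000))
print(x_np.mean())

# Dask: same API, but the array is split into chunks that can be processed
# in parallel on local cores or on a cluster, and evaluated lazily
x_da = da.random.random((50_000, 50_000), chunks=(5_000, 5_000))   # ~20 GB, never held in memory at once
result = x_da.mean()          # builds a task graph, nothing computed yet
print(result.compute())       # executes chunk by chunk across threads/processes/workers
```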
Now these two libraries, DASC and GeoPANDAs have now come together so I just wanted to also plug DASC GeoPANDAs which is a combination of the DASC library and the GeoPANDAs library to give us a GeoPANDAs Geo data frame. So that would be a data frame with a geometry column that can be partitioned and can be used in a multi-machine or multi-core context. So this is still an experimental library but it has some really nice features in it. And we decided to go ahead and use DASC GeoPANDAs to create a small project called Census Parquet. So within the United States we recently had a decennial census and that data has been coming out over the past maybe six weeks. And what Census Parquet is, is a library for processing census shape files into Parquet files. So as we scale data access is one of the areas that we need to scale for Geo and we can look outside of traditional Geo formats like shape files and Geo packages and look at Parquet. This is an example of using DASC GeoPANDAs to create Parquet files for Census 2020. So we've open sourced this now and folks can go over and grab this and create your Parquet files. Just as a primer, right, and this is not specific to Geo, but good data formats for scaling usually have kind of four different aspects and Parquet meets all of these aspects. So let's just mention them briefly, which are that we're looking for binary file formats that store our data by column so that we can read just sub selections of the data that also support interesting compression methods. So in this case we tend to use the snappy compression method just so that we can optimize for IO as opposed to disk space. And then there's a fourth thing that you can partition them so that we can take a Parquet file, we can put it in the cloud and we can read partitions directly from the cloud based on a spatial context or based on an attribute query. So another library that's adjacent to GIS would be Kupi. So Kupi implements the NumPi API, but on top of NVIDIA GPUs. So this is one of the dependencies we use inside of XRASpacial, not from Geo, but that helps us scale processing so that we can use modern hardware for compute intensive problems. So Kupi is a really interesting library which gives you a NumPi-like array and compatible syntax, but is allocated on top of a GPU so you can benefit from the performance increases there for compute intensive problems. We recently actually just today released a library called RTXPi, and what RTXPi does is it connects Python to CUDA ray tracing. It is being integrated into XRASpacial right now to give us accelerated viewshed so that we can do fast line of site calculations using more recent CUDA APIs. Check out RTXPi. And within our viewshed what we were able to do was we took the viewshed inside of XRASpacial, which is based on NumPi and has number optimizations in it. And using the GPU version we were able to get about a 300X improvement on doing viewshed analysis on a 2000 by 4000 size grid. So this is where the modern hardware really takes off where we have GPUs that were designed to specifically do ray tracing. We can take general purpose ray tracing and apply it to viewshed so that it can be used in geo applications without having to go too deep into CUDA APIs. So just back to XRASpacial, so the tools that I've been talking about here, which have been adjacent to GIS, but have been wrapped together inside of XRASpacial to provide GIS named functions for raster analysis. 
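The Census Parquet workflow described here boils down to reading census geometries into a GeoDataFrame and writing them out as columnar, snappy-compressed, optionally partitioned Parquet. A hedged sketch of that general shape (the shapefile name and partition count are placeholders, and the actual census_parquet code will differ):

```python
import geopandas as gpd
import dask_geopandas

# Placeholder path: e.g. a Census TIGER/Line shapefile
gdf = gpd.read_file("tl_2020_us_county.shp")

# Binary, columnar, compressed: one call in GeoPandas
gdf.to_parquet("counties.parquet", compression="snappy")

# Partitioned version, so partitions can be read selectively or in parallel from the cloud
dgdf = dask_geopandas.from_geopandas(gdf, npartitions=16)
dgdf.to_parquet("counties_partitioned/")
```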
You can find examples at XRASpacial in the examples directory and here I'm looking at the user guide where we have a set of notebooks that's continually being added to and expanded, but address a lot of these geospatial areas using these tools which are adjacent to the geospatial space, but within Python. Really quick for those developers out there that are comfortable looking at code, I mean look at a quick example of using NUMBA. So here within XRASpacial this is the type of source code that you'd be able to read if you wanted all you wanted to do is just see how you can use NUMBA to speed up raster operations. You may want to take a look at the horn slope implementation. So here using NUMBA, we can take Python code for a slope calculation and this is targeting a CPU. And we can change the decorator in just some small modifications and we can have this code targeted GPU. Or we can wrap the code and we can have it address a DAS array and handle things like the edge cases of overlapping partitions to be able to scale this to say a DAS cluster or a out of core operation where you can't fit all of your data in memory, but you want to do a slope calculation. So all you do is you can clone the repo or you can install XRASpacial via PIP or Condu and then you have some CLI commands to copy out these examples and you can walk through the examples to see how to use XRASpacial. And then you can go deeper into the source code for XRASpacial to see how exactly we're using NUMBA and KOOPi and DAS to achieve these tools that we have here. Just a quick look at some of these notebooks that you can get started on. We're always looking for contributors to XRASpacial and I would just encourage everyone to get involved in an open source project if you're not already involved in one. If you're not a, you know, if you don't love coding or you're intimidated by coding, I'm intimidated by new open source projects that I started in. I usually start with documentation and testing and that can be really helpful to the project while you ramp up on the code base. I just wanted to mention again that we are hiring at MakePath and we're a global company. We are looking specifically for folks that have a passion for open source, which is why I say this again. This is a great, great community for that. So please feel free to reach out to me on GitHub or also by email here and with love chat sometime soon. But just wanted to thank everyone for participating and happy to take questions about the content here or about Nekochea or anything. But thank you guys. Thank you, Brandon, for your talk. Really great, great things. I noted some things because I think your talk really made me happy because probably I don't know whether you know, but we work in catching up on really similar problems with the company of Mondealis where, for instance, Marcus Nittler is involved and to put a grass GS behind a cloud based infrastructure. But yeah, I think it could have been better to have some colleagues of mine here. They probably could have asked better questions than I can do. But what really I'm interested is because we, as we said, we're working on this part of software, which is an open source community project now. It's called Actinia. And what is your experience? How is the acceptance from other companies or users when using your software stack? Are there many users? How or is it just based in your company? And that would be, I think, probably interesting for the audience. Yeah, we have certainly have users outside of MakePath. 
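To illustrate the pattern described (a plain-Python raster kernel accelerated with a Numba decorator), here is a deliberately simplified slope calculation; it is not the Horn formulation that xarray-spatial actually implements. The general idea of the other two variants mentioned in the talk is to switch to `numba.cuda.jit` kernels for the GPU and to wrap the computation with an overlap-aware chunked map (e.g. `dask.array.map_overlap`) for Dask, both of which xarray-spatial handles internally.

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def simple_slope(dem, cellsize):
    """Very simplified slope in degrees via central differences (illustrative only)."""
    rows, cols = dem.shape
    out = np.full(dem.shape, np.nan)
    for i in prange(1, rows - 1):
        for j in range(1, cols - 1):
            dzdx = (dem[i, j + 1] - dem[i, j - 1]) / (2.0 * cellsize)
            dzdy = (dem[i + 1, j] - dem[i - 1, j]) / (2.0 * cellsize)
            out[i, j] = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    return out

dem = np.random.random((512, 512)) * 100.0
print(simple_slope(dem, cellsize=10.0)[1:-1, 1:-1].mean())
```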
Some of these projects are very new and so they're still being developed. X-ray Spatial has not hit a 1.0 release yet, but we are getting close to that as we fill out our feature matrix to have kind of full cluster and GPU support for the functions that we're targeting. I love open source because we can connect with anybody over these tools. And companies are continually very open to hearing presentations about free tools that they can use and benefit from. So I find that open source is really what ties MakePath together because we can connect to all these different disciplines and different companies on what is like somewhat of a neutral ground of all, you know, committing to projects with open licenses. And we're constantly learning from other companies and just really thankful to be part of the Python community and also part of the larger GIS for data science. Okay, Bränden, I think get your kudos in the chat and thank you very much for your presentation. Due to the schedule, we have half an hour break now, so I would just put everything down for half an hour and afterwards we hear the last presentation in this session, which is from Jennifer Bailey. But, yeah, as mentioned, we have a break half an hour now and we see you in a minute or in 30 minutes. Thank you very much. Thank you, Bränden.
|
Python for Open Source GIS is multidisciplinary by nature and continually learning from adjacent disciplines. This talk will highlight how tools from the PyData community are augmenting the toolboxes of geospatial professionals. You will also come away with a sampling of key open source GIS tools that are being used across multiple disciplines. Description: The general format of the talk will be as follows: General overview of the multidisciplinary nature of FOSS tools Highlight of some recent interesting applications using FOSS Quick overview of some key libraries that are enhancing the multidisciplinary nature of FOSS tools Authors and Affiliations – Collins, Brendan (1) (1) makepath, U.S. Track – Use cases & applications Topic – Data visualization: spatial analysis, manipulation and visualization Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57304 (DOI)
|
Hello, everybody. I'm going to start welcoming everybody back to our next final sessions for the afternoon of Earth Observations, Group for Earth Observations. And I'm very pleased to welcome next on stage, Nana. And I see that she's already got her slides ready for us. Welcome, Nana. Give everybody a moment to settle back into your chairs behind your computers. Nana is a GO AI lead and machine learning engineer at Development Seed and she has a PhD in ecological economics and has applied geospatial analysis and satellite imagery processing in her various academic work. And before this, she was a research scientist in quantitative ecology, a very, very background and bringing to us today a talk on AI accelerated human in the loop, the context of schools and land use and land cover mapping for climate actions. Really looking forward to your talk and I hand over to you from here. Thank you, Margarita. Can everyone here meet well? Yes, you can be heard very well. We can see your slides as well. Awesome. I will start then. Hi, everyone. My name is Nana Yi, currently machine learning at Development Seed. We are a small team working on a lot of like mission driven high impact projects. And today I am going to be touching two things. You know, like when you see the school and land cover mapping, like why are they all for climate action? Why are they correlated? So there is a story line behind it. Simply is we are working with the two also like mission driven organization, specifically UNICEF and Microsoft Planetary Computer. We have been working with UNICEF, particularly for school mapping since 2017. In fact, I was hired as a machine learning to come up with AI approach to map school from higher solution satellite imagery. And when we were talking about higher solutions satellite imagery, we were talking about 30 to 50 centimeter max work view imagery. You know, like imagine school structure looks very differently. Some school just, you know, naturally have a higher budget, bigger compass, some school sometimes just a shed, right? Like so apart from that, different culture may have their own unique architecture that can reflect on school building. Most school don't even have a clear school boundary. So you can imagine this kind of like a problem is not really good for object detection as well as semantic segmentation because the boundary is just like not that clear compared to other objects we see in day to day life. But if you're looking through enough school from 100 to 1000 to 5000, then you will start to have an opinion of like what makes school look look look as school from the overhead high resolution satellite imagery. So that's the bright side. Then we, you know, like we basically come up with a final image classification model. I will go into detail later. Another, the second organization work with is Microsoft planetary computer. The mission is, you know, to, to make data more accessible, but data analysis products more accessible for a sustainable future, which is, you know, a great for a great for sort of like climate and conservation science. But if we're talking about like land cover mapping, and that is what we've been working with, with Microsoft team, that you know, like if you ask five researcher or scientists about how they define forest, they might come up with a different definition depends on the research geolocation climate tree cover, or whether human disturbance or not. Right. So land cover sometimes can be very challenging to work with. 
So that's why we're bringing school land cover, we're not working directly toward climate actions, but our partners are. This is sort of like a one too many. A problem, I would say one is for school, right, like we only want to have one AI accelerate it workflow, which is, you know, finally, a school classification model that work in one country, we want to want it to be transportable. And you know, like keep improving in another country. This back they might look very different. For the content wise, for land class, land cover map, one AI accelerate works platform. You know, we need to get user of freedom to define what the land class land cover plus by the own application. And this is very hard problem as well. So this is the time like when human in the loop come to the pictures, when you're looking at, you know, like AI transportable from one geolocation to another geolocation and from one user to another user. And you know, like, AI is not perfect, right, like when you talk about it, honestly, they will need a lot of like high quality training that is that and you really realize on data, how much data you can provide. But on the other side, human, human manual annotation and data generation can be very tedious, can be very time consuming. So this is the sweet spot where the human in the loop come into the spotlight that, you know, like ways human high quality human inputs through the human in the loop, or sometimes we call it active learning that model can improve the performance through time. So that is the sweet situation we want to be in. And in this case is, you know, scaling AI to map every school on the planet, which is the AI case one, always UNICEF, we've been working on, just like give you a sense of like, picture of what we consider as a school, right. So from this image to basically a few screenshots, you can have a brief impression of like school can look very different from the satellite imagery, right. Like this is like the chips chips or tile we're used to to run like classification model in our case. So schools sometimes have a clear sport field and they pay sometimes they're just like bear earth school like have a different shape of building L, U, O, I varies. School in urban like look very kind of like expensive look school in rural area might just like a random bigger building size compared to surrounding residential buildings. And you know, the bear earth around it again, like that is the worst that playgrounds coming. So, you know, like, one way kind of like going through all the schools available in one given country. This is the school tile you can envision. This is just give you an example for two countries. One is in Asia. And another one is Kenza and Kastan in Central Asia, right. So, Nisha school basically like, you know, like very orange or earthy color scenes because it just like, you know, like sitting probably like in the desert or last tree covered in Sahara Africa. And Kenza, Kastan on the other hand, you know, like the school like tend to have red, blue, white rooftop. They are bigger building compared to Nisha country. And the landscape again is like very strong, very green to sort of like earthy look too. So, you can imagine now we have a school, but not school actually is way more diverse than school because, you know, like not school just including other things except school, right. That can be water body desert for urban or rural residential areas. 
Sometimes other critical infrastructure looks very similar to a school, including hospitals, courthouses, marketplaces, factories, malls, those kinds of things, right? So we need to be careful about what we actually introduce to the binary image classification, which is school versus not-school. We want to match the geo-diversity of the not-school class as closely as possible to the school class; in that sense we can exclude the surrounding landscape confusion and let the model focus more on the school structure itself, or the school features, as we call them. So this is the standard process: when we receive the school geolocations, we actually need a human to look at every single geolocation and match it with the satellite imagery we want to use to train the model. Through that process, the human mapper or expert mapper will only keep the schools that have very stand-out school features, as we already mentioned: playground, school buildings, school complex. From there, we create tiles, which are image chips, and convert them to TFRecords, in this context because we use a TensorFlow model. I won't go into much detail because I have another 30-minute technical presentation tomorrow. But in this case, in case you can't make it tomorrow: we trained country models specific to six countries, in Kenya, Sierra Leone, Kazakhstan, Rwanda, Niger and Honduras. But we also wanted to know whether adjacent countries combined into a regional model would actually perform better than a single-country model. In this case we tried Kenya plus Rwanda to train an East Africa regional model, and by running model inference in Kenya with those two different models, we actually found that the East Africa regional model performed better than the Kenya country model. It just showcases that when you allow the model to learn from very diverse geophysical features, it actually helps the model to separate school from not-school. To give you a good sense of how fast this kind of AI-accelerated process is: we don't really know how long it would take if you asked a group of mappers to map schools for the whole country of Kenya. But with the AI running ahead of the humans and producing model predictions, and then humans coming in to validate the output, a single mapper can actually validate, map, the whole country of Kenya within about 30 hours. So that is how fast we can go. And here is the before-and-after map: green is the school map that existed before for this country, and the yellow dots are schools predicted by the machine learning model and then validated by a human as actually being a school. So the yellow is mapped school that currently isn't even on the map yet. But again, this needs to be verified on the ground. I believe this work has already been sent to the UNICEF country office or the Ministry of Education to verify; now that we have the machine learning output schools, validated by expert mappers, whether they are actually schools on the ground is up to field data confirmation, I would say.
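A hedged sketch of the kind of binary school / not-school tile classifier described above, reading image chips from TFRecords and fine-tuning a small pretrained backbone. This is not Development Seed's actual model; the TFRecord feature keys, the chip size, the backbone choice and the file glob are all assumptions for illustration.

```python
import tensorflow as tf

IMG_SIZE = 256  # assumed chip size; the real tiles may differ

def parse_example(record):
    # Assumed TFRecord layout: encoded image bytes plus a 0/1 "school" label
    feats = tf.io.parse_single_example(record, {
        "image": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    })
    img = tf.io.decode_image(feats["image"], channels=3, expand_animations=False)
    img = tf.image.resize(img, (IMG_SIZE, IMG_SIZE)) / 255.0
    return img, tf.cast(feats["label"], tf.float32)

train_ds = (tf.data.TFRecordDataset(tf.io.gfile.glob("tiles/train-*.tfrecord"))
            .map(parse_example).shuffle(2048).batch(32).prefetch(tf.data.AUTOTUNE))

backbone = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                             input_shape=(IMG_SIZE, IMG_SIZE, 3))
model = tf.keras.Sequential([backbone,
                             tf.keras.layers.Dropout(0.3),
                             tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(), "accuracy"])
model.fit(train_ds, epochs=10)
```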
So we have Kenya and Rwanda is East Africa, and we have Sierra Leone, Nisha is West Africa, we have Kinsakeh San, Ube, Ube San in Central Asia, and now we have like Ghana in West Africa, Honduras, Ghana, you see that it's all like yellow because we just didn't have like school map in Ghana at all. So if you want to see the comparison, there's one like just a very small experiment. You can imagine now like Google publish the whole continent of open building for Africa. We can we can assume that you know school is within all these buildings, right? So if you ask human to use Google open building as a guide to looking for school in a small AOI test AOI in Nisha, we're looking at like around like 600 school that human will spend like 61 hours to begin through this area and to find those 600 school, but with AI or salary school classifier as a guide, you know, you can easily like narrow down the searching space a lot. But at the same time, that's what we're saying like AI is not perfect, which is you know, you probably going to end up like hitting like 70% of school. That's a even way of confidence like a lot of like unmet school way filter out and present can can be like a real school on the ground. But you know, like at the same time, because the model just like was trained very distinctive or very standard features. So that could possibly be a drawback as well. Like we might miss a lot of school. So I'm not going to emphasize why like line cover map is very significant, you know, like accurate, accessible on cover map, essential for conservation climate research and environmental planning. And we have a lot of like line cover map available to us already one or two really like stand out to me. Myself is one is European Space Agency CCI publish every one or two years published like 10 meter resolution line cover map like globally. And another one is just like recently come out is collaboration between E3 and Microsoft planetary computer again, they were able to map a global line cover map 10 meter resolution within a week. Right that is like really exciting. But what is different from like, you know, like why we have another line cover mapping. So this is actually a platform is, you know, like basically AI salary line cover mapping platform on the browser. What you need to do is you just like go searching curl, locking with your Google. And then there's already a few tools starter model already trained for you. There's a full class nine class line line use currently only available in US. What you after you select the model, you only need to hit run model. So your any tile you select in your AI just send back to a cluster of GPU at the back end and run the model prediction with the starter model. And it was feed out the result. So it can be as fast as this real time prediction. So you will have a line cover mapping is not only like this. You can actually, you know, like if you are not happy with these classes, you can add new classes to your line cover mapping, you can if you care more about like, you know, like just build up classes, because you're doing urban planning, you can go after you run the inference, you can go to basically like draw a new training data set and you can run the inference after retraining and run inference and get the line cover map. And until you know, you can go as many retraining session as possible until you satisfy with the result. 
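The "hitting about 70% of schools" figure above is a recall-style number. One way such a score could be computed, purely as a hedged sketch, is to match predicted school points against a validated reference set within a distance tolerance; the file names, the projected CRS and the 250 m tolerance below are assumptions, not the project's actual evaluation setup.

```python
import geopandas as gpd

TOLERANCE_M = 250  # assumed match radius between a prediction and a known school

pred = gpd.read_file("predicted_schools.geojson").to_crs(epsg=32631)   # placeholder files,
ref = gpd.read_file("validated_schools.geojson").to_crs(epsg=32631)    # metric CRS chosen for western Niger

# A reference school counts as "found" if any prediction falls within the tolerance
matched_ref = gpd.sjoin_nearest(ref, pred, how="left",
                                max_distance=TOLERANCE_M, distance_col="d")
recall = matched_ref["d"].notna().mean()

# A prediction counts as correct if it sits near some reference school
matched_pred = gpd.sjoin_nearest(pred, ref, how="left",
                                 max_distance=TOLERANCE_M, distance_col="d")
precision = matched_pred["d"].notna().mean()

print(f"recall={recall:.2f} precision={precision:.2f}")
```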
You can export it as GeoTiff or you can like keep it in the browser and next time you're locking the model and the result still going to be there for you. So that is sort of like a customized give user a freedom to basically play with the model already being trained for you. So, you know, like you can, you can, you can, you can reuse it over and over again. This is just like showcase how model can improve performance through time. If you care about like certain classes, you know, like by just like create a training dataset by yourself, if we're finding or retraining it the model, you will see the improvement through time. So that is basically my point of, you know, like one too many. One is we want this model are transportable activity, learn through times and your location. Or we want to give user freedom to customize the classes that care about one too many issue. And then these kind of like, you know, like a salary approach can improve through time with human in the loop process. So all of these are pretty critical for climate action, I would say, more active, more technical walkthrough for those two use cases. Tomorrow I'm going to talk about school mapping with UNICEF. And on Friday, my colleague, Martha and Caleb are going to talk about the line cover mapping with Microsoft. Yeah, so we're hiring come to work with us if you care as much as you know, making impacts in the space. Keep, keep, you know, connect with us through Twitter or GitHub or LinkedIn. That's it. Welcome any questions you have. Fantastic. Thank you so much. And it's great to hear that you've got two, two talks coming up to take a deeper dive so that people can continue to find out more. I would you have a few questions for you from the audience. We've got two questions about schools to start with. The first is about what the best global data set for school locations is that you are aware of. And the person thinking particularly of use in disaster response and earthquake content. Yeah, this is such a good question. Yeah, so I unfortunately, if you want to start to work with school data, the best source to go to go for now is open street maps. You can query like school data from there. We work with UNICEF. I believe they are working with, you know, Ministry of Education at a country level trying to open source the data sets but currently are not available. So open street map are the best place to start with global wise. Excellent. Thank you. And the other question about schools is actually on the side of the AI and how does the AI handle it when schools, for example, in regions where there's seasonal snow, handle these kinds of seasonality issues? Oh, that's another good question. So, you know, like for school, right, like if we are thinking about this is the kind of like building, you know, like similar to highway, it's not like moving through time. So if you think that we don't really need to have like any temporal information to train a model, which is you can actually use a base map, right, like similar to what you see from Google, Google, Google map. A lot of like commercial company like Maxar Planet, like they provide like base map. So you can use like three meter or one meter or like high resolution. I said training that is that as long as you can recognize school feature, you know, school complex from the satellite. So you just use the base layer, basically. Interesting. And we also have two offers of help along with questions. 
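The retrain-until-satisfied loop described for the land cover platform can be illustrated with a toy, self-contained simulation: a simple classifier is retrained each round on a growing labelled set, where the "corrections" stand in for the polygons a user would draw in the browser. The real platform fine-tunes a deep segmentation model on a GPU cluster, not a random forest on synthetic features, so treat this only as a schematic of the human-in-the-loop idea.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-in for per-pixel features (e.g. band values) and land-cover labels
X_pool = rng.normal(size=(5000, 4))
y_true = (X_pool[:, 0] + 0.5 * X_pool[:, 1] > 0).astype(int)   # hidden "ground truth"

labeled = rng.choice(len(X_pool), size=50, replace=False)       # tiny starter dataset
model = RandomForestClassifier(n_estimators=100).fit(X_pool[labeled], y_true[labeled])

for round_id in range(4):                                       # retraining sessions
    pred = model.predict(X_pool)
    wrong = np.flatnonzero(pred != y_true)                      # stand-in for user corrections
    if len(wrong) == 0:
        break
    corrections = rng.choice(wrong, size=min(100, len(wrong)), replace=False)
    labeled = np.union1d(labeled, corrections)
    model = RandomForestClassifier(n_estimators=100).fit(X_pool[labeled], y_true[labeled])
    print(f"round {round_id}: accuracy {(model.predict(X_pool) == y_true).mean():.3f}")
```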
The first is the first is from Ghana asking about where they could find a repository of validated data to have access to the data as he would like to offer to help with mapping the data from Ghana. Oh, that's that's awesome. I will, if you don't mind, send me an email or connect me Twitter or, you know, like other platform, I would love to like connect you with UNICEF. They are actually building a platform, basically having like yes and no platform. So feeding user a random image chips and whether the school, so you know, like by crossing sourcing this, you can also like see a lot of map. Another efforts like they're doing also is like just like gathering cloud sourcing efforts to map a certain area as through map camping. And another member of the audience is mentioning the ML training data set and that it would be nice to contribute with your school ML training dating set and the Radiant ML hub. And they're hosting open ML training data sets. Oh, that's excellent. Yes. Need to kind of like have a broader discussion with UNICEF. I think that is like sort of like under the leader. But I'm not promising because you know, it's up to UNICEF because they do have like a lot of like regulation through different countries. So. Excellent. And I have a final question of my own if I may, and that's whether you've you have any kind of integrations with OpenStreetMap that allows that allows people to make their own ground truth and contributions or data or data pointing contributions to the underlying data set. That's a good one too. I believe that's what UNICEF are trying to do. Basically, but probably like not contribute to OpenStreetMap because OpenStreet also have their own lessons where like you can't map. Basically, you can't dump the data directly to OpenStreetMap yet. That sense, you know, like UNICEF do have their own system that they have human to validate or processing the training data set and look back to training a model and then have a human validation through the system. And as long as they get, you know, passed from the country to share the data, I believe that is eventually going to end up like in the OpenSource database. But yeah, to be honest, like it's it's it's very unclear to me so far. Well, lots of offers of collaboration. So that's that's fantastic. And of course, the heart of it is something close to many people's hearts. Thank you very much. I'm going to invite our next speaker to the stage, Lavia, but we have three minutes before the session actually starts. So I'll take a moment to see if we have that we haven't had shows. I'll be joined as I can say he's following from the background. See if there's any other questions that come up. The organizers have asked that I start sharp at the top of the hour. So anybody that's joining us exactly for Flavia session does not miss a word. Flavia, would you like to get your I see that you've got your slides not up yet. Would you like to get your slides ready for us? We'll add those to the screen. Thank you very much again, Nana. Thank you.
|
Al Accelerated Human-in-the-Loop schools and land use and land cover mapping for climate actions AI isn’t perfect when it comes to learning about complex satellite imagery and real-world features. On the other hand, only relying on humans to map complex features and objects is too tedious and slow. AI accelerated human-in-the-loop methods provide new approaches to quickly create map objects and features for climate actions with scalable cloud computing power and growing EO data. At Development Seed, we’ve been proudly working with two partners, UNICEF and Microsoft Planetary Computer, to bring AI accelerated human-in-the-loop methods to the hands of policymakers, scientists, and mappers for SGD and climate actions. In this talk, we would like to present: What are AI accelerated human-in-the-loop methods for SDG and climate action? How can we leverage the scalable methods in the era of growing EO data and cloud computing? How fast and scalable we can create accurate school and LULC maps for policymakers, scientists, and mappers. Please see the abstract above Authors and Affiliations – Development Seed Track – Open data Topic – FOSS4G implementations in strategic application domains: land management, crisis/disaster response, smart cities, population mapping, climate change, ocean and marine monitoring, etc. Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57305 (DOI)
|
And for our first speaker for today, let me introduce you to her, who will bring her in in our live broadcast. So our first session today will be an introduction to the Open Access High Resolution Tropical Force Data Program by Ms. Charlotte Bishop. Charlotte is a senior project manager at KSAT with over 15 years experience in remote sensing, primarily focusing on land applications and optical satellite data. She is also the project manager for the NICFI Tropical Force Data Program, a program designed to provide open access to high resolution satellite data. Her presentation today will share how the program works, what it has achieved, and importantly, how you can start to use it. So let's all give a warm welcome to Charlotte, hello Charlotte. Welcome to live broadcast and take it away. Thank you very much Frances, thank you for the introduction. Let me share my screen. Okay. Hopefully you can see my screen. Fantastic. It's great to be here today, so good morning, good afternoon, good evening, wherever you are in the world. It's a pleasure to be here today to talk about the NICFI satellite data program that's focused on tropical forests. And I'm going to tell you a little bit more about what that is, but I want to start by giving a little bit of a preamble about why it came to be in the first place. And also some of the different ways that you can access this data using open tools and also more recently some of the cool things that we've added to the program to make it as functional and accessible to as many people as possible. So why tropical forests? Why did this program come about? So what I've added here is some statistics. So the land sequestered 30% of annual anthropogenic emissions and most of that is through forest capture. And deforestation of tropical forests is a large source of greenhouse gas emissions accounting for approximately 11% of global emissions. So really a huge amount of our emissions, greenhouse gas emissions are coming from deforestation of this precious tropical forest resource. And there really is no way of reaching sustainable development goals without halting the tropical forest deforestation and increasing forest deforestation. In 2020, just as an example, the tropics lost 4.2 million hectares of primary forests leading to carbon emissions equivalent to annual emissions of 570 million cars, which is considerably more cars than there are on the planet. So just to kind of put it into perspective really, what a big problem this is, as I know we're all very much aware of the challenges we have with climate change and the effect of greenhouse gases. But tropical forest player really quite an important role in helping support our planet and therefore of course not contributing to global emissions. So over the last few years, Earth observation has been helping researchers, decision makers understand more about the forest. And this is not a new thing. Landsat and other public data has been used for quite some time. But in 2014, Global Forest Watch launched. And I'll talk a little bit about that as we go through the programme because it's really a very important publicly available resource that offers satellite based data as well as other integration of layers from various different sources. So it's largely public information layers that are then made available. And it's really the first of its kind that provided this kind of service. 
Since then it's expanded to include near real time forest monitoring and data on the causes of forest loss and has incorporated hundreds of different contextual data sets. And Global Forest Watch has become a symbol for some transparency really and accountability in the forestry sector. It frames the conversations we have about forest and helps with the discussions we have and helps facilitate a really large number of users in terms of how they use and interact with satellite data. And this really was one of the key reasons why the Norwegian Ministry of Climate and Environment through NICFI and NICFI is Norway's International Climate and Forest Initiative funded this first ever Global Tropical Forest Programme initiative to enable users to access high resolution satellite data in a way that has never before been possible in any application area. So this really was an unparalleled offer that was being proposed by the Norwegian government and how could we enable as many users as possible by removing the barriers to access that many of them face, which is largely around costs of data, licenses that are associated with commercial data, and then of course really the understanding and knowledge and access to this type of data is very challenging. So this is where this programme came from. So through the investment from the Norwegian government, the contract was awarded to KSAT. So KSAT are the prime contract holders and just to give you a little note about KSAT, that's the only thing I'll say about who we are, KSAT is a satellite value added service provider and data distributor from a comprehensive range of optical and SAR systems. We are also the world's largest ground station provider as well. So we work very closely with a range of different satellite operators to provide them and help support their access to space and their satellite missions as they orbit our Earth. We work very closely with partners, our partners here, Planet and Airbus and also delighted to be able to work with them on this programme. So and I'll touch on what this data is as we go through, but Planet data really does encompass the majority of the offering that's available for this programme, more by virtue of the constellation that they have. And we also are really pleased to be able to include historical data from Airbus, all the way back to 2002. So a lot of the public data that has been used traditionally is 10 to 30 metre or lower spatial resolution than that for such monitoring. And now we're looking at higher spatial resolution data, five metres or better, that is available to help support this initiative. So before we go into more details about what the data is and what's being provided, I wanted to really touch on where we're providing this data. So this is not just selected areas in the tropical regions, this is the entire tropical region between 30 degrees north and south. So the image you see on your screen here, where you see an overlay of a map, that is a country or continent that is covered with this data. So it really is a vast area that's being covered with this mosaic data. So most of our layers available to the largest group of users are covering this entire area every month at a better than five metre spatial resolution. So this is about 45 million square kilometres of our global tropical forests.
And we have really what underpins this programme is the primary purpose of this data set by the Norwegian government, which is to reduce and reverse the loss of tropical forests and using that to contribute to combating climate change, conserving biodiversity and of course facilitating sustainable development and support towards our sustainable development goals. So this really is something completely unprecedented in our commercial world of remote sensing. This is very new. What was being offered here is quite unique in terms of service offering. And now we're nearly a year into the programme. I'm really excited to be able to share some of the impact that the programme has already had on those already using it. But just before we get to that, I wanted to show a comparison just to show really what we're talking about when I mentioned difference in spatial resolution and detail that people now have by virtue of this programme. Now lots of people have used Landsat data, Sentinel data to monitor the world's forests and obviously the regular cadence of both particularly Landsat and also Sentinel over various regions make this a very suitable public resource that can be used for this monitoring. And this example here from Brazil just shows an example from Landsat data at 30 metres spatial resolution with some overlays of plots of areas of interest and then the red areas are extracted areas that have come automatically as deforested areas as part of the algorithm that was run on this process. So in comparison to another image. If we look at the same area from the same timeframe from Planet data, this is the level of detail that you're able to see. So we're able to resolve the field boundaries in a way that we're not able to do with Landsat data alone and therefore get more information and be able to quantify the amount of deforestation in a different way. So that's not to say that this replaces Landsat because of course the wide area coverage of Landsat is extremely useful and will remain so and the same with Sentinel. So this provides additional validation data and also helps provide more robust modelling, helps in optimising the algorithms that are provided and run on this data to analyse land use and land classifications. So it does provide a significant advantage in terms of the level of information that can be extracted from the imagery. And I'll talk a little bit more about, because of the cadence of the Planet imagery, how we can help also reduce the amount of clouds in the imagery, which of course is a problem in tropical forest areas. But just before we get to that, I wanted to talk about the impact of the programme. So we have been live for nearly a year. In fact, the programme started just over one year ago and we have had really, it's kind of been a whirlwind of a year, I would say. We have 8,500 users signed up to use the programme. So this programme, as I mentioned, is free for anyone and that has been intentional to ensure we can enable as many people as possible to access the data and from as many different application sectors, different working sectors, so media to research, even to commercial companies as well, who are supporting programmes and NGOs and other groups in assisting with their deforestation activities. So the programme itself is covering 97 countries and we have 130 different countries registered to use the programme, so obviously organisations within those countries that are using the data.
And we have over, in fact, well over 15.1 million tiles streamed. This was from our previous quarter, we were just waiting for this quarter's update, but I fully expect that will be probably more like 25 to 30 million tile streams from Planet Explorer, which is one of the ways you can access this data. And we're collecting various other statistics such as user stories from data. We have some great ones, I will show you some examples today of how people are using the data or how people are helping facilitate other users through tools that are familiar to them. So essentially how we can enable other providers and other users the opportunity to help enable their user groups, which is a big part of this programme and how we can work together to ensure as many people know about it and are able to access the data and know how to use it and what value it has to their application. So this slide, there's quite a lot of information on here, so I'll take a little bit of time just to explain what the data is that we're talking about for this programme. So you'll see the familiar map on the right hand side, which I shared towards the start. So this is just as a reminder of the coverage of these mosaics. So when we talk about mosaics, these visual and analysis-ready mosaics, every mosaic that is provided under this programme covers this entire area. So every month from September 2020 until the programme finishes, we will have a mosaic that covers exactly the view that you see on your screen. And likewise, we have an archival mosaic set as well that runs from December 2015 until August 2020 in a bi-annual rather than monthly cadence. But again, covering this whole region. But we also provide the data products in two different forms. So we provide the mosaics both in a visual form, so similar to what you're seeing on the screen, an optimised visual display, fantastic for visualisation and visual comparison between the base maps on different dates. And that's provided as a standard red, green, blue, natural colour image at 4.77 metres per pixel. So this is the overall spatial resolution of the mosaic products that are available to the widest group of users, 4.77 metres versus the 10 metre plus that had been used traditionally through the public sources. We then also have the analysis-ready surface reflectance mosaics. So these mosaics have been optimised for scientific analysis. They've been normalised. They've been configured to work very closely with Landsat and Sentinel data so that they can be used in conjunction with those data sets. And therefore in the hope that this data could be plugged into existing workflows with limited amendment needed to suit this data. But of course provide what we hope is additional advantage and detail to the analyses that you're undertaking. This data is the same 4.77 metre spatial resolution provided in four spectral bands and with the full dynamic range. So you have full control over the feature extraction and the analysis that you might do with this data versus the visual mosaic which is more like a composite product in comparison. And I haven't touched on this but this is an important point. We have different access levels to this data and I just wanted to explain a little bit about what we mean by that and how the access varies for the different groups of users. So for those accessing the data at level zero they likely don't realise that they're doing so. So level zero is our most open. There is no licence.
It is a view only data that you will find in Global Forest Watch or MapBiomas, also through the UN FAO SEPAL or Collect Earth Online tools. If you're just viewing the data it is in a level zero mode in that case and that is just the visual mosaic product. And as I mentioned, both the visual mosaic product and the analysis ready are provided at the same time for that whole area. So in every case you will have the option of either of those. You don't have to make a choice as part of signing up to the programme. For the level one this is our core level for users. So where I mentioned we have eight and a half thousand users they are at this level one level. So the data products I just discussed they are the level one products that are available. So that gives you both of the visual and the analysis ready. This is a non-commercial licence but we don't want to preclude commercial companies from using the data. So there are some specific clauses. We have put some examples in our documentation that show how commercial companies can still make benefit from using this satellite data and particularly in support of companies or NGOs or other groups that are working towards the pursuit of deforestation goals. And with that data at that level, and I'll mention towards the end where you can sign up to have that access, you will have the ability to download, stream and make your own derived products. So that is, and this is all free. I will reiterate that is completely free. You can sign up at our website to have that access and explore the data and the various other integrations that go with it. And I've added some examples on the right hand side of some of the different types of organisations that are already accessing the data. So we have media groups there. We have research. We have private companies and governments in different countries as well who are accessing the data. So there really is a very broad range and we don't want to limit anyone's access. So the whole point of this programme is to enable as many users as possible. So if you don't see yourself represented or you're not quite sure, I can also provide more information later in the presentation which will show where you can ask some more questions about this. Then we have a level two which I'm not going to focus on today and that's because this is a very limited number of users. We are talking tens of users versus the thousands that we see in level one and of course the public access at level zero. So this is different. This is a ministry led assignment of level two access which provides access to the Planet underlying images as well as selected Airbus archive. So that's where the Airbus archive comes in as part of the level two. So that is much more limited, I'm afraid. So the focus really of much of our outreach is on level one because that is our most used level of this programme. And we hope will facilitate really the majority of users to do what they would like to do with it. We also have some outreach partners. So I will mention later we have a very exciting RFP with Microsoft at the moment. I'm also delighted to share the recent update about the availability of the level one data in Google Earth Engine. And we've also been working with Mapbox and Esri and many others as well. So just to give you a flavour of the breadth of this programme. And just kind of at the second part I guess of my presentation I wanted to just focus on some of the user stories that we have have already seen come through.
So I've just got three or four just to show some very small examples of how people are already using the data. This was in fact one of our very first public case examples that we've received. This was from the Amazon Conservation Group where they were using the tools within Global Forest Watch to look at changes over this area in the Chirabakite National Park in Peru and using the monthly data to help quantify and map the extent of deforestation. So the image you see on the top is from October 2020 and the image you see on the bottom is from November 2020. And you can see a clear area of forest that has been cleared during that time. So this data allowed them to monitor the different areas of conservation that are of interest to them and their group and be able to more accurately identify where some of that deforestation is happening and to what level in comparison with the other tools within Global Forest Watch such as the GLAD Alerts. As we move on we have a really nice example from the Central African Forest Initiative. So six images on the right hand side, sorry the dates have slipped a little bit but it runs from 2018 up to 2020. So using the biannual analysis and then to the monthly data. And the CAFI group is bringing together six of the Congo Basin countries to work on a forest initiative and a forest activity that helps them understand more about the forest and also helps them find the tools to monitor and make changes to how they manage the forest. So they're using the satellite data to detect and classify these changes and what's causing these changes to allow them to then review those and take action where is necessary. We then have also been working with platform providers, so providers, different companies that have their own platforms that serve their own user groups who wanted to include the NICFI satellite data within this. Within the bounds of the license this is absolutely encouraged. If you have a platform there are ways that this can be done and we're very happy to talk about that with you. So SkyTruth is one of those examples where they have added the NICFI satellite data into their data stream along with the other public data sources that they use such as Planet and oh I'm sorry about the typo on that slide. Such as Sentinel and Landsat and their alerts tool which they have allows users to easily compare two different images and map those changes. They can be two of the same type of images. They can combine the Sentinel with the Planet for example and help to validate those changes and also integrate with other data. This is really useful for people who are used to using certain platforms and for them it's quite cumbersome to download data or do analysis in another tool that is less familiar to them. And finally an example from Collect Earth Online and through the Geo-Dash tool that they have already that's been running for some time. The Geo-Dash allows users to confirm and verify degradation and what they do is use the Landsat and Sentinel archives and add spectral indices to look at the different changes along with the imagery that is used to make those indices and the time series that's generated. So what the NICFI level 1 data is used for in this case is to help validate the changes observed in this lower spatial resolution data. So it's adding some extra value to the resource that Geo-Dash provides. And then just a short note on different ways that people can access this data. I'll put on the final slide where you go to sign up.
So it's planet.com slash NICFI is where you sign up for the program. On the left hand side is Planet's base maps viewer which includes all of the NICFI data sets, you can select whether it's visual or analytic and you can download the data at level one to your own platform. We also have streaming possible via Python and API integration of course. And through Planet's Explorer plugin in QGIS and ArcGIS you can also have the same access to the level one data in those tools as well. Two weeks ago we also finally launched, we've been very excited about this for some time, the Google Earth Engine integration. So you will now find all of the mosaics hosted within Google Earth Engine. So anyone who is a level one user who also has a Google Earth Engine account, we can link those two together and you will have access to the data within Google Earth Engine, which we know for many users who are processing large volumes of data is really valuable. So we really look forward, this only launched two weeks ago so we're really looking forward to seeing the benefit that this integration provides. And as we're talking about integration and just before wrapping up, we're really pleased to be able to launch as part of FOSS4G this week a request for proposals in collaboration with GEO and also with Microsoft that will provide NICFI data through Microsoft's Planetary Computer. So the request for proposals opened on Monday, it runs for two months and you can find more information at the link on the screen at earthobservations.org and the winners of that program will receive funding from Microsoft, in this case storage of the data and various technical support from Microsoft as well. You can find some more specific details about it but from a NICFI project team perspective it's great to be able to have this NICFI data also available within Planetary Computer and support the great work that Microsoft are doing and the outreach that they're doing with that tool. So I wanted to end on a final slide that shows where you can sign up, a reminder of where you can sign up. We also have a range of user resources which you can actually find also linked from the sign up page. Those resources are available in five different languages including English, Spanish and Portuguese. We also have various other tools depending on the level of knowledge you have of satellite data to help you and also help you understand the processing that's been applied to the satellite data. And importantly we also have a 24-7 help desk so if there are any questions that you have about how to use the satellite data for the NICFI program or where to go or any questions on how to use it, general information about the program, we have a help desk there who are really happy to help with any questions that you have. So I do urge you to reach out if you're having any problems, we'd be really glad to help. And I think with that I will say thank you very much. Yeah, thank you so much Charlotte for this really great presentation about this program which we've heard so many people in the Earth observation community say that this is really a game changer in terms of providing the availability of this temporal and spatial resolution of data. And just in the interest of time we do have to transition over to Brian in a few minutes but in the two minutes that we have left we have three questions for you. The first question was:
Could you provide a little bit more details in terms of the Airbus data that you mentioned that will go back to 2002 and then related to that one. There was another question I believe from Professor Wu from University of Tennessee. It was about if you could also tell us a little bit more about the duration of the program and then lastly somebody had a technical question about whether or not the mosaics cover just forested areas or really it's all of that domain that you showed. So take it away. Great questions. Okay I'll try and remember. I'll start with the last one because that's a fresh one in my mind. So yes basically the image that I showed all of the countries in the image that I showed where you could see there was an image they are all covered. So predominantly that is all tropical forest countries. Of course some countries that are within 30 degrees north and south you may have seen that are greyed out for example Australia. Because that's and the reason for that is that the premise behind the program has been to really focus on those less economically developed countries where this type of data would be of a greater benefit than perhaps in other countries which also in the case of Australia and other regions have large areas of desert as well as forest. So trying to focus more directly on the tropical forest areas. In terms of duration of the program that's my fault I should have mentioned it. It is the initial length of the program is for two years but with the likely extension for four years in total. So we are expecting the end of the program to be 2024. And the last question was about Airbus data. So yes so this is selected Airbus data is made available to the level two users and that is spot five data back to 2002 and also selected spot six and seven images at the multi-spectral resolution so the six meter resolution between 2012 and 2015. So we're really filling the gap between spot five and the planet scope availability. Thank you so much Charlotte. Again you know really great presentation and we encourage everybody attending force 4G to you know to see the links that Charlotte provided and in the interest of time I will pass the mic back to Francis. So thank you so much. Thank you Charlotte and Emile.
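For readers who want to try the "Python and API integration" route mentioned in the talk, here is a minimal sketch that lists the NICFI monthly mosaics and fetches the download links for quads intersecting a small bounding box. This is not official Planet code: it assumes the Basemaps API endpoint (api.planet.com/basemaps/v1), an API key obtained through the NICFI sign-up, and response fields as documented by Planet at the time of writing; the name filter and bounding box are illustrative placeholders.

```python
# Minimal sketch: list NICFI basemap mosaics and fetch quads for a bounding box.
# Assumes the Planet Basemaps API and an API key from the NICFI sign-up;
# endpoint and field names follow Planet's public docs and are illustrative.
import os
import requests

API_KEY = os.environ["PL_API_KEY"]          # your NICFI/Planet API key
BASE = "https://api.planet.com/basemaps/v1/mosaics"
session = requests.Session()
session.auth = (API_KEY, "")                # HTTP basic auth, key as username

# 1) Find the monthly analytic (surface reflectance) mosaics by name filter.
resp = session.get(BASE, params={"name__contains": "planet_medres_normalized_analytic"})
resp.raise_for_status()
mosaics = resp.json().get("mosaics", [])
print(f"Found {len(mosaics)} mosaics")

# 2) For one mosaic, list the quads intersecting a small AOI (lon/lat bbox).
if mosaics:
    mosaic_id = mosaics[0]["id"]
    bbox = "115.1,-8.6,115.3,-8.4"          # illustrative bounding box
    quads = session.get(f"{BASE}/{mosaic_id}/quads",
                        params={"bbox": bbox, "minimal": "true"})
    quads.raise_for_status()
    for item in quads.json().get("items", []):
        url = item.get("_links", {}).get("download")
        print(item["id"], url)              # download URLs for GeoTIFF quads
```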
|
Access to high resolution data to support sustainable development activities, particularly for conservation and deforestation has often been limited by barriers of cost and licensing. Yet the benefit of higher resolution data provides opportunities for improved reporting, monitoring changes or high cadence updates not afforded by public sources alone. This was one of the reasons the Norwegian Ministry of Climate and Environment through NICFI funded the Global tropical forest program initiative to, for the first time ever, enable users to access high resolution data without these usual barriers. The program focuses on the purpose of reducing and reversing the loss of tropical forests and is designed to be as broad as possible to ensure it is useful for as many groups as possible. This presentation will introduce the program, the datasets and the various open tools that can be used to explore the data through case studies and applications. Authors and Affiliations – Charlotte Bishop, KSAT Tara O'Shea, Planet Track – Transition to FOSS4G Topic – FOSS4G implementations in strategic application domains: land management, crisis/disaster response, smart cities, population mapping, climate change, ocean and marine monitoring, etc. Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57306 (DOI)
|
Hello everyone, welcome to FOSS4G. Following we will be having another talk by Ahi Saipaia and John Duncan titled an open source spatial workflow to map the vast landscapes in Pacific Island countries. I think that we are fine to start now so I will put up your slides. Go. Hi everyone, so I'm going to be talking today with my colleague Ahi who's based at the Ministry of Agriculture in the government of Tonga and I'm a GIS researcher at the University of Western Australia and we're going to be speaking about some work we've been doing through a project called Livelihoods and Landscapes which is focusing on developing and using geospatial applications within the context of managing Pacific Island agricultural landscapes under sort of a changing climate and with a focus on sustainability. This work is very much an applied research project where we're developing workflows, apps and tools to map agricultural landscapes and to be able to collect quite detailed information about farm management and farm condition and with a heavy emphasis on using open source software throughout this workflow. The focus is on being able to collect very detailed information in small scale agroforestry and mixed cropping systems that are prevalent across Pacific Island countries. And it's typical in these landscapes that the environmental resources and the various farms that are being operated that support livelihoods are distributed across landscapes in a mix of spatial patterns. And so having tools that allow you to capture this spatial detail and landscape use is really important to be able to guide effective landscape and farm management. So a quick overview for some of the context of this work. The key goal here is for the data collection workflows that we're using and developing to enhance existing stakeholder activities and align with their requirements. And in particular this is the work of the Ministry of Agriculture, Food and Forests in Tonga. And we've been using an agile and iterative software development process to design the workflow and to sort of align technical and operational requirements of it. And the aim of this method is to develop data collection and analysis tools that best fit the sort of geographic context where they're deployed, but also meet the needs of users. So I'm just going to hand over to my colleague Ahi now who's going to introduce Tonga's agricultural setting where most of this work has taken place. Thank you, John. Firstly, Mālō e lelei. For those of you who haven't heard of Tonga, it is one of the smallest island countries located in the South Pacific. Now in Tonga, our work was based in one of the island groups called Vava'u. Traditional farming in Tonga consists mostly of root crops such as yam, taro, kumara, etc. So the major economic activity in Tonga is farming and fishing. Also our major export is root crops. Our agriculture crops are more of semi-subsistent and the farm management is not the same as other regions. And that is where we in the Ministry of Agriculture use the data, to be able to get information from the plantations and to share the output with the communities. And to the next slide. Our old method for any survey in the Ministry was more unreliable and based on estimation. They mainly provided hard copy paper. But when mobile GIS was introduced to us, it really helped the Ministry gain more of an advantage, showing reliable and more accurate data which we can store and retrieve from our database collection.
And I'm going to take two examples from this; these were the surveys that we did last year and later this year. The first is the vanilla survey. Vanilla is an important commercial crop in Vava'u. The purpose of the survey was to map the extent of vanilla plantations, from which we obtain estimates of the number of vanilla plants and the area under cultivation, using the mobile GIS. And the second example is the land utilization survey. This was a government funded project for farmers where MAFF provides fuel for farmers. And we checked the location where the farmers plowed the land. This data is important for the government to see the extent of farming that has been done in the short period of time. Okay, so I'm just going to quickly bring it back to the technical side of this work and talk about how we went about designing the data collection and farm mapping workflows that have been used in Tonga. And we've used a collaborative software development methodology through this process called ICT4D which stands for Information and Communications Technology for Development. And so right at the start of this project we conducted a needs assessment and this really consisted of lots of focus groups and interviews bringing together GIS folk, geospatial developers and landscape users such as farmers, private commercial enterprises, government officials. And the goal was to identify unmet needs for geospatial data and GIS applications within the context of managing landscapes in Pacific Island countries and in particular in Tonga. And from this information we identified that crop type maps, crop survey tools and applications and land cover and land use data and classifiers were high priority needs that were unmet and were also feasible to develop within the project constraints and the skill sets of the people working on the project. So having identified the needs we set out a series of requirements analysis tasks to pin down the functional and non-functional requirements of a software system that could be deployed in these agricultural landscapes and would also contribute to meeting the unmet needs that we'd identified and prioritised. And this requirements analysis was an iterative process that combined focus groups, use case modelling and discussing various user narratives and user stories and there's an example of one of those on the right hand side of these slides here. And these are still quite broad but helped us sort of identify what activities were going on in terms of using data and collecting data within the landscapes to avoid duplication and to identify activities that we can possibly enhance through the use of geospatial data collection or from actually using geospatial data to inform these activities. And then from that we set about building a range of prototype apps and then testing these out on farms and in various mock data analysis tasks and there's an example of one of these apps here just looking at exploring forest cover data. And then this process was very iterative so we refined it and repeated it at various stages. And so coming up to the present in terms of developing the sets of tools and a workflow for mapping farms. Once we'd settled on the range of functional and non-functional requirements to put together a system to implement farm mapping in Tonga's agricultural and agroforestry systems that also fits with the needs of the Ministry of Agriculture's data collection tasks, we came up with something that looks like what's on this slide here.
And this is very much a high-level review of the tools and the software that comprise the farm mapping workflow that we've been using. And it's a mix of applications that already exist, like QField, that we've identified worked well in this context through some field testing. And then when there wasn't an off-the-shelf application that met our needs, we've developed software to meet these specific tasks. So this workflow starts with using QField with data collectors out in the field mapping farms. And then there's a FastAPI app that handles data syncing, quality checking and automates the processing of key variables and data layers. And this app also manages our data storage in Google Cloud. And then finally we've got a range of dashboards and browser based GIS and visualization tools that can be used for quick analysis of the data that's stored in the cloud and can be fed into reporting and decision making. So I'm going to hand back to Ahi who's going to talk about some of the work that she's been doing using QField. Thank you, John. To continue on. So for us here in Tonga, when we're working with the survey using the mobile GIS, which is QField, basically my team managed to collect the data by plotting every edge in the plantation in order to get the layout of the plot and save it. And when you save it, automatically the form is in there. So we just have to insert the varieties of every crop and all of the information on the particular plots, which will be saved in the forms in QField. Next slide. The fact is that this app is new to us and it's also useful. Vanilla is highly prized. These two examples show the output of our data, the results that we get from using this technology. The fact that it is very new to us and it's useful and it also brings out more accurate data for us to present to the communities and also to the government for creating reports in the ministry. We can find out how many vanilla plantations are managed, how many vanilla are newly planted. We estimate the volume of vanilla for harvesting. These are examples of what we are doing with QField. And also the crop survey. So basically we survey every year. Once a year we do a survey on the vanilla and the crop survey, which covers the overall crops here in Tonga. These surveys were all methods based on estimation, and we are moving now to using QField, which is very useful. We get to estimate the layers of the mapping. For example, we just have to identify the location in it. Not like before, when we just had to guess what is the tax allotment for this area. But in QField we automatically get the locations and all of that. And it's also important for us to have the data in our database, especially when a cyclone comes, so we can assess the extent of the remaining crops and plan for it. Okay back to you John. Thanks Ahi. So I guess with QField, it supports a range of different field mapping tasks as well. So there's the example of the vanilla plantation survey you can see on the left which is a very focused field data collection task targeting one crop and a small number of commercial plantations and then at the same time it allows for this kind of widespread data collection where across an island group or several island groups you can create a sort of wall-to-wall map of several thousand farms. So I'm going to speak a little bit about what happens after all that data is being collected in the field.
And we often found there were situations where you have several data collectors and they'd have their data on their mobile devices and tablets and phones and they'd often be out collecting new data on multiple days throughout the duration of a project or for a particular data collection task. So there's a need to find a way to keep a record of each completed survey and also to be able to sync together multiple surveys from different data collectors and devices and perform a suite of data quality checks before the data is used by the Ministry of Agriculture to make decisions. So to support this kind of data processing and data management we've developed a FastAPI application, so it's written in Python, which is what FastAPI uses, and that sits on top of Google Cloud Storage and it provides a web form for data collectors to submit their completed surveys and then download clean forms to their devices. This operates in a slightly different mode to two-way syncing where you have multiple data collectors all pushing and pulling from the same GIS projects and database in the cloud and they will keep the same copy of the data on their devices that's synced up to a central copy. The goal with this app was to operate in a kind of census mode and minimize the amount of data that was kept on devices and for data collectors to use a clean project and form for each survey that was undertaken. So really for each farm that was mapped you start with a clean project and clean form. The app manages records of each submission and so it automates the data syncing and does some quality checking and also automates the computation of derived variables and key spatial layers. So it publishes the latest cut of the survey data which can be accessed via the Ministry of Agriculture's website on the dashboard and that's what this image on the right is showing. It's just showing the latest cut of the crop survey data for the islands of Vava'u and showing you the number of kava crop plants in each field. It also provides some data layers that allow for a high resolution and detailed characterization of the agroforestry systems as well. So you can zoom in on each field on an island and look at what mix of crops are grown in that field, what area within the field is allocated to each crop, then how many plants of each crop there are in that field. And then it also provides an API for data cleaning so admin users can go in and tidy up and clean up data that's coming in from the field. So the final part of our workflow and the apps we've been developing is a suite of dashboard tools using R Shiny, Leaflet and data tables and a few other components. And the goal here was to put together a set of easy to use tools that allows non GIS experts to access the data that's being collected in the field and combine this field data with other GIS spatial layers, so census layers, administrative boundaries, other key data layers that are being used within the ministry, and perform a suite of GIS operations within a web browser. So it has four main functionalities, this dashboard app. The first is an admin mode, which you can sort of see a screen grab of in the top right, which allows a user to edit geometries and the attribute data associated with each geometry, and then sync these changes back to the Google Cloud Storage. And then tools for analyzing tabular data, so spatial and non spatial joins, creating summary tables and possibly creating new columns and layers with custom functions.
And then some tools to style your own web map and interrogate the attributes of features. So you can go in and get different views of agroforestry systems to see the particular sort of problem that you're focusing on. And then similarly some chart building tools, as you can see in the bottom of the bottom image there. And the primary focus of this app was to make the GIS analysis and visualization as easy as possible and to get the detailed insights out of the data being collected using QField and into sort of reporting, decision making and landscape monitoring contexts within the ministry. I'll spend a second or two providing a couple of examples of some of the analysis and insights that's being gleaned from these tools. So as we said, we've been able to generate a spatially detailed view of cropping systems and their arrangement across the landscape. So looking at what mix of crops are in what fields and then using this data as a precursor to lots of interesting questions around sustainability, nutritional diversity and land cover changes. This data also provides quite an accurate and detailed baseline that can be used to track changes through time. So even in the time frame that we've been collecting data, we've seen rapid shifts in land allocated to kava cultivation in the past five years in some villages. And this is quite consequential for things such as reducing crop diversity and reducing the number of food crops in the ground, as kava is not actually a food crop. And so this has implications for food security and food supply chains. And then it also has environmental implications as kava is a destructive crop in terms of how it's harvested, where you pull the whole plants out of the ground, exposing the top soil to erosion as you're really after the roots. And there's a picture of a kava plant in the center of the screen there that you'd be pulling out of the ground completely when you're harvesting. And then the map on the left shows fields that have been sort of newly converted to kava cropping systems in the last couple of years in that village on Vava'u. And then also the data that's been collected and some of the software tools has relevance to sort of commercial agriculture as well. So at a point in time, the Ministry of Agriculture can look on a web map to identify the number of plants of key commercial crops such as pineapples or vanilla and watermelons and identify where those crops are planted within the landscape. Okay, so to finish quickly, I'm going to pass back to Ahi who's going to talk about some work that she's been doing taking this data and information and discussing it with farmers. Well, just a short update. The farmers are happy to know that they can check the actual boundaries of the land using this technology from our survey. We found out that some of the farmers are using their neighbours' land due to the lack of a physical boundary or marker. So they are happy to know where the other plantations are and how many crops are out there. So thank you. John, back to you. So that's it from us. Just like to say thank you to everyone who's listening and also to say thanks to ACIAR who funded this research and the numerous people along the way who contributed their time and effort in helping us develop these tools and contributed to the data collection campaigns. Thank you very much for your talk. It was amazing and very interesting. Now we have some time for questions but it seems like we don't have any. So I don't know.
Maybe you want to add something to the talk? We can finish now if you like. I think I covered everything that I'd wanted to say. I would say that for all of the software applications that we put together, they're published openly on GitHub and there are links to the repos on our slides. So there's lots of documentation in there and tutorials on how to use these tools as well. If you're interested in following up or using any of these applications, then that'll be a good place to head. All right. Where can they contact you in case they are interested? Pardon? Sorry? Where can they contact you in case they are interested, or they want to, well, they can get into the repositories of course. Yeah. I think the best place to contact me would be... So you can get my contact details from the university profile. So if you Google me at the University of Western Australia, that'd be a good place to go. Or head to the GitHub repos and leave a comment there. All right. Perfect. Well, thank you, John, Ahi, for your talk and we will be seeing you around at FOSS4G. See you soon. Thank you. Bye-bye. Bye.
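To make the "sync and store" step of the workflow more concrete, here is a minimal sketch of a survey-upload endpoint in the style the talk describes: a FastAPI app that accepts a completed GeoJSON survey, runs a basic quality check, and writes it to Google Cloud Storage in "census mode" (one object per submission). This is not the project's actual code (their repositories are on GitHub); the bucket name, attribute names, and quality checks are illustrative assumptions.

```python
# Minimal sketch of a survey sync endpoint: FastAPI + Google Cloud Storage.
# Not the project's actual application; bucket and field names are illustrative.
import json
from datetime import datetime, timezone

from fastapi import FastAPI, File, HTTPException, UploadFile
from google.cloud import storage

app = FastAPI()
BUCKET = "example-farm-surveys"                 # assumed bucket name

def basic_quality_check(geojson: dict) -> None:
    """Reject obviously broken submissions (illustrative checks only)."""
    if geojson.get("type") != "FeatureCollection":
        raise ValueError("expected a GeoJSON FeatureCollection")
    for feat in geojson.get("features", []):
        if not feat.get("geometry"):
            raise ValueError("feature with empty geometry")
        if "crop" not in feat.get("properties", {}):    # assumed attribute
            raise ValueError("feature missing 'crop' attribute")

@app.post("/surveys/upload")
async def upload_survey(collector: str, file: UploadFile = File(...)):
    raw = await file.read()
    try:
        survey = json.loads(raw)
        basic_quality_check(survey)
    except ValueError as err:
        raise HTTPException(status_code=422, detail=str(err))

    # Keep a record of each completed survey: one timestamped object per upload.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    blob_name = f"raw/{collector}/{stamp}_{file.filename}"
    storage.Client().bucket(BUCKET).blob(blob_name).upload_from_string(
        raw, content_type="application/geo+json")
    return {"stored_as": blob_name, "features": len(survey["features"])}
```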
|
In Pacific Island Countries, the environmental resources that support livelihoods are distributed across landscapes in a mix of spatial patterns. Capturing the spatial detail of landscape use is important to inform landscape management that is sensitive to these livelihood dependencies. Using information and communication technologies for development (ICT4D) and agile software development processes, a workflow was developed that comprises open-source geospatial software to map and monitor agricultural landscapes. This workflow was co-developed with the Vava’u branch of the Ministry of Agriculture, Food, Forests, and Fisheries (MAFF) of the Government of Tonga. The workflow consists of mobile GIS to map farms, web-applications to synchronise and store data, and spatial dashboards for data visualisation and analysis. Mobile geospatial data collection uses QField for intra-farm mapping of cropping practices and digital forms to record farm management attributes. A web application has been developed using Express and Python to support data syncing, automatically generating datasets for reporting on cropping practices and landscape conditions, and for secure data storage. A spatial dashboard, built using Shiny and Leaflet, allows non-GIS experts to easily query and visualise landscape data collected in the field and to use this data in landscape decision making. This workflow has been used by MAFF for an array of data collection and mapping campaigns. Example uses include: mapping the location of vanilla plantations under sub-optimal management condition; identifying where land was under-utilised or left fallow by farmer groups to spatially target fuel and cash resources to increase land under cultivation; and annual crop monitoring to generate island-wide coverage of intra-farm cropping practices to serve as baseline data to track agricultural change through time. This talk will discuss the software development process including: the needs assessment to identify and prioritise unmet needs for geospatial data and applications; requirements identification and analysis using use case modelling and rapid prototype development and testing; and refinement and deployment of the workflow for agricultural landscape monitoring on the island group of Vava’u. This talk will also elaborate on the implementations of the workflow, highlight lessons learnt through the development process, and highlight areas for future work and expansion. Authors and Affiliations – Duncan, John (1) Davies, Kevin (2) Saipaia, Ahi (3, 4) Varea, Renata (4) Vainikolo, Leody (3) Boruff, Bryan (1) Bruce, Eleanor (2) Wales, Nathan (3) (1) UWA School of Agriculture and Environment, The University of Western Australia, Australia (2) School of Geosciences, The University of Sydney, Sydney (3) Ministry of Agriculture, Food, Forests, and Fisheries, The Government of Tonga, The Kingdom of Tonga (4) Geography, Earth Science and Environment, The University of the South Pacific, Fiji Track – Use cases & applications Topic – FOSS4G implementations in strategic application domains: land management, crisis/disaster response, smart cities, population mapping, climate change, ocean and marine monitoring, etc. Level – 1 - Principiants. No required specific knowledge is needed. Language of the Presentation – English
|
10.5446/57601 (DOI)
|
The central point of my talk is the importance of the functions which we use in our models. And the phenomenon is called structural sensitivity. And the time is 25 minutes, which is not enough to cover everything in depth, but I'll mention just the very crucial points. Introduction. Let's consider a chemostat model. In the chemostat model, N is a nutrient, P is phytoplankton or primary producer, and Z is the consumer. And this model is well known. And we need to insert into this model the functional response of the predator. And definitely we have some data, but the problem is we can fit different functional responses to data. Some of them are slightly better, but overall I would say equally good. And as I mentioned we use three different formulations. But the problem is when we use different functional forms, so in terms of dynamics we may have a stationary state, we can have a small limit cycle, or we can have a big limit cycle. And the problem is, you can say if you are close to a bifurcation point definitely, there's not anything surprising. But this kind of phenomenon happens across a wide range of parameters. This is the problem. And we call this structural sensitivity. And I want just to mention some recent, or mainly not that recent papers, where structural sensitivity was reported. And it's not just in a single model, so a lot of papers. And definitely this is a very important phenomenon. You can find structural sensitivity across ecological models. And the idea was to make everything formal, to quantify. This is not just taking different functions and comparing them, but to be rigorous. So a definition of structural sensitivity, but I think in the morning, probably especially after the breakfast, it's hard to digest all this mathematics, so in simple terms. If we consider some reference model, and we compare this reference model with some close models. And if the dynamics is substantially different, it can be topologically different, or we may have a small limit cycle for one model, and quite a large limit cycle for another one. So in this case we say that the model is sensitive, structurally sensitive. But please do not confuse structural sensitivity with structural stability. As Alan mentioned in his previous talk, the famous Lotka-Volterra model is structurally unstable. So this is an extension of structural stability. The system may be structurally stable, but still sensitive. So what's the difference? We consider deviations, small deviations from a certain function, from a certain model, small but finite. Because in biology we can't make our differences as small as possible. So this is already quite good accuracy I would say. And since we have our definition, the question is how to check our models, whether or not they're sensitive. Of course what we can do, you can consider, say you can take one function, and you can take another function, third function, many functions. And it may happen that, well, you don't have sensitivity, but definitely we can't check every single function on Earth. And even more so, their number is not countable, because they belong to a functional space. And our idea, with Matthew Adamson, who is somewhere in here, was to project the functional space, which is infinite dimensional, into some space with finite dimensions. For instance, if we're interested in the stability of the equilibrium. So in this case, we can consider our stationary state as a parameter, and all we need is to find the Jacobian, based on linearization.
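To make the projection idea concrete, here is the kind of calculation being described, written out for a generic predator–prey model with logistic prey growth and an unspecified functional response f. This is a sketch for illustration only; the talk's actual models and notation may differ.

```latex
% Generic predator-prey model with logistic growth and functional response f
\frac{dN}{dt} = rN\left(1-\frac{N}{K}\right) - f(N)\,P, \qquad
\frac{dP}{dt} = e\,f(N)\,P - mP .

% Nearly indistinguishable parameterisations of f that can be fitted to the
% same data (Holling type II, Ivlev, hyperbolic tangent):
f(N)=\frac{aN}{1+ahN}, \qquad
f(N)=\alpha\left(1-e^{-\beta N}\right), \qquad
f(N)=\gamma\tanh(\delta N).

% At the coexistence equilibrium, e\,f(N^*) = m and
% P^* = rN^*(1-N^*/K)/f(N^*); the Jacobian depends on f only through the
% local values f(N^*) and f'(N^*):
J(N^*,P^*)=\begin{pmatrix}
r\left(1-\dfrac{2N^*}{K}\right)-f'(N^*)\,P^* & -\,m/e \\[4pt]
e\,f'(N^*)\,P^* & 0
\end{pmatrix}.
```

Since the determinant is positive whenever f is increasing, local stability comes down to the sign of the trace, i.e. to the pair (N*, f'(N*)): this is the finite-dimensional space onto which the infinite-dimensional set of admissible functions is projected.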
So we can definitely find out whether or not this equilibrium is stable or unstable. But still, this problem is non-local, because we can't consider an arbitrary domain in this space. So it can be more than a two dimensional space. So we need to take into account the constraints. So the model still should be close to the original one. And just some examples, I'm not sure whether or not I have enough time to go into details, but let's consider a very famous, very simple predator prey model with the logistic growth and the functional response. And what we do first is find the equilibrium points. And then we consider different functional responses, but each functional response, say the blue one, is I guess, a Holling type. So no, it's hyperbolic, sorry, yeah, it's hyperbolic tangent. So the idea is to consider our base function as a reference, and we include all possible functions. And also we impose some constraints on derivatives. For instance, our function should pass through zero, our function should be always increasing, and we have constraints on the second derivative. So for instance, we can also do it using data. And the question is, if we take a certain point as an equilibrium point, as a parameter, we don't know this point. And we consider the slope. So this slope actually is included in the equation for the Jacobian. And the question is whether or not a certain function exists such that it passes through this point with a given slope. And if this function satisfies some constraints which we impose, if such a function exists, in this case, we shouldn't neglect this function because it's within the acceptable functions. And then what we do next for different base functions, for instance, just for this one, is we consider the space of the density for prey and the derivative. So in this case, we have stable and unstable dynamics. So this is stable and this is unstable. And this is the whole bifurcation curve. And what's the blue region? You can see we have a lot of uncertainty. So it can be either stable or unstable. And this blue region corresponds actually to the parametric sensitivity analysis. If we would just do our analysis of sensitivity by changing those parameters but not changing the functions, we would be within this small region. And this is more like the blinkered horse. So I think it was Matthew's idea to include this picture. So we think that, OK, so there's no sensitivity in here. But we have a lot of sensitivity. And we consider many, many models and we found sensitivity. We check the sensitivity. But actually, it's not enough just to say, oh, we have some sensitivity. We need to quantify the sensitivity. And it was the very important step forward. So we introduce the degree of sensitivity, saying that if this curve, the bifurcation curve, passes through from the middle, we have the maximum sensitivity. And we describe this as the degree of uncertainty. And we say we have some threshold. And uncertainty is maximum when it's 1. And for different models, for the same model, so this is the carrying capacity or any other parameter. So you can see that it's not just a single bifurcation value. So we have really a lot of uncertainty. And say this is 5% of uncertainty. And this method actually allows us to consider different, well, we will not miss any single function by proceeding this way. And how much time do I have? Maybe too fast. OK, good, good. Another example in terms of ideas. So let's consider another predator-prey model. And this time the functional response is new.
This time it is the so-called ratio-dependent functional response. And the model is already in non-dimensionalized form. And in this case the growth is logistic, and this model is quite well known. And a lot of papers actually have been published regarding this particular model. And those papers considered in all details the dynamics of this model, in particular, if say we change the parameter nu, so it doesn't matter which parameter. Well, you can go back, but a certain parameter. So we have a cascade of bifurcations. So we have from a stable stationary, locally stable stationary state, we have a limit cycle, which is stable, and then we have extinction. But the question is, and the stability is lost by a so-called supercritical Hopf bifurcation, as it's shown in here. But the question is, although all those respected people, they published nice papers and they considered this model in every detail, is it that important that we have the logistic growth? Of course you can say, OK, if we add something like that, everything will be different. No, I'm not talking about these kind of things, these drastic things. So still, I'll show you, yeah, in a minute, an example of a dataset. So this is per capita growth rate. And you can see still, well, this is for the logistic growth, but still we can fit some curves also quite well. And why should we disregard those curves? And the question that we answered was whether or not in this system we can have another bifurcation, a subcritical Hopf bifurcation, where, after we lose our stability, we don't have any limit cycle. And the answer was actually yes. And probably I'll just skip all technicalities. But the question, well, the answer to the question, sorry, the answer to the question whether or not we can have a subcritical Hopf bifurcation in the same system with a growth which is similar to logistic growth, yes. So we consider different, well, we use different techniques. And the question is, the idea was completely, I would say, different from any bifurcation analysis. So the question was can we estimate the probability of having a particular type of bifurcation portrait? So not just a particular bifurcation, but this kind of bifurcation portrait. Can we estimate the probability based on some constraints on data? And we answered positively, we used, well, considered some conditional probabilities. Again, I'll skip details, if you are interested, we can talk later on. And we computed the probability to have a subcritical Hopf bifurcation, but if, for some personal reason, you are not interested in subcritical Hopf bifurcations, you may be interested in, say, some other bifurcations, like a Bogdanov-Takens bifurcation, it doesn't really matter, it's just an example. And that is, if we introduce some thresholds like that, say, certain, for some certain variable, and this is a maximal error we allowed. So if we want to be certain that we have a particular type of bifurcation in this model, so again, we don't specify the equation, but we are interested in a particular type of bifurcation portrait. So in this case we are, if epsilon is small, so some constraints in terms of the second derivative are small, so in this case, yeah, we are certain. Expected, it means that not really like expected number, but expected, we expect that, so we are doing well. Otherwise, uncertain.
And as you can see, if we decrease or increase our allowed error, then from this nice zone where everything is certain we very quickly get into the region where everything is uncertain. In that case, we can only talk about probabilities of having a particular type of bifurcation. Very briefly, the next direction related to structural sensitivity, and I have just one slide about it, is to somehow be able to estimate the sensitivity directly from data. Just think: you have some data, it doesn't specify a particular functional response or growth rate, and yet you can check whether or not the model is sensitive, and you can also estimate the uncertainty in the derivative directly from the data. The other thing is what we are currently working on with Alan. That is, if we consider things like a functional response, a growth rate, whatever, some function, we shouldn't consider this function as fixed in terms of its form, because with time this function slightly changes, moves, and you can't describe this in terms of parametric changes alone. Say we consider the functional response of some particular strain of some species: in 10 years this will be a completely different functional response. The idea was based on a relatively simple tritrophic model by Hastings and Powell, a very famous model, by the way, which produces chaotic behavior, but in this case we focused on something different, on sensitivity. The idea was: if our functions, the growth rate or the functional responses, slightly change in time, and we also impose some constraints (it's up to us to choose those constraints), and from experiments we would probably never see the difference in reality, what will happen to this system? Well, this is an example of the dynamics, where we changed the growth and the functional response at the same time. It probably looks like a big mess, but if you zoom into some particular part, this part, this part, a different part, you see a change in behavior: for instance, this is very close to the stationary state, this looks like oscillations, and this part looks like chaotic behavior. And when you zoom into the observed behavior, suppose you have some data and the dynamics of the population density is irregular: what is the mechanism of the irregularity? Is it because of chaos, or is it because of something different? The idea is that it is probably because of small variations of our functions, which are amplified by structural sensitivity to produce this irregular behavior.
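A minimal sketch of the kind of experiment just described, letting a model function drift slowly and slightly in time and asking whether the dynamics look irregular, might look as follows. It uses an assumed two-species Rosenzweig-MacArthur form with made-up numbers, not the three-species Hastings-Powell model from the talk:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Let the half-saturation constant of a Holling type II response drift slowly
# and by only a few percent, and compare with the frozen-parameter system.
r, K, a, e, m, h0 = 1.0, 1.5, 1.0, 0.5, 0.2, 0.4

def rhs(t, z, drift):
    N, P = z
    h = h0 * (1 + 0.03 * np.sin(0.01 * t)) if drift else h0   # ~3% slow drift
    f = a * N / (h + N)                                        # type II response
    return [r * N * (1 - N / K) - f * P, e * f * P - m * P]

t_eval = np.linspace(0, 3000, 30000)
frozen  = solve_ivp(rhs, (0, 3000), [0.5, 0.3], args=(False,), t_eval=t_eval, rtol=1e-8)
drifted = solve_ivp(rhs, (0, 3000), [0.5, 0.3], args=(True,),  t_eval=t_eval, rtol=1e-8)

def peak_spread(x):
    # crude signature of irregularity: the spread of successive local maxima
    peaks = x[1:-1][(x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])]
    return peaks.std() if len(peaks) else 0.0

print("spread of prey maxima, frozen :", peak_spread(frozen.y[0]))
print("spread of prey maxima, drifted:", peak_spread(drifted.y[0]))
```

Whether the drifted run actually looks irregular depends on how structurally sensitive the chosen model is in this parameter region; the sketch only shows the shape of the experiment.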
And finally, we also considered the dependence on initial conditions, and, surprisingly enough, the largest Lyapunov exponent that we estimated was negative. You can see that we have some deviation between close trajectories, but sometimes they are almost the same, and this is quite interesting: overall we have a negative Lyapunov exponent, but we may still have some deviations. The other thing is, if we change just one function and fix the other one, the question is whether or not the model is equally sensitive to variations of both functions. Surprisingly, if we change just the growth rate, and the changes are, as you can see, quite drastic, then in terms of the overall influence on the dynamics it looks like a slightly perturbed limit cycle; it's not that sensitive. So the sensitivity is different for different functions. As conclusions, and I don't know how much time I have, but briefly: we presented rigorous methods of detecting structural sensitivity; an analysis of sensitivity based only on varying the parameters can be misleading, so remember that famous horse; structural sensitivity is observed in many, many models, and I provided you a list, but it is not exhaustive; in the absence of exact knowledge of the model functions, we can think about constructing our parametric portraits using probabilistic methods; and finally, our ongoing work with Alan is about how structural sensitivity can possibly explain the observed irregular oscillations in nature. Thank you.
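As an aside, the largest-Lyapunov-exponent estimate mentioned at the start of this passage is typically computed along the lines of the following two-trajectory (Benettin-style) sketch. The toy right-hand side, the renormalisation interval, and the perturbation size are placeholders, not the values behind the results quoted in the talk:

```python
import numpy as np
from scipy.integrate import solve_ivp

def largest_lyapunov(rhs, x0, t_total=500.0, dt=1.0, d0=1e-8):
    """Benettin-style estimate of the largest Lyapunov exponent: integrate a
    reference and a perturbed trajectory, renormalise their separation every
    dt time units, and average the logarithmic growth rates."""
    x = np.asarray(x0, dtype=float)
    y = x + d0
    log_sum, n_steps = 0.0, int(t_total / dt)
    for _ in range(n_steps):
        x = solve_ivp(rhs, (0, dt), x, rtol=1e-9, atol=1e-12).y[:, -1]
        y = solve_ivp(rhs, (0, dt), y, rtol=1e-9, atol=1e-12).y[:, -1]
        d = np.linalg.norm(y - x)
        log_sum += np.log(d / d0)
        y = x + (y - x) * (d0 / d)          # renormalise the separation
    return log_sum / (n_steps * dt)

# Example usage with a toy predator-prey right-hand side (made-up parameters):
rhs = lambda t, z: [z[0] * (1 - z[0]) - z[0] * z[1] / (0.3 + z[0]),
                    0.5 * z[0] * z[1] / (0.3 + z[0]) - 0.2 * z[1]]
print("estimated largest Lyapunov exponent:", largest_lyapunov(rhs, [0.5, 0.3]))
```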
Thank you very much. That's very interesting. I remember reading the paper by Fussmann and Blasius that you mentioned, where they had only two species and compared, I think, three different functional responses, and they came to three different conclusions. My big question at the time was: okay, what do we then have in terms of the ability to predict future system behavior? And since Alan mentioned that in his talk, what's your take: what can we actually predict? Yeah, you asked a very important question. The answer is that at least we need to make sure that we are not missing the sensitivity, because, remember the famous horse, we can say, oh, we're doing well, but actually we are not. Firstly, this is, I think, a new road, a new methodology to address this kind of question. Sometimes we don't have sensitivity; but if we do have sensitivity, the question is whether something is wrong with our model. I know at least one example where people working with chemostats tried to modify the conditions, and they got more or less similar behavior, but their model predicted sensitivity; so in that case probably something was wrong with the model. The other thing is that maybe our model is just too simple, and we need to add one or two equations which control the shape of the functions themselves; whether or not it's profitable for populations to have this kind of sensitivity is a big question, because you may have oscillations, you may have extinctions. In terms of predictions, yes, it can be that something is wrong with our models. Or sometimes, if our models are true, our predictability will be, it's like a joke: what's the probability of having a limit cycle? It's 50%, because you may have a limit cycle or you may not. Seriously, sometimes it is that uncertain. We need to impose some constraints on our functional response, we need to measure something extra. Yes, yes, yeah. Maybe, maybe. And the other thing I didn't mention: we measured the Lyapunov exponent based on a very small perturbation, and it was negative. But if the perturbation was small yet finite, it was positive. You can say it's not a proper Lyapunov exponent in that case; again, it's about interpretation. We can discuss this later, we can have a look at it. I just wanted to comment that in modeling, when thinking about which form is right to choose, it may also be very helpful to include physical considerations. I'll give you an example. In the case of vegetation patterns in drylands, the bifurcation to periodic solutions cannot occur from the bare-soil solution, because this would imply negative biomass values, which are unphysical. So it means that the bifurcation diagram should involve a uniform vegetation solution bifurcating from the bare-soil solution, and a periodic solution that bifurcates from the uniform solution, just from the fact that biomass cannot be negative. So this is just a remark about what should complement the constraints. In terms of constraints, yes, we need to consider all this. But even with those constraints, we still have a lot of uncertainty. I had the impression that most of the examples you showed us were predator-prey oscillations in food chains. I'm wondering, is the fundamental cause of the structural sensitivity not the funny behavior of Lotka-Volterra models? There is something similar, as you may well know, for food chains: you get the same funny behavior of oscillations of arbitrary amplitude. Do you just see a reflection of this? I know at least one paper by Thilo Gross, where he considered, as far as I remember, a competition model. They consider different mortality terms, and in that model you may also have sensitivity. So it's not just related to the particular structure of this model. So you don't know the fundamental explanation for why it happens? It depends on what you call a fundamental explanation. If we start from dynamical systems theory... well, I tried to focus on different models, not just predator-prey. But of course, it's kind of... I have a question about this general idea of starting with a function on the right-hand side and saying, I'm going to look at all possible functions which are a distance delta away from my function. That's a distance in an infinite-dimensional function space, and the answer to the question will depend on the metric that you use. So could you say a bit about what metric you've chosen and why, and whether there would be different answers for different metrics? Yeah, different. When we talk about distances, we need to introduce a metric. Again, it's somewhat flexible, so we choose it based on the particular problem. In this case, we just consider the difference between our functions, relative distance and also absolute distance. And we can also include, as far as I remember we did include, distances between the derivatives as well, not just between the functions. So does this type of metric that you use have a name, or did you just design it?
Well, it sounds like he's talking about, I'm guessing, the sup norm, which is just C0, and then the C1 or C2 norms. Yeah, yeah, yeah, it turns out that if... But there are a lot of ways you can weight those, right? That's exactly what I'm saying. Right, each term is the distance between derivatives, and then you can take, say, a sum of those, or a maximum. A lot of ways, yeah. So you sort of tailor your metric to the problem, in a sense. So it's an extra step where we have to figure out, you know, what the best way is of measuring distance for the problem? But again, when you say what's the best way: from the biological point of view it's one thing, from the mathematical point of view another; that's why it's flexible, which is kind of an advantage rather than a disadvantage. Go ahead, please. Yes, a remark about the estimation of the Lyapunov exponent, which you mentioned. I would think that would be really hard to estimate statistically, when you have very, very little data. From the time series, yes. But from the model it's purely technical; we can still do that. But from the time series, because, well, in reality you have time series: what was the confidence, what was the error? Yeah, but for us the most important thing wasn't just the magnitude; it was the sign, and more the fact that you have a negative maximum. No, I understand. I understand your point. But I'm suggesting that you shouldn't be worried, because it's just so hard to estimate that it's not very reliable anyway. Okay. I think the structural sensitivity, whether you have a limit cycle or not, or a subcritical bifurcation or not, is a dramatic example. But for me, it encapsulates an even more general kind of thing about how we think about modeling. When learning modeling, we learn that we need to be careful that the dynamics or the quantitative predictions depend on the parameters that we choose. So we teach our students scenario analysis or bifurcation analysis, but rarely is it about the functional parametrization, unless you do different kinds of models: model A has this kind of function, model B has that kind of function, you run all of them, and that gets quite complicated. So I like this idea of drawing attention to how we functionally parameterize assumptions in the model. And there are approaches like generalized modeling, which should be mentioned here, I think. We need these kinds of ideas about how we think about modeling, and I think that's an important point. Yes, Karin? Yeah, Karin. I have a question, and maybe anybody in the room can answer it; I'll pose it to everyone, since it's a broader question. When you showed examples of data, multiple functions looked like they fit the data essentially equivalently. People who work on statistical inference, and I'm not one of those people, which is why I have this question, will often model-average over multiple models that are essentially equally well supported by the data. What I was wondering is: if you have a structurally sensitive system, does model averaging help, or does it make the problem worse? I actually don't quite know how to put this idea of structural sensitivity together with the idea of model averaging.
So my understanding is that you can use one function, two functions, three functions, and so on, and, well, you can take an average, but maybe in reality none of them is mathematically correct. So you can average, but instead it's probably better to consider the whole bunch of functions. The other thing, related to your question, is that I know the other work by Gregor Fussmann, where they tried to fit the functional response to data: they obtained some population dynamics in the chemostat, but they didn't know the functional response, and they tried to reconstruct it from the population dynamics, based on non-parametric regression, I guess. They managed to reproduce almost the same dynamics, and they were quite excited. But I'm still unhappy about that, and you'll probably ask me why. Yes, you can reproduce it, in the sense that the curve can pass through every single point. But do you believe that this is exactly the right functional response? If you slightly change it, the dynamics are going to be completely different. So maybe something is wrong with the model; but in the chemostat, the dynamics were actually quite robust. Yeah. So, coming from somewhat outside the field, this reminds me of a weather prediction analogy, and I'm sure there are a lot of similar problems in that field, right? In prediction, I think the models do okay for the short-term dynamics, but they don't really talk about bifurcations. So the question is: are the short-term dynamics more robust to the structure of the model than the bifurcation point of view? Maybe, but that's not exactly the right tool to think about it. If you're talking about transient dynamics, it opens a new kind of dimension, an infinite dimension, I would say, because actually we work in terms of asymptotic dynamics: stable and unstable states, limit cycles, and so on. But actually you're right: this kind of dynamics can also play an important role. I think, you know, in nonlinear dynamics, about 20 or 25 years ago, there was basically a theory which might provide a more fundamental kind of explanation, a more fundamental level, for structural sensitivity. That is the so-called unstable dimension variability. When you look at the phase space in different places, you will find that the local unstable dimension is different, that it changes from place to place. If that is the situation, then your model may not be able to produce anything real, anything useful. That is well established in nonlinear dynamics. And I wonder if your model has this kind of behavior, this unstable dimension variability? I need to check. It's also not enough just to say that you have different behaviors; you also need to quantify, because for some small set of functions you can probably have some strange behavior, but for most functions, well, you can introduce some measures, and for most functions it is robust. So it's also about quantification of uncertainty. If that really occurs, you basically lose any kind of shadowing of numerical trajectories: any trajectory produced by the model will not be shadowed by any true trajectory of the system, because of that unstable dimension variability. What about here? I don't know; maybe here it is just periodic. That's a nice question; I don't know how you would get at this problem.
We can talk about this afterwards. Can we have an example? There is an example which has a lack of transversality in it; it's a little different. You can look at those cases; sometimes you have a full set of additional data, and in fact it's just two or zero. So we have about two minutes left, and I might give a problem. Just one question. I'm not yet with us.
|
When we construct mathematical models to represent a given real-world system, there is always a degree of uncertainty with regards to the model specification - whether with respect to the choice of parameters or to the choice of formulation of model functions. This can become a real problem in some cases, where choosing two different functions with close shapes in a model can result in substantially different model predictions. This phenomenon is known as structural sensitivity, and is a significant obstacle to improving the predictive power of models - particularly in fields where it is not possible to derive the functions suitable for representing system processes from theory or physical laws, such as the biological sciences. In this talk, I shall revisit the notion of structural sensitivity and propose a general approach to revealing structural sensitivity which is a far more powerful technique than the conventional approach consisting of fixing a particular functional form and varying its parameters. I will demonstrate that conventional methods based on variation of parameters alone will often miss structural sensitivity. I shall discuss the consequences that structural sensitivity and the resulting model uncertainty may have for the modelling of biological systems. In particular, it will be shown that the concept of a 'concrete' bifurcation structure may no longer be relevant in the case of structural sensitivity, and thus we can only describe bifurcations of completely deterministic systems with a certain probability. Finally, I will show that structural sensitivity is a possible explanation of the observed irregularity of oscillations of population densities in nature. At the end, we will discuss the current challenges related to structural sensitivity in models and data.
|
10.5446/57603 (DOI)
|
Thank you very much, Mark. I'd like to start by thanking the organizers, Andrew, Alan, and Mark, for inviting me here. It's a really great pleasure to be here and talk to you about rate-induced critical transitions, or rate-induced tipping points, and applications to ecology. We're talking about these applications and the meaning of this nonlinear phenomenon in ecology. To warm things up, I want to start with a general setting of the problem which goes beyond ecology. It's an open system, this blue system, with a certain number of degrees of freedom which are encoded in this vector x that evolves over time, and that can even be infinite dimensional; we will hear some talks later on about infinite dimensions. The system is open, meaning it's subject to external disturbances, and they are described by this function lambda of t, which again can be a vector function that varies over time. Some will call it forcing, for example. So we have this x of t and lambda of t, and the mathematical model usually has this simple form, which is very deceptive. It's an ordinary differential equation which is non-autonomous, meaning the right-hand side depends explicitly on time. The question that I'm going to ask about this system is about tipping points or critical transitions, and these are, in layman's terms, sudden and large changes in the state of the system x, triggered by slow and small changes in the input lambda of t. The major obstacle to the stability analysis of this problem is the absence of compact invariant sets, such as equilibrium points, limit cycles, or tori, because of this explicit time dependence. One way of looking at my talk is that it is about applications, but also about ways to overcome this major obstacle. So the talk is organized around four points. I will demonstrate rate-induced tipping in ecology, and we'll talk about the meaning of this; this is work with my PhD student, Paul O'Keeffe, from Cork. I will be talking about the definition of R-tipping and some concepts, namely thresholds and edge states; some of them are new, some of them are borrowed from fluid mechanics and other disciplines of nonlinear science, and this is work with Peter Ashwin, who is a friend from Exeter, my PhD student Chun Xie, and Chris Jones from North Carolina here. Then there is the main technique that we're going to use to address the problem, called compactification, and I will finish with some rigorous criteria for R-tipping and some conclusions. Going to a simple ecological model: you know, we've seen this model already here before, this is just another version. It's a model introduced by Marten Scheffer and his collaborators in 2008. It's a predator-prey model, where you have producers, plants, and herbivores, they say. The first two terms of the plant equation are just logistic growth, and this is the functional response here, which represents grazing or foraging. This is the herbivore equation, so whatever goes into grazing gives rise to growth of the herbivores, and this is mortality, so they die at some rate M. The two important parameters in this model, from my point of view, are this rate R, which is what they call the maximum plant growth rate (I will be looking at changes of this, really at accelerations of the growth rate), and the mortality rate, where again I will be looking at the changes in M.
It's a nonlinear system, obviously, but the key nonlinearity is a bit different from the typical functional response. They use this special functional response: the typical, say type III, functional response would be just this term, which is a monotone function that kind of saturates. Now they put in this extra exponential factor, which causes the functional response to be non-monotone and then actually decay at higher plant biomass P. There is a strong rationale for this modification in their paper when you read it. They give examples of rabbits in grasslands: when R is too large or increasing, you have more rainfall and the plants grow faster, and they become either too tall for the rabbits to feed on, or too dense, and then the herbivores are afraid to approach the dense vegetation. The denser and taller the plants are, the less grazing there is, and then there is this negative feedback that gives rise to this tipping point. The same happens in some aquatic ecosystems, in lakes and in the sea, where you have plankton that becomes too dense and is also harder to penetrate. These are the examples the ecologists give, and from an ecologist's point of view, this is the mechanism that will give you this behavior. I will look at mathematical ways of describing and capturing this mechanism, maybe formalizing it a little bit. But this is the main point: this factor here, B plus BC, and this factor B, will also be a very important parameter. Before I go and start changing parameters over time, I want to look at this system, with the extra nonlinearity, and see what it gives me for parameters that are fixed in time, just to get an idea of the structure of the phase space, the stability, and so on. I start with the simple equilibrium solutions; there are four of them: a trivial one at the origin; a plant-dominated one, with some value for the plants and zero for the herbivores; herbivore equilibrium one; and herbivore equilibrium two. Now, this one can be stable, this one can also be stable, herbivore one is always... sorry, this one is always unstable, and these two can be stable or unstable. I highlighted the dependence on those parameters: here it's quite trivial, here it's less trivial, and there is no closed formula for those. This is the result of something which we call singular perturbations here; I just gave the expansions, and you can see that they depend on those parameters in a very interesting way. So as you start varying these parameters over time, the positions of these equilibrium points will change in the phase space. Now, you can see a very strange dependence on this extra parameter: there is a one over B plus BC. So when I set B plus BC to zero and go back to type III, I have a singularity here. So now, to connect to Andrew's talk, this is an example where there is structural stability but there will be extreme sensitivity in this sense: especially when B plus BC is close to zero, as you start varying it you will have huge, huge changes. So that gives you a slightly different way of looking at those problems.
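For concreteness, a plant-herbivore system of this general shape can be written down and simulated as sketched below. The grazing term is an assumed saturating response multiplied by an exponential cut-off, and all numbers are illustrative; this is not the exact Scheffer et al. model or its parameter values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch: logistic plant growth plus a grazing term that saturates and
# then decays again at high plant biomass, making it non-monotone in P.
r, K, c, h, b, e, m = 1.0, 10.0, 1.0, 1.0, 0.1, 0.4, 0.2

def grazing(P):
    # saturating response times an exponential cut-off (assumed form)
    return c * P**2 / (h**2 + P**2) * np.exp(-b * P)

def rhs(t, z):
    P, H = z
    return [r * P * (1 - P / K) - grazing(P) * H,   # plants: growth minus grazing
            e * grazing(P) * H - m * H]             # herbivores: growth minus mortality

sol = solve_ivp(rhs, (0, 500), [1.0, 0.5], dense_output=True, rtol=1e-8)
print("final state (P, H):", sol.y[:, -1])
```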
And now, as I vary R and M, the parameters in red, these equilibria will disappear, appear, lose stability, gain stability, and you can actually summarize it all in a two-dimensional bifurcation diagram, a classical bifurcation diagram, and I have two examples. So this is the R and M parameter space. On the left is the typical situation for the classical predator-prey system. On the right is the one which is modified with this non-monotone functional response. So you get only three different types of behavior possible for B equals BC equals zero. In region one, which is above the green transcritical bifurcation, I have a plant-dominated equilibrium. In regions two and three, I have two equilibria, plant-dominated and herbivore one. And what happens at this boundary between two and three is a supercritical Hopf bifurcation: the equilibrium loses stability and gives rise to stable coexistence, or self-sustained stable oscillations; a stable limit cycle is born, and this is it, there is no more. Now, if I take B plus BC non-zero, but however small, this changes drastically; it's the nature of the singular perturbation problem. You suddenly get a lot of bifurcation curves, but there are finitely many; there are problems where sometimes there are infinitely many. And this is the typical bifurcation diagram which you get for the parameters that Scheffer considered in the paper. What you have is this transcritical bifurcation; now it has this maximum and then changes from super- to subcritical. Then you have a saddle-node bifurcation emanating from this saddle-node transcritical point and moving to the right. It again changes from super- to subcritical at a Bogdanov-Takens type-two point, and then you have a homoclinic bifurcation, and this turns into a subcritical Hopf bifurcation. So there are a lot of changes happening. You can do this bifurcation analysis and so on and so forth, but what I'm really going to talk about here is varying these parameters over time. As I vary them over time, you can think of tracing out a path in this two-dimensional parameter space, and it doesn't have to be two-dimensional; you can think of a parameter space of any dimension, depending on how many parameters you have in your system. You're basically tracing out a path. This is what happens in real life as you're subject to disturbances that come from the outside. You can take a path which is vertical or horizontal, which would be an easy thing for a student to do, but you can take a more general path, because usually different factors change at the same time. So now we go to a system which is non-autonomous: these two guys depend on time, and immediately I'm stuck. I cannot do anything of that sort, simply. Martin Rasmussen is not here to defend himself, but the point is that an equivalent of classical bifurcation theory for non-autonomous systems really does not exist. There have been a lot of advancements; people came up with very interesting concepts like pullback attractors, which are not stupid concepts, but they just don't always work for real-life applications. They are not useful for the type of tipping points which I will describe, because there will not be a bifurcation of a pullback attractor when you have this tipping. So you can't really use them. Now, we need to do something to make progress, and I'm going to solve a major problem that a lot of clever people don't know how to solve. To make progress, I'm going to look at approximations of finite-time disturbances by special functions that die out at positive and negative infinity. So I'm going to look at those inputs lambda which I call bi-asymptotically constant.
That means that lambda goes to a constant at positive and negative infinity (the constants need not be the same, and the way it approaches them can be arbitrary), but also that the derivative goes to zero, because this doesn't imply that, and that doesn't imply this either. So there are two assumptions. I'm also going to introduce a rate epsilon, which I'm going to change, because I'm going to look at systems that are paced by inputs that are very slow, or faster, and so on and so forth. That's just an assumption; it's a way to model finite-time disturbances. When I have this, I can make some progress. So let's go back to the bifurcation diagram on the right, zoom in around the tip of the transcritical curve, and choose a simple vertical path. What I'm saying is that the mortality rate increases from this value to that value over time, in this kind of tanh-like fashion, so it levels off at positive and negative infinity. I'm just smoothly shifting from one value to another, and I'm crossing a dangerous bifurcation. I should mention that in this bifurcation diagram there are a lot of bifurcations; some are dashed and not necessarily important, some are solid and also not necessarily important. But there are types of bifurcations that engineers call dangerous. What that means is that there is a discontinuity in the branch of stable states, or attractors. If I go through the saddle-node bifurcation here, I have a branch of stable attractors that disappears at the bifurcation point, and there is an unstable branch; that's what I call dangerous. Another example would be the subcritical Hopf here, which is also a dangerous bifurcation. I'll focus on this one. So basically, if you start at this point P1, which is this one, and then you vary M over time: I plot two different things, the red is a solution of the non-autonomous system, and the dark curves are the positions of the autonomous equilibrium solutions. We call them moving equilibria, or quasi-static equilibria: at each value of time I have a value of M, and for that value of M the autonomous system would be there. You can see that, because I vary M very slowly, I very closely track, or adiabatically follow, the branch of stable equilibrium points, and I get to that point which is a critical level of M, defined by this bifurcation, where the stable solution disappears. There is nothing else to track nearby, and the system inevitably goes to the other stable state, which in this case is the plant-only stable equilibrium. So that's a paradigm of a tipping point, and it's actually quite trivial. First of all, there is a critical level defined by a classical bifurcation, and it's totally independent of how slowly you go: once you go past this critical level, you will always have this critical transition or tipping point. It can be completely understood in terms of classical bifurcation theory; nothing special here. Now let's choose a different path in this diagram, a path that doesn't cross any bifurcations whatsoever. So now I'm changing the plant growth rate R from the value at P1 to some other value; the magnitude of the change is given by this delta, and the speed is given by epsilon. Now, what could possibly go wrong? At every single instant in time, my equilibrium number three is stable.
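For reference, the two conditions just stated, together with one concrete tanh-shaped example of the kind used for these shifts (an illustrative choice; the exact formula in the talk may differ), can be written as:

```latex
% A bi-asymptotically constant input: it limits to constants as t -> +/- infinity
% and its derivative dies out there.  The tanh ramp is one illustrative example.
\lim_{t\to\pm\infty}\lambda(t)=\lambda^{\pm},\qquad
\lim_{t\to\pm\infty}\frac{d\lambda}{dt}=0,\qquad\text{e.g.}\quad
\lambda(t)=\lambda^{-}+\frac{\Delta\lambda}{2}\,\bigl(1+\tanh(\varepsilon t)\bigr).
```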
So I do one shift at a low rate, and then I see what the system does. That's the blue curve: the system follows the moving stable equilibrium, wobbles about a little bit, but then recovers, adapts to the change, and stays close to it. We say the system tracks the moving stable equilibrium. And there is no critical level, there is no classical bifurcation at all here; that's an important point. But then I increase the rate a little bit, and what's going on? The system departs from the equilibrium and doesn't recover. It transitions to the other stable state, which is the plant-only situation. If you will, going back to the simple picture with the rabbits: the plants overgrow, the trees grow, there is no grass, and the rabbits move out. That's the situation here, in spite of not having any bifurcation. So the question is what's going on here. This is what we call a rate-induced critical transition. It is a genuinely non-autonomous instability; you can't describe or understand it in terms of classical bifurcation analysis. But it's potentially very important for ecology, and also for climate science. Because when you think of a fold bifurcation, it's some kind of catastrophe; if you think of a Hopf bifurcation, it's an onset of self-sustained oscillations; and if you think of this R-tipping, it's a failure to adapt to changing external conditions, in the sense, and this is used very often, that the stable state still exists once the perturbation is gone, but somehow the system is not able to be there. The system has moved to a different stable state; the original stable state still exists, so there is an option of being there, but the system fails to adapt, fails to be there, and moves elsewhere. So how can we analyze this? There's a simple idea; I'm going to use a very simple idea here and then generalize it later on, but that's probably the most important point here. We call it basin instability. There are three ingredients. One is the parameter path in the lambda parameter space: depending on how many parameters you have, you have a high-dimensional space, and you look at paths in this space; this is P lambda. Then you look at the stable equilibrium along the path; it must not bifurcate, to start with, to keep things simple, because you are looking at this interesting situation without any classical bifurcations. And you also look at the evolution of the basin of attraction of this equilibrium along the path. So you have the path, the equilibrium along the path, and its basin of attraction along the path. And now I'm going to define something. I say the stable equilibrium is basin unstable on the path P lambda if there are two points on the path, p1 and p2, such that the equilibrium at one position is outside of the basin of attraction of the same equilibrium at the other position. That's it; I will have a picture on the next slide for people who prefer to see it geometrically. I can also define the basin instability region in the parameter space: this is just the set of all points in the parameter space such that E of p1 is not in the basin of E of p2. So that's the idea.
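A crude numerical test of this definition is sketched below: take the equilibrium at one point on the path, freeze the parameters at another point, integrate forward, and see whether the trajectory returns to the corresponding equilibrium. The model form, parameter values, and tolerances are the same illustrative assumptions as in the earlier sketch, and for a real study one would check that the root finder has located the intended coexistence state:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

K, c, h, b, e, m = 10.0, 1.0, 1.0, 0.1, 0.4, 0.2

def rhs(t, z, r):
    P, H = z
    g = c * P**2 / (h**2 + P**2) * np.exp(-b * P)
    return [r * P * (1 - P / K) - g * H, e * g * H - m * H]

def equilibrium(r, guess=(1.2, 2.0)):
    # root of the frozen-parameter vector field (intended: coexistence state)
    return fsolve(lambda z: rhs(0, z, r), guess)

def basin_unstable(r1, r2, tol=1e-2):
    E1, E2 = equilibrium(r1), equilibrium(r2)
    sol = solve_ivp(rhs, (0, 2000), E1, args=(r2,), rtol=1e-8)
    # True means E(r1) does not lie in the basin of attraction of E(r2)
    return np.linalg.norm(sol.y[:, -1] - E2) > tol

print("basin unstable from r = 1.0 to r = 2.0:", basin_unstable(1.0, 2.0))
```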
Now, this is the geometrical picture. This is my bifurcation diagram, and my path that doesn't cross any bifurcations, and there are three points on it: R minus, R star, and R plus. This is the phase portrait for R minus. I'm interested in this equilibrium point here, E3; this is the stable coexistence of herbivores and plants. This is the plant-only equilibrium, also stable, and this is the second herbivore equilibrium, which is unstable; it's a saddle. Its stable manifold is the basin boundary between the two basins of attraction, one for this stable state and one for that one. So this is the picture at R minus. If I go to R star, the positions of all these guys will change, and so will the shape of the manifold. This is the new position of E3, the new position of E2, and this is the new position of the threshold, the stable manifold. But this is the old position of E3, this guy's old position, and it sits exactly on the boundary between the two basins. Now, if you go to R plus, again everything moves, and now the original position of E3 lies in the other basin of attraction. So you can think of it like this: if you move very, very slowly, you are going to adiabatically follow the movement of these guys, stay very, very close, and nothing happens. If you go too fast, let's say infinitely fast, because that's easy, I start here at R minus and abruptly change to R plus: the phase portrait jumps straight to that one. This is my initial condition, this is where I was settled; my diagram changes to this, but I don't have the time, or the inertia, to change instantaneously fast, so I'm still there. And all of a sudden I find myself in the basin of attraction of the other equilibrium, and I converge here. So that tells me, intuitively, that there should be at least one finite rate at which I'm going to switch from one to the other. But there can be many: I can tip and go back to tracking, tip again, go back to tracking, or... this I don't know; it's something to investigate. Now at this point, using this simple idea of basin instability, I'm equipped to add something to this classical bifurcation diagram. This is exactly the same story, the same diagram, but what I've shaded here is this region of basin instability. This goes beyond classical bifurcations, because this gray area tells me that the system is very sensitive to how fast or how slowly you change the parameters, even though you don't cross any bifurcations. So if I go from the point P1, any path from P1 to any point which is gray will give me basin instability. If I start at a different point, I'll get a different gray area of basin instability, and it will be equally large. So it's very, very robust: for this particular starting point, this path, this path, that path will all give me basin instability. You can see it's quite, quite robust.
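A minimal sketch of hunting for such a critical rate numerically, by ramping the growth rate with a tanh profile at different speeds and checking whether the herbivore tracks or collapses, might look like this. It uses the same illustrative model and made-up numbers as before, so the particular rates at which it tips, if any, carry no significance:

```python
import numpy as np
from scipy.integrate import solve_ivp

K, c, h, b, e, m = 10.0, 1.0, 1.0, 0.1, 0.4, 0.2
r0, dr = 1.0, 1.5

def rhs(t, z, eps):
    P, H = z
    r = r0 + 0.5 * dr * (1 + np.tanh(eps * t))     # bi-asymptotically constant ramp
    g = c * P**2 / (h**2 + P**2) * np.exp(-b * P)
    return [r * P * (1 - P / K) - g * H, e * g * H - m * H]

z0 = [1.15, 2.0]                                    # near the coexistence state at r0
for eps in [0.001, 0.01, 0.1, 1.0]:
    sol = solve_ivp(rhs, (-2000, 2000), z0, args=(eps,), rtol=1e-8, atol=1e-10)
    outcome = "tips (herbivore collapse)" if sol.y[1, -1] < 1e-3 else "tracks"
    print(f"eps = {eps}: {outcome}")
```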
Now, another interesting question is: what happens exactly at that critical rate? This is the situation from before, below the critical rate and above the critical rate, but exactly at the critical rate there is one extra solution: you are tracking the moving unstable equilibrium, which is rather surprising, and you actually track it for infinite time. Solutions of this type have different names: they can be called pullback attractors in non-autonomous systems theory; they are called maximal canards in slow-fast systems, where the maximal canard tracks the unstable slow manifold for infinite time; and so on and so forth. So we don't have very good ways of working with these things, or computing these things. Now, what I'm going to do is take this path, and this path has two parameters: the rate at which I go from here to there, and the magnitude of the shift; I can come here, or there. So I'm now going to make what I call the R-tipping diagram, where I have the magnitude of the change and the rate. You see the system tracks outside of that region and tips within it. It's very simple, and the boundary of basin instability is plotted here; it gives you a very good estimate. If you actually plot not epsilon but the maximum of the rate of change of R, it will be almost a corner here. So it says: you have to cross into basin instability. Basin instability appears to be both sufficient and necessary here. In general, we can prove that it is sufficient; it is sufficient but not necessary in general, but in many, many cases it will be both. And then you know that your rate has to be fast enough to achieve this situation. Now, this can become more complicated; that was a very trivial one, and this is going to be an example of a non-trivial one. I ask a very simple question, and I now look at non-monotone shifts of the input. Suppose I have a monotone shift that gives me tipping. Can I reverse the tipping, or save the system, by turning around? So now tipping is not like jumping off a cliff; the question is whether you still have time to go back. And these are some diagrams: white is tracking; green is points of return, meaning that if you go one way you will tip, but if you reverse at the same speed you will save the system; and red, or pink, is points of no return, where if you go one way you tip, and if you reverse at the same speed you will not save the system, the system will still tip. So there are some interesting structures, and if you extend the path even further, past the homoclinic and Hopf bifurcations, it becomes really non-trivial: you get apparently disconnected regions of points of return, some very complicated points of no return, and that has to do with different time scales, with the delay close to the Hopf bifurcation, and a lot of different effects. But the point I'm trying to make is that there is a whole area of looking at these things that has not been explored. People didn't even anticipate there would be instabilities, but there are, and this is why I call them instabilities, and they can have very non-trivial instability diagrams. We still don't know how to compute those, or how to define them very well, though. So the key message from this system is that classical bifurcations do not capture all the tipping phenomena. You need an alternative framework, and this is what we are working on. This can be more mathematical. The main idea is to use autonomous dynamics and compact invariant sets of the systems from infinity, because at infinity my input dies out, so at infinity my system is autonomous. So: use the autonomous dynamics from infinity to explain non-autonomous instabilities in the non-autonomous system. That's the main idea. Now, it's good to have a look at what kind of objects would be useful for these rate-induced tippings from infinity, or from the autonomous system, and we're going to generalize the problem now.
I talked about this basin instability. The basin boundary between two basins of attraction is the simplest thing you can think of, but the problem can be more complicated. So this is one example of a basin boundary between two different basins of attraction: on one side you go to one attractor, on the other side you go to the other attractor. But here is an example of a basin boundary that does not divide the phase space into different basins of attraction: there is only one attractor. On this side of the boundary, you make this loop and come back to the attractor; on the other side, you also go to the attractor, but in a different way. So these basin boundaries do not necessarily separate different basins of attraction, and we call them thresholds, more generally. If you go to slow-fast systems, which I don't have here, there is something which people call quasi-thresholds; that's even more complicated. We're going to have a theory that encompasses everything, but for the ecological example here I just focus on this type, which is kind of easier. First of all, I'm going to define a threshold, in a somewhat more mathematical way, as an orientable, codimension-one, forward-invariant embedded manifold. I know it gets a bit complicated, so I'm going to skip some things later on. But the point is that you have two different sides of this object, and you have different dynamics on the different sides. We also define an edge state, which is basically an attractor within the threshold; we call that an edge state, and that's an idea we've taken from people working on these problems in fluid mechanics. These are the main ideas. Now, the typical way of dealing with non-autonomous systems is to augment the problem with an additional dimension, but that dimension would be unbounded. What compactification does is augment the problem with an additional dimension which is bounded. So instead you use some s which is bounded; I'm going to maybe speed up a little bit here. This is the whole idea: you augment the system with s, and your g of t maps infinite time onto a finite s interval. Now you have to figure out what that map is, and that's not a big problem; you can use something simple like that if your lambda decays exponentially at infinity, for example. Then you extend the vector field to positive and negative infinity: you bring the infinities into the problem. The difficult point is to ensure that the system is sufficiently smooth at those points which you included at infinity, otherwise you won't be able to do anything, and there are conditions for that: your compactification cannot be faster than the decay of the input, and so on and so forth. So this is what happens: when we look at the non-autonomous problem, there are no compact invariant sets and there is the non-autonomous input; after compactification, first of all, positive and negative infinity become flow-invariant subspaces with equilibrium points and invariant sets and so on. So I have something that I can work with. And I now have two attractors here, one and two. This subspace is unstable, this subspace is stable, and s only increases.
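Schematically, one possible compactification of this kind, assuming an exponentially decaying input and choosing a tanh map (just one admissible choice; the smoothness conditions above restrict which maps work for a given input), reads:

```latex
% Augment the state with a bounded time-like variable s so that t = +/- infinity
% is mapped to s = +/- 1, giving an autonomous system with flow-invariant
% subspaces at s = +/- 1.
\dot{x}=f\bigl(x,\Lambda(s)\bigr),\qquad
\dot{s}=\varepsilon\,(1-s^{2}),\qquad
s=g(t)=\tanh(\varepsilon t),\qquad
\Lambda\bigl(g(t)\bigr)=\lambda(t).
```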
Now, I should have a picture of this, I don't have it, but what I've really done is encode the non-autonomous input lambda of t into a geometrical object in an autonomous system. So, embedded in that system, the stable manifold is not just one-dimensional here; it's really two-dimensional. If I change the non-autonomous input, I'm changing the shape of this object. So if I start here below, I go to one equilibrium; and as I increase the rate, the shape of this threshold deforms, up to the point where I go to the other side and end up at the other equilibrium. That's the main idea here. So these are basically three trajectories of the compactified system, started from the equilibrium at negative infinity: below the critical rate I go to one attractor, above the critical rate I go to the other attractor, and exactly at the critical rate I have a heteroclinic connection from this saddle to that saddle. So now I have a way of defining my rate-induced tipping in terms of heteroclinic orbits. And that can be done for more complicated thresholds and so on and so forth; it's not that things somehow stop working. I'm not going to say a lot about how we prove it, but we have a very simple condition: you use the properties of the autonomous system to make statements about instabilities in the non-autonomous system. I think it's a very nice result for scientists, and a very useful, testable criterion. Maybe to summarize: I talked about classical bifurcations and regular tipping points and their limitations. Then I told you about this rate-induced tipping point, which is a different type of instability. Now, you have to decide whether it's important in ecology or not; I see a lot of people in climate and related fields talking about species not being able to adapt to climatic changes and this type of situation, and that's exactly what it captures. And the key technique was to compactify the system to transform this instability into heteroclinic connections, which allows you to do numerics and also to prove some rigorous statements about these instabilities. So these are mathematical ideas that could potentially help us understand these systems a bit better. Thank you very much. These types of models have Hopf bifurcations, and before the Hopf bifurcation you have damped oscillatory modes. Now, in this context, seasonality exists, in fact, and you should take it into account. This is a kind of periodic forcing that can pump those damped oscillatory modes, and this leads to early collapse, before the tipping points, if you add periodic forcing. My question is whether this effect can be understood as a basin instability of the stable limit cycle. It can be understood. So, there was a reference to the paper on the arXiv, and this is maybe 10% of what's in the paper, right? What we look at in the paper are typical situations close to saddle-node and Hopf bifurcations, which are the only two generic bifurcations of equilibria; all the other bifurcations of equilibria are non-generic and will unfold into one of these. And what happens is that you can have two situations: you can have an early collapse, or you can have a delayed collapse, but you can also have a collapse on the way back.
So going forward you only tip at some point, and then you want to rescue the situation, so you reverse the trend; but even when you reverse the trend, you actually make the system more unstable, and you tip earlier because you reversed. So what you describe does happen, and actually more happens. The point is that, depending on the slope of the branch of equilibria and on the unstable limit cycles close to the subcritical Hopf bifurcation, you can have different situations. You can actually have an extra region here, which we show in dark red, which is return tipping: you induce tipping on the way back. So that happens. Now, seasonal forcing we didn't include; we just looked at two very simple paradigms of the shift, one monotone, the other non-monotone, and going back and forth periodically would be something like that. We didn't include it, but you could definitely use all those ideas. This idea is true. The statement here, and I know it's part of a more general statement, says this: if you have basin instability, then there will be an input that traces out the path and gives R-tipping. This input has to be of a certain form; not all inputs will give you tipping, but you can also put in a periodic signal and it will give you tipping as well, if there is basin instability on the path. I mean, it's quite an obvious statement, so... Yeah, I'm sure, so I won't ask what we did with that. So I would argue that maybe this situation is very similar, in a sense, to classical forced oscillator theory, because in a way, if you have a forced system that's non-autonomous, time is already compactified in many cases, because you force periodically, so you don't need to... But there is nothing periodic here. There's no... Can I finish this? Yeah, absolutely. I'm just making the point that there is no restriction to periodic inputs whatsoever. Okay, but I haven't finished the question. So I said: okay, for the periodic case you don't need to compactify, but of course, if you have a non-periodic case, you compactify by one point, by the Alexandrov compactification. Maybe, maybe; no, not always. Well, t-dot equals one: you can always, by a one-point compactification, or by the Alexandrov compactification, take R, bend it over, and you have S1, so it's compact. So you can always take R and compactify it by one point; that's just a mathematical fact, so that's not the issue. The point is that if you have a system like this, with a non-autonomous input, then I would say many of the phenomena could be found already in these classical oscillatory systems that people have forced. So my question is: in what sense have you compared all these phenomena that you have here to phenomena that have been found in the classical literature on forced oscillators? There is a big literature of 30, 40 years on forced oscillators. How much is this really different from what's done there? Yeah. So, first of all, a couple of comments. If we look at inputs that die out at positive and negative infinity, they cannot be periodic, to start with; we agree on this one. You could have something which is kind of periodic, in the sense that you take a periodic signal and kill it at positive and negative infinity, so you have packets of oscillations. So, no, I am aware of the work on forced oscillators.
And you know, we've done a lot of work on this in laser systems as well, right? But I don't see how I would be comparing the two, and why, because that's a very different sort of setting. Both time domains are essentially compact one-dimensional spaces, right? So they can be compared in that sense. I mean, if you have something that is periodic, it automatically lives on the circle. Oh, this I agree with, yes. So that's a nice compact space, and here you have also built a nice compact space. So in that sense you could say: okay, you have a non-autonomous system where one component lives on a nice compact space, and you have interesting dynamical phenomena in both of these classes of systems, so you could try to start comparing them. Yeah, so there we go. The main difference is this. If you have a periodically forced system, compactification is almost automatic, and then you have step one and step two taken care of; you don't have step three at all, so you don't have to worry about your compactified system being smooth. Here, first of all, it is not always possible to compactify in the sense that you obtain a C1-smooth compactified system. That's what I'm saying: that's not always possible, and there are a lot of examples where it will not be possible. That's one difference. The other difference is that if you have a periodically forced system, the whole problem boils down to a simple classical question of bifurcations of periodic orbits or limit cycles; that is basically the question in the periodically forced system. Now here... Yes, but I would say more, because even in a periodically forced system you could look at finite time scales. Nobody forces you, in a classical forced oscillator, to say: I only want to look at periodic stuff. You could start somewhere in phase space. Sure. And ask: where do I go after a certain time? Fine. That's right. But then these things have not been defined, and I'm not sure what the mathematical question would be. Well, homoclinic and heteroclinic connections would be found in these systems already, and also different types of bistability mechanisms would be found on finite time scales, and so on. So... Chris, the moment you say heteroclinic connection, you cannot at the same time be talking about finite time, because a heteroclinic connection has nothing to do with finite time. Your connections are defined as you go to positive and negative infinity, so I'm not quite sure what you're asking. But what I can tell you is that, from my perspective, in a periodically forced system you ask questions about bifurcations of limit cycles or periodic orbits; you could definitely ask questions about finite time, but I wouldn't know how to define them. Here, the way I went around it is that it's a finite-time disturbance question, but we sort of make it into an infinite-time one, and for all practical purposes that's a very good approximation, and we have this extra structure that allows us to define this R-tipping as a heteroclinic connection. From that perspective it's a bit different, because you turn something which is non-autonomous into a heteroclinic connection, which is studied in autonomous systems. So it would be different.
Of course, there's heterokinetic connections between limit cycles and so on and so forth, but I still don't see a direct, so now maybe you see it better, I don't see a direct analogy. Now, I take a point about compactification and so on, but that's true, but you know, finite time for now, the finite time questions which are the transient questions are very, very complicated. The way I overcome the question here was I turn the finite time situation into an infinite time situation in the sense. I think we need to make time for other questions and maybe continue over one shot on this one because it's an interesting question that Chris gave. This is probably, I probably didn't quite do this, I'm sure that it's in there somewhere. Intuitively thinking about the ecology of it, there should be some relation between time scales as defined by something like R itself. How fast the dynamics are happening and double R, which is if I'm understanding it correctly, how fast the dynamics are changing. So the intuition is if your dynamics of your populations are really fast and your change is really slow, maybe they can adapt. So maybe the question is, is it possible to measure what the critical ratio of time scales is? It's a very good question, actually I didn't explain that very well, but you have different times. There's a natural time scale of the system of how fast you equilibrate towards the equilibrium and you have the time scale of the input and the answer is you mustn't compare the two. This is opposite of the first. But what you should be comparing is the speed of this equilibrium in the phase space with the rate of change of the input. So if I write dE over dt, so E now is a moving equilibrium varies over time, you will see that factorizes just by a simple chain rule. So this would be a function of lambda of t into dE over d lambda times d lambda over dt. So this is the key thing. Sometimes what happens, we'll just have the paper in Journal of Theoretical Biology about the situation where the input varies much slower than the natural times, what about the system tips? Now what happens is this, the position of the equilibrium is very sensitive on the parameter. So this factor is very, very large and you can have a very quickly moving equilibrium in the phase space because of this sensitive dependence on the parameter and a very, very slow change of the external input. So the intuition, I believe the answer to your question, the intuition is how fast is the stable object moving in the phase space? So this is for one measure to put norms here for speeds because this could be vectors. So this sounds like it connects a little bit to the first talk where if you have a strong parameter sensitivity in your system, I see you have a little change in the parameter, but it's changed a lot. If you're in that sensitive situation, and this is a phenomenon that you should look out for specifically. This is one reason why you went to the trouble here and you know the perturbations for the stable states, those formulas, because they tell you know how sensitive those positions depend on the parameter, how sensitive they are. And that's an important thing. Okay, thank you. Good answer, would you like to take some questions? Yeah, I think it's related to that one. So in the figures where you had red and green, so you had the return zones and the return zones and that, yeah. Don't those boundaries depend on how fast things are happening? Yeah, so I'm going to go into slowly. 
So this is the magnitude of the shift and this is the rate of the shift, on a logarithmic scale. So you're right, they depend on each other, but when you fix the rate, then you only get one critical magnitude. When you fix the magnitude, you get a critical rate, but when you look at the two together, you see the dependence on the rate — the tipping zone. So in order to get return, is that because the parameter changed back at the same rate? Yeah, so it changes back at the same rate. Okay, so to have a return — if you go there, you can see that up there you always stay, because if you go very fast back and forth, the system doesn't have the inertia to respond. If you go very slowly, it will always track. Okay, but then there's the intermediate region, which is a bit like a resonance phenomenon: the system has these sensitive rates. Okay, it cannot be too fast, cannot be too slow, and when you hit the sweet spot, where the system doesn't like to be, it will tip — so that goes back to your question, right? When you, for example, go back and forth close to the resonance frequency, then you'll have very much that kind of response. The interesting thing here is, if you take this one, you tip, then you track, tip, track again, and then you can go back and forth. It's actually non-trivial. Some of these diagrams we get are quite crazy; we don't entirely understand them. So one way to think about this is the sort of comparison to resonance. So there's an optimal timing where the system kind of responds more, deviates more from the steady state. Yeah, so that's — yeah. Last question. Yeah, so every non-autonomous system can be understood as part of a bigger autonomous system, right? And that is mostly philosophical, right? But nowadays you think about some very big autonomous systems that are hard to understand. Now you have made progress on the study of non-autonomous systems. Can I use this and take my large autonomous system, cut it into two halves, apply this to both sides, and kind of get, say, consistency conditions? Yeah, exactly. That's a very good point. So it just depends — you're very right, it's a philosophical question in a sense, or it's a very good philosophical question: what is your universe? If this describes the universe, the system is closed. Okay, so there are no external inputs, right? But what I do — I always have to take a very small system in physics, ecology, climate, and this is an open system, which is a subset of the universe, which is subject to all those other processes. The other processes are some other non-autonomous or autonomous systems, right, that give you this. So you could try to write a set of one million autonomous equations that give you this, or one thousand, but it may be very, very difficult, practically impossible. But what we've done is that we just added — so you have this f of x, lambda of t, okay, here; you could add a lot of other equations, but in fact we just added one extra equation. Okay, so the thing is that even if your input is highly non-monotone, you can still compactify with just a one-dimensional addition. That's an important point. Can I use this as a tool, actually? So suppose I have my food web, which is really connected because part of it is in the water and part of it is on land, and then I say, instead of regarding the whole thing as one autonomous system, I just want a non-autonomous system — the part that's on land, which just gets an input from the water. 
So remember, you have this one assumption, that you want the input to die out at the infinities, otherwise it will not work. So if you ask about a finite-time disturbance in an approximate sense, then yes, it will be possible. We can talk about it more, but you have to keep in mind that we have this restriction of the finite-time approximation in that sense. Okay, that's it. Thank you. Thank you.
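As a side note on the two ideas in that exchange — rate-dependent tracking versus tipping, and turning a non-autonomous input into one extra autonomous equation — here is a minimal Python sketch. It uses a commonly used one-dimensional prototype from the rate-induced tipping literature, dx/dt = (x + lam)^2 - 1 with a saturating parameter shift; this is an assumed illustration, not the ecosystem model from the talk. For an idealized linear ramp dlam/dt = r, the co-moving equilibria satisfy (x + lam)^2 = 1 - r and disappear at r = 1, which is the "too fast" threshold this sketch probes numerically.

# A minimal sketch (assumed illustration, not the ecosystem model from the talk):
# rate-induced tipping in the prototype  dx/dt = (x + lam)^2 - 1  with a smooth,
# saturating parameter shift lam(t).  The stable branch is x_s = -1 - lam and the
# unstable branch is x_u = 1 - lam; tipping means the lagging trajectory is
# overtaken by the moving unstable branch.
import numpy as np

def simulate(rate, lam_max=4.0, t_span=(-30.0, 30.0), dt=1e-3):
    t, t_end = t_span
    lam = 0.5 * lam_max * (1.0 + np.tanh(rate * t))
    x = -1.0 - lam                        # start on the stable branch
    while t < t_end:
        lam = 0.5 * lam_max * (1.0 + np.tanh(rate * t))
        if x > 1.0 - lam:                 # crossed the moving unstable branch
            return True
        x += dt * ((x + lam) ** 2 - 1.0)  # forward Euler step
        t += dt
    return False

# This particular shift solves the autonomous equation
#   dlam/dt = (2 * rate / lam_max) * lam * (lam_max - lam),
# so appending that single ODE turns the non-autonomous problem into an
# autonomous (x, lam) system -- the "one extra equation" point from the talk.
for rate in (0.1, 0.4, 2.0, 5.0):
    print(f"rate = {rate:3.1f}: {'tips' if simulate(rate) else 'tracks'}")

Slow shifts track the moving stable state; fast ones are overtaken by the moving unstable branch and tip, even though the shift itself is smooth and bounded.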
|
Many systems from the natural world have to adapt to continuously changing external conditions. Some systems have dangerous levels of external conditions, defined by catastrophic bifurcations, above which they undergo a critical transition (B-tipping) to a different state; e.g. forest-desert transitions. Other systems can be very sensitive to how fast the external conditions change and have dangerous rates - they undergo an unexpected critical transition (R-tipping) if the external conditions change slowly but faster than some critical rate; e.g. critical rates of climatic changes. R-tipping is a genuine non-autonomous instability which captures ``failure to adapt to changing environments" [1,2]. However, it cannot be described by classical bifurcations and requires an alternative mathematical framework. In the first part of the talk, we demonstrate the nonlinear phenomenon of R-tipping in a simple ecosystem model where environmental changes are represented by time-varying parameters [Scheffer et al. Ecosystems 11 2008]. We define R-tipping as a critical transition from the herbivore-dominating equilibrium to the plant-only equilibrium, triggered by a smooth parameter shift [1]. We then show how to complement classical bifurcation diagrams with information on nonautonomous R-tipping that cannot be captured by the classical bifurcation analysis. We produce tipping diagrams in the plane of the magnitude and `rate’ of a parameter shift to uncover nontrivial R-tipping phenomena. In the second part of the talk, we develop a general framework for R-tipping based on thresholds, edge states and a suitable compactification of the nonautonomous system. This allows us to define R-tipping in terms of connecting heteroclinic orbits in the compactified system, which greatly simplifies the analysis. We explain the key concept of threshold instability and give rigorous testable criteria for R-tipping in arbitrary dimensions. References: [1] PE O'Keeffe and S Wieczorek,'Tipping phenomena and points of no return in ecosystems: beyond classical bifurcations', arXiv preprint arXiv:1902.01796 [2] A Vanselow, S Wieczorek, U Feudel, 'When very slow is too fast: Collapse of a predator-prey system' Journal of Theoretical Biology (2019).
|
10.5446/57604 (DOI)
|
And I am going to talk about some new, or at least sort of new to ecology, mathematical methods, but I really want to focus this talk on the questions that are sort of broadly open questions that we're now able to start attacking because of these methods. So I'll give an overview of the methods, but I'm not going to give a really deep methods talk, and there are a couple of reasons for that. One is that, you know, that's not really my expertise — what I'm going to show you is collaborative work and I'll acknowledge my collaborators as I go — but also, you know, it's after dinner, many of you are pretty jet lagged, so I'm going to try to keep things sort of — I got a phone sound from somebody who's got a phone on the table. Yeah, and I know this is probably pretty painful for those of you that flew here from Europe, so if you fall asleep, I won't be offended. I'll tell you about it later over a beer that you buy me to make it up to me and we'll call it even. Okay, so: open ecological questions that we can answer when we think carefully about stochasticity. In ecology — probably everyone in this room knows this, certainly all the ecologists know this — we sort of have a tradition versus reality tension. The tradition is to treat dynamical systems, like ecological dynamics, like a gear system with a crank. So we first assemble the gears — that means defining functions that describe things like density-dependent species interactions. We feed in initial conditions, we turn the crank, and we get an equilibrium out of the other side. This is a really powerful method, but there is, as Andrew said when we were discussing what the topic of this talk should be, an elephant in the room. And that elephant is the fact that this is non-equilibrium dynamics — it's a little bit small, stamped on his butt. A lot of ecological systems aren't in equilibrium, so we have this really powerful method, it gets us a lot of really great insights, but it's obviously missing, as Alan talked about this morning, a lot of what's going on in ecology. So ecological systems are often not in equilibrium; some are probably never at equilibrium. There are a number of reasons. The first is just sort of what we think about as stochasticity. I describe that sometimes to people that aren't as familiar with this as sort of jostling of the system. It could be big jostling if we're talking about high-intensity stochasticity, but sort of background continual disturbances — so this could be demographic stochasticity or environmental stochasticity. And when we have something like that acting in a system, instead of having, say — if we're looking at population dynamics — instead of sitting at our hypothetical equilibrium, we'll never actually settle down on that. This is the Rosenzweig-MacArthur predator-prey model that shows damped oscillations in the deterministic setting, so in a stochastic setting it's going to show noisy, but sustained, oscillations forever. Large disturbances can also cause systems to be away from equilibrium, and these could be stochastic or not — my theme for today is stochasticity. But whether or not these are stochastic, we're talking about sort of sudden disruptions to the normal processes — they could be abiotic or biotic, due to us or not — and this is the situation where you're bumped off: a system that's kind of hanging out here. 
This equilibrium is bumped off, and as long as the recovery time is relatively slow compared to the frequency of these disturbances, then we're going to see a lot of time away from equilibrium in a system like this that's prone to disturbances. And environmental change is similar to disturbances, but instead of moving the system, you move the equilibrium. So this is a situation, again, where if we start out near equilibrium, but there's suddenly a change that moves that equilibrium away from where the current system is, again, kind of no take time. These are the transient dynamics, non-equilibrium dynamics that we are likely to see. So all sorts of reasons that we need to know more about a system than simply its equilibrium behavior. Now, as if there was an elephant in the room, I'm going to just talk about a baby elephant today, and that is environmental stochasticity. So I'm going to kind of focus on that throughout the rest of this talk. And what I'm going to do from here on is give you kind of a couple of different sort of relatively independent vignettes, their common thread is sort of dealing with how do we deal with things that are non-equilibrium, and specifically for these, for this talk, things that are non-equilibrium because of environmental stochasticity. So ecological potentials have already come up today. I think, you know, everybody here is probably familiar with them. If not, the very quick cliff-stose version is this is where we envision the dynamics of a system as a ball rolling on a landscape, and the potential is a function of the state variable here, population size, drawing everything in 1D for as long as I can get away with. In ecology, we're more used to thinking about the rate of change in the population as a function of population size, and you can translate between these two things, according to the equation up in the corner. And sort of whichever way you look at the system, we can identify stable equilibria as points where there is zero net growth and a negative slope of the linear approximation of the system, or a negative dominant eigenvalue of the Jacobian, or the trough, the well of a potential. And we can identify unstable equilibria as a peak or, you know, a point with, again, zero net growth, but positive slope. And the dynamics, you know, the true ecological dynamics that we're looking at are just the time course of sort of the ball rolling on a landscape. So this is the formalism that we like to talk about a lot in ecology as an analogy, if there's this bell rolling on this landscape. And suddenly, you know, that makes more sense than something like this picture to a lot of people, both students, but also, you know, practicing ecologists that don't necessarily do a lot of theoretical work. They're like, yes, ball rolling on a landscape. I live in a physical world. I understand that, right? So it's a really nice formalism. And we can use this formalism to think about the role of stochasticity because it gives us really the global dynamics. If we have this whole function, we know what the dynamics are going to look like. If we just do linear stability analysis, we have, like, local information about what's happening near equilibrium, which is sort of a standard approach in ecology. But if we have the, we're looking at this whole potential, this whole landscape, then we know what's going to happen, sort of, no matter where we go. 
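To make the one-dimensional construction just described concrete, here is a minimal Python sketch of "the equation up in the corner" in its usual 1D form, U(n) = -∫ f(n) dn so that dU/dn = -dn/dt. The growth function (a logistic model with a strong Allee effect) and its parameters are illustrative assumptions, not the example on the slides.

# A minimal sketch of the 1D potential construction described above:
# dn/dt = f(n) and U(n) = -∫ f(n) dn, so dU/dn = -dn/dt.  The growth
# function (logistic with a strong Allee effect) and its parameters are
# assumed for illustration, not taken from the talk.
import numpy as np

r, A, K = 1.0, 0.3, 1.0            # growth rate, Allee threshold, carrying capacity

def f(n):
    return r * n * (n / A - 1.0) * (1.0 - n / K)

n = np.linspace(0.0, 1.3, 1301)
# cumulative trapezoid rule for U(n) = -∫ f
U = -np.concatenate(([0.0], np.cumsum(0.5 * (f(n[1:]) + f(n[:-1])) * np.diff(n))))

# Equilibria are the zeros of f; stability is the sign of f'(n*), equivalently
# the curvature of U at n*.
for n_star in (0.0, A, K):
    eps = 1e-4
    slope = (f(n_star + eps) - f(n_star - eps)) / (2 * eps)
    kind = "stable (well of U)" if slope < 0 else "unstable (peak of U)"
    print(f"n* = {n_star:.2f}: f'(n*) = {slope:+.2f} -> {kind}")

# Plotting U against n would show two wells (at 0 and K) separated by a peak
# at the Allee threshold A -- the ball-on-a-landscape picture.

The two stable states show up as wells and the Allee threshold as the hilltop between them, which is exactly the global information a local eigenvalue at one equilibrium cannot give you.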
And that becomes important when we're talking about, maybe not extra, extra large noise, like we talked about in our breakouts this afternoon, but large noise where we're unlikely to maybe just stay, like, right here, and we're maybe more likely to go all over the place. So then we know what all over the place looks like and we can make inference. Yeah, you know. I mean, that's wrong, right? I mean, that's a picture so wrong because, I mean, if that was the case, there would be no oscillations or anything. What do you mean the picture is wrong? It's just not the potential of, this is just a drawing of a potential, it's not a particular system. Yeah, no, I mean, the idea of the potential, this is a potential ecology. Okay, well, hold that thought. That's actually a good setup. I usually don't like it when somebody raises their hand a few minutes ago and I'm talking to a fan, wrong, but that's an awesome setup. I'm glad you said that. So hold that thought. Let's come back to it if you're not, sort of, swayed by what I say. But that's, yeah, yeah, yeah. Okay, so if your skepticism is good, we have skepticism in the room that this is useful for ecology. That's fantastic, actually. This is what led us. So this is not the new mathematical method. This is the old mathematical method. But I'm explaining what it is because not everybody in this room is a mathematician and I want to make sure everybody's on board. So this is kind of what we're talking about, while we're rolling on a landscape. So, okay, so why would we want this landscape? We're suspending disbelief for the moment that we have such a landscape for every ecological system. Suppose we have a landscape, why might we want it? For lots of different reasons. And I'm going to talk about using the potential landscape to measure stability, just as an example of something useful we can do with it. Many authors, Tony Ives and Fulcacrim and others have written about the sort of quagmire of stability in ecology in that we use the word stability to mean a lot of different things. There are a lot of properties of an ecological system that we might say, you know, use the term stable or unstable with. We're not always talking about the same thing. If we have a potential, we can sort of start to map out some of those things and start to make this more explicit. So, when we measure the dominant eigenvalue of the Chocobian matrix and that linear approximation of your equilibrium, where we're measuring is the return rate toward the equilibrium if you're perturbed very locally. And what that gives you is in the terms of this drawing is the curvature of the potential right here equilibrium. And so we might say, or we would say that if this is our measure of stability that a more concave context, whatever, that one is more stable and something that's flatter at the bottom would be less stable because if, you know, the ball rolls away, it's not going to roll back down as quickly. But there are lots of other things we could mean by stability. A lot of times in the literature, at least not necessarily the quantitative theoretical literature, but a lot of the applied ecological literature just talks about stability in terms of avoiding extinction. So putting that into this picture, then something that's steeper near zero would be more stable than something that's flatter near zero where you're not going to, if you happen to get near zero, you're not going to get away as quickly. 
We can also talk about the width of the basin of attraction as a measure of stability — so then wider is more stable. Or the depth: deeper is more stable. It's going to take more sort of work for whatever is working against this potential to march you up the hill — if that's stochasticity, you know, more big perturbations or maybe autocorrelations that get you up a bigger hill versus a smaller hill. And then of course we could look at stability in terms of just: are you going to return to the equilibrium you left — and something with few equilibria would be more stable than something with many. So that's just to give you an idea of sort of what kinds of things we can do with this surface: it's not just a description of the dynamics, it's a way to sort of start making sense of a lot of the things we talk about in ecology. But when can we derive a potential function, right? When can we actually have this kind of surface? So this is the question that came up right from the start. So if we have a one-dimensional system, a single-species population model, you can do it. If we have multiple species, we can do it sometimes — and actually not very often. So say we have predator and prey population sizes. The height here is like the height of that potential surface, although I took away the label because I'm about to tell you that it's not always a potential, so I didn't know what to call it. But anyway, you can think of it as the height of that landscape that the ball is rolling on. And if we happen to have the kind of system where the ball is just going to roll straight downhill — dynamics that look kind of like this — then yes, we can derive a potential function. But, as was already pointed out, if we have the kind of system where there is circulation on the way down — this is still a stable state — then we can't derive a potential function that matches the definition of what we were looking at. Because there's no longer a function that scales with height that gives a full description of the dynamics. There isn't going to be a function U that satisfies dU/dn = -dn/dt, because dn/dt has the circulatory part and there's no information in the height of the surface about that circulation. And the kind of bummer for ecology is that consumer-resource oscillations are everywhere. Everybody eats something, everybody gets eaten, and consumer-resource systems are very prone to oscillation. So very often in ecology we're in this situation where we can't just write down the potential. So that's why I said that ecologists like to use the potential landscape as an analogy: let's imagine a ball rolling on a landscape; now let's talk about real dynamics. So I think this is where that question came from — right? Yeah. Okay. So what can we derive? I mean, a number of things. The one I'm going to talk about is the idea of using the quasi-potential. Now this is something developed by Freidlin and Wentzell and then taken up by my former postdoc, Ben Nolting, and sort of brought into an ecological context where we can start to talk about some of these ecological questions we were interested in — measuring stability and different measures of stability. And so that's the work that Ben did. So the quasi-potential: what it is, is a surface. We compute it numerically. And there's an R package that will do this. Sometimes. You know — it's an R package written by ecologists, not programmers, but you know. So the numerics are available, freely available. 
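One way to see the "when does a true potential exist" point in practice: a system of the form dx/dt = -grad U has a symmetric Jacobian, so checking whether the mixed partial derivatives agree is a quick necessary test (and on nice domains essentially a sufficient one). The symbolic Python sketch below applies that test to an assumed Rosenzweig–MacArthur-type consumer–resource model — chosen because it is the generic circulating case described above, not because it is the model on the speaker's slides.

# A quick symbolic check of the point above: for a true potential U with
# dx/dt = -grad U to exist, the Jacobian of the vector field must be symmetric,
# i.e. d f_R / dC must equal d f_C / dR.  The consumer-resource form below
# (Rosenzweig-MacArthur, type II functional response) is an assumed example.
import sympy as sp

R, C, r, K, a, h, e, m = sp.symbols("R C r K a h e m", positive=True)

f_R = r * R * (1 - R / K) - a * R * C / (1 + a * h * R)   # resource
f_C = e * a * R * C / (1 + a * h * R) - m * C             # consumer

mismatch = sp.simplify(sp.diff(f_R, C) - sp.diff(f_C, R))
print("dF_R/dC - dF_C/dR =", mismatch)   # nonzero => no true potential exists

# Contrast: two non-interacting logistic populations pass the test
# (both cross-derivatives are zero), so a true potential does exist there.
g_R = R * (1 - R)
g_C = C * (1 - C)
print("non-interacting case:", sp.simplify(sp.diff(g_R, C) - sp.diff(g_C, R)))

This is only the symmetry test; the numerical quasi-potential machinery in the R package mentioned above is what handles the systems that fail it.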
I'm happy to talk to people through about how to kind of get a hold of that if you want to play with it. But anyway, what the quasi-potential is, is it's exactly the potential when the potential exists. So if you have a system that has a potential and you do this numerical thing, you'll get the potential. Right? So they're the same thing when the potential exists. When the potential doesn't exist, what it is, is it's an analogous landscaping that describes the downhill behavior, meaning the tendency to roll toward stable equilibrium points. So and what, I'm going to hold that for a second. So this is an example of, and I'll show you a few more examples. This is the actual quasi-potential surface for a two species model. This is a contour of protection. And this is a vector field showing the dynamics are in this square for this lower basin. In the height of the quasi-potential is the downhill part. And so if we imagine a ball rolling on this surface, if I drop it inside the shape, but right here, it'll roll down the steep part first, and it'll roll down to the stable equilibrium point. And that is captured in the height of the quasi-potential. And this is the stable equilibrium point. What the quasi-potential gives you is an orthogonal vector field that is called the remainder field, and that gives you any remaining dynamics, which would be circulation at constant elevation on this surface. So you have the purely downhill component, and the purely circulatory component, and those two added together give you the full dynamics. So this is not a true potential in that I have to show you this vector field for you to know the full dynamics. If I just show you the shape and you think about a ball rolling downhill, you're missing the circulation. But it still is useful, the way a potential is useful, because it tells us in the long term where you're going. So it allows us to get to a lot of those things that I showed you are useful to point out on a potential. We can get it from the quasi-potential. And then we just know that there are, it's not a full description of the dynamics, but it's a nice representation. It's a full representation that we have that vector field. The visual surface is not a full representation. So to come back to this idea of measuring stability, just to give you kind of a worked example, this is a consumer resource model with a type three function response. The quasi-potential surface looks like this. They're two equilibrium labeling them A and B, so I can refer to them. And then actually, because I don't like drawing in 3D, I'm just going to take a slice through the state space to look sort of at the bottom most part of that shape. So if we were to measure stability in this quasi-potential using the dominant eigenvalue of the tricobian matrix, giving you the return rate to equilibrium, or more precisely, you're applying the negative return rate to equilibrium because we want low to be unstable and high, less stable and high to be, sorry, not unstable, they're both stable. But low is less stable once we minus it in. And high is more stable. And according to this measure, equilibrium B is much more stable because it's more curved at the bottom. So a little ball, if you're near that equilibrium, a little bit of noise will roll back more quickly than if you're near this equilibrium and you have a little bit of noise. 
And if we're thinking about stability uncarefully — not that ecologists are ever uncareful about quantitative ideas — if somebody says, how do you measure stability, you quickly calculate a dominant eigenvalue. So this is kind of like what we would do without thinking about it. Remember the title of my talk was, if we think more carefully. If we instead use one of the other measures of stability — say, for example, this basin depth idea, which I've called the ability to withstand major change, or how much noise do you need to push you out of this basin — we actually see, of course, I mean, you can see it right here, on this measure A is much, much deeper. So it would be much more stable. So we have two different measures of stability: in one case A is less stable, in the other case A is more stable. What do we see? Well, I'm just going to show you one realization of this model. I'm going to plot the population sizes against time. This is equilibrium A for the consumer and the resource; I'm going to put them on the same axis as equilibrium B. And we're in fact, of course, spending much more time in that deeper basin. And so we see that if we actually use the whole quasi-potential, we can get a much more informative measure of stability, at least in terms of where you expect the system to be most of the time. So that's kind of the new tool. I wanted to show you one more example. This is also a consumer-resource model, just with a type II functional response now. I just wanted to show you that we can calculate the surface for non-point attractors. So this is the limit cycle. And the downhill component is sort of convergence from big cycles up higher on the shape to this cycle down in the bottom. And then the long-term dynamics here are circulation around the bottom. So okay, I probably should have been emphasizing along the way — I started talking about stochasticity and then I started talking about quasi-potentials and I maybe sort of skipped a thought. So now that you know what a quasi-potential is, here is what it gives us the ability to do. I said this a little before when I was talking about potentials: it gives us a global view of the surface, instead of this very localized linear stability analysis, which is a perturbation analysis. So in some sense that takes into account the idea that you're going to be perturbed off equilibrium, but it imagines very local perturbations. Now we have this surface that gives sort of the global description of the deterministic skeleton, and we can imagine a ball being jostled as it rolls around this, and we have sort of this full description, and the jostled ball — where is it going to tend to go? We can get that from this surface. Yes. So you don't multiply the w1 and w2 by R and C? No, this is additive noise here. You can deal with multiplicative noise by this kind of approach by doing a change of variables, but that's not what we've done in this example. Wouldn't that give you negative values? I mean, if you run that SDE it's going to go negative. Probably, yes. Okay. Yeah. Right. And actually that was a well-timed question for the thing that I just realized I kind of forgot to emphasize, which is like how does stochasticity factor in, right? So I talked about this when I was showing the potential but then I didn't come back to it when I was showing the quasi-potential. This surface comes from the deterministic skeleton so we're fine. The surface is well behaved. 
We're imagining a ball that's being jostled around the surface, and as written we're not preventing it from being jostled into negative values. But that's sort of — that's a problem for studying the dynamics themselves, not necessarily a problem for developing the tool that allows us to study the dynamics. But yes, additive noise is the simplest way to calculate these structures, these surfaces; for multiplicative noise you have to do a change of variables, which — yeah, well, I'm not going to go into that, but yes, good observation, I'm glad you brought that up. Okay, so to sort of summarize this first bit — I've only got five minutes left; the second bit is in fact shorter. The quasi-potential is a useful tool for studying non-equilibrium dynamics. To sort of map it back to the reasons I gave you at the beginning for why we are not at equilibrium a lot of the time: if we're in this situation of continual stochastic jostling, like we were just talking about, we can consider the dynamics of the system as like a jiggly ball moving around the surface. If we're in this situation where we have a large disturbance, then you're looking at kind of tossing the ball across this landscape and looking at what happens based on where it lands. If we're talking about an environmental change, or some sort of change to the system itself that the dynamics are then going to attempt to track, then we're looking at a change in the shape of the quasi-potential itself. So if we start here and there's an environmental change — this is an example where the environmental change crossed a bifurcation — this surface again will tell us how to get from point A to point B. So there's a nice correspondence between this as a tool for looking at various kinds of causes of non-equilibrium dynamics. So to come back to what I told you I was going to focus on, which is the open questions that are now coming into reach. This is not an exhaustive list — this is sort of a brainstorm I made when I was writing my slides — but essentially, "what does my system likely do when it's not in equilibrium?" is now something that we have a nice tool to use to answer. "How stable is this ecological state?" — that's the toy example question that I've walked you through. And the nice thing is "stable" is in quotation marks here because it means a zillion different things, but we can measure a zillion different things: we can be explicit about what we mean by stability and then we can measure what we have defined that we mean. We can talk about the expected frequency of shifting between alternative stable states. You can take the basin depth and use it to get — it's a small-noise approximation, but according to our simulations it seems to work pretty well in the case of large noise — so there's a formula you can use to look at mean first passage times between basins, and what state should I expect to see my system in most of the time, you know, etc. Okay, in my short time remaining I want to just give — and this is actually the reason it's brief, it's work that we haven't really done yet — but I want to talk a little bit about stochastic networks. This is collaborative work that I'm doing with Peter Thomas, who's at my home institution in the Math Department. And this collaboration came about because I've been on a soapbox for at least 10 years saying when we ignore stochasticity we get all sorts of things wrong. 
And the sorts of things I have been thinking of was inferring system size from time series data considering how dispersal effects, synchrony instability, yada, yada, yada, yada, it's all wrong unless we like think about stochasticity. So this was like my shtick. And then a few years ago I saw Peter's postdoc, Dina Schmidt give a talk where she illustrated really nicely how you can ignore most of the stochasticity in the system, in a stochastic network specifically is what she was working around and still get things mostly right. And so she was showing that if you get the correct mean that didn't surprise me necessarily but also a good estimate of the variance of dynamics in a particular noted in network while ignoring most of the stochasticity. And I looked at the contrast between sort of my message and Peter and Dina's message and I sort of thought, okay Peter and I will sit down and figure this out. So we're just starting to sit down and figure this out. The way Dina's theory sort of works is if you have a fully stochastic network. So I look at a network and I think a meta population or food web, Peter and Dina look at a network. They've mainly collaborated with neuroscientists in the past and they think of ion channel states, whatever it is, you have sort of probability mass particles moving around this network. And suppose you're interested in the dynamics at some focal node. I would label this two here. If we think of each of these fluxes as stochastic and of course depends on how you define the distribution of that flux but suppose you can stick a mean and a variance on each of these and you're happy with that then to actually represent, I didn't do it, this model you would need, there are 40 unidirectional fluxes so with the mean and variance on each that's 80 parameters. What ecologists usually do and I think neuroscientists do although I'm not sure about that is to just forget about the variance. So this would be symbols are let's just use the mean field and call it a deterministic network that would require 40 parameters. What Dina and Peter did was figure out the minimally stochastic model essentially using mean field almost everywhere but identifying what they call the important edges. I'll tell you what that is on the next slide and only including variance on those edges that in this example would give us 46 parameters and still give us very good representation of the stochastic dynamics at this node. I recognize that I'm out of time so but this is I think the last slide I can show you. The definition of edge importance is based on the idea that you can take the variance at any particular node and partition it into contributions from all of the fluxes in the network. So R is what they call the edge importance measure even if I wasn't running out of time I was not going to walk you through it but that's what it is if you care. And I don't know you don't care about the equation but I thought you might want to see what goes in it. But anyway the point is if you treat any of these transitions as deterministic then you're removing that R term from this variance calculation. But the point that they found is that in most networks only a few nodes are important sorry this is a typo that should only have a few fluxes are important so you can get actually really good it will be an underestimate but a really good estimate of the variance even by modeling most of the nodes as deterministic. 
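For concreteness, here is a toy Python version of the comparison being described. It is not the Schmidt–Thomas construction and does not use their edge-importance measure R; it just simulates a focal node fed by a dominant inflow, a minor inflow, and a density-dependent loss, once with every flux stochastic and once with only the dominant flux kept stochastic, and compares the variance at that node. All rates are made up.

# A toy illustration of the idea just described (NOT the Schmidt-Thomas
# construction or their edge-importance measure R): one focal node driven by
# three fluxes, simulated fully stochastically and then with only the largest
# flux kept stochastic, the others replaced by their means.
import numpy as np

rng = np.random.default_rng(0)
a_in, b_in, loss = 50.0, 2.0, 0.1      # dominant inflow, minor inflow, per-capita loss
T, burn = 100_000, 1_000

def run(keep_all):
    x = (a_in + b_in) / loss           # start near the deterministic equilibrium
    xs = np.empty(T)
    for t in range(T):
        A = rng.poisson(a_in)                      # dominant inflow: always stochastic
        if keep_all:
            B = rng.poisson(b_in)                  # minor inflow
            D = rng.poisson(loss * x)              # density-dependent loss
        else:
            B, D = b_in, loss * x                  # replaced by their means
        x = max(x + A + B - D, 0.0)
        xs[t] = x
    return xs[burn:]

full, reduced = run(True), run(False)
print(f"variance at focal node, fully stochastic model : {full.var():8.1f}")
print(f"variance, only the dominant flux stochastic    : {reduced.var():8.1f}")

In this toy the loss flux also carries a lot of the variance, so the reduced model recovers only part of it (and, as noted above, it is always an underestimate); the point of an importance measure is to identify, for a real network, which few edges need to stay stochastic to do much better than that.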
So they were interested in this as a computational shortcut so you don't have to use as many random number streams in these neural network models that have lots and lots and lots of nodes and it's useful for that. But it is also I think really useful in ecology and I'm just going to skip I have one worked example here that I'm going to skip over and just get back to my list of questions. It also allows us to ask new questions in ecology like what's the most computationally efficient way to simulate dynamics in a population of interest but also given a population of interest what should we look at to get the best models for the stochastic dynamics in that population. We're lucky in ecology and the neuroscientists are unlucky and that we often have a choice of what nodes we call the node of interest. So in a meta population we can choose which subpopulation to go monitor or on a food web we can maybe choose maybe not amongst all the species but amongst many of the species which one we want to look at. In the neural networks the different nodes are ion channel states and you can only observe the one that causes the neuron to fire. They don't get to choose their focal node but we do so then we can start looking at their particular properties of a node high connectedness, low connectedness I don't know different types of high rates low rates whatever you get the point but can we look at sort of properties of the nodes themselves that are the best represented by this minimally state. You can derive a minimally stochastic network for any of them but sort of which are the best is there any sort of system there. So that's pretty exciting I think to me a new question in ecology and then you know how well do these minimally stochastic models perform in real life. You've got a graduate student and a new postdoc working on this the graduate student is a math student doing more of the theory development and the new postdoc about to start in my lab is going to be working on some case studies to kind of get a sense of like how good is this in real life. So that is it I've included all of my references here because I skipped over all of the sort of technical details so this is where to look if you're interested in seeing more and I'm happy to take questions and I think Erin is still awake and I'm really thank you. You can buy my own beer. Thank you. So, you love? So, okay there's some things that I would say dislike philosophically right and let me defend myself as a strong proponent of what you could let me know. So you said the world is not an equilibrium. I say you cannot make a statement because being an equilibrium either you're a physicist and you use an equilibrium in one way. Yeah. If you use it in a mathematical way you cannot say it because equilibrium is a property of a model. It's not the property of the world and the world we don't know even how that is defined. So I have one system I model it in two different ways and one model is an equilibrium my other model is not an equilibrium. Absolutely. So if you are frustrated with the models where we have a low number of differential equations and then we only study the steady states of equations that we believe to be population densities as well then yeah I'm totally frustrated with this as well. But for me the problem is not that equilibrium is a way of modeling. Yeah we cannot say we want to keep this as models where it's a variable of population densities and maybe go to non-equilibrium things. 
Alternatively we can say we go to bigger models with the variables of something else. Yeah. But we retain the possibility to be in equilibrium and by the way you approach falls into that because if you say how much of the time we are in that state and how much of the time we are in that state right there is of course again an equilibrium. Yeah. It's not an equilibrium point. Yeah yeah and I mean it's an equilibrium probability distribution. It's an equilibrium probability distribution instead of the steady state in the higher than the constant. Yeah absolutely. That's I actually I fully agree with everything you said. So what is the problem is you get a equilibrium. No no no and it's not a condemnation it's a problem with the status quo in ecological theory and there are multiple solutions and I think I've outlined sort of a low dimensional way forward and you've done a really nice job of describing a higher dimensional way forward and I don't have a yeah I have no sort of horse in the game I just have the ones I've worked on. The reason why I love the equilibrium is the equilibrium is a philosophical construct right. The equilibrium is not a thing that attributes to the world. The equilibrium is a property of a model. It's an idealization that we make right. Yeah. The equilibrium is our frictionless point mass. It is our idealization. Yeah. And now if you're if you're saying okay you can go to this different framework and I think the one reason why I don't like the classic potential is okay you can fix the problem as a potential in 2D yeah but you can't fix it in 3D. No you can. You can still compute this you can't look at a nice figure. You can still compute the surface and calculate the relative height differences. And in 3D you can have non-integral dynamics. They can protect the can fundamentally and mathematically improve in a proven way. They can do nothing like a classic potential. That's bad. You just take this up maybe. No no no no. You can still. You can still compute. I mean we can't. Our Q-plot can't. Our package can't but that's not. How do you attract it? How can you have it? Let's see. That's a question. That's a question. The question is how can you not easily come with us? Because you wrote down a couple of stochastic differential equations. Yeah. So there's a PD that my Christians take us right down to both the point equations. Yeah. In this case there's no way to get solved with a stochastic equation. So if you get a description that corresponds to something like probability density equation. Yeah. So if you talk about the equilibrium of the focus function. Then you're talking about a need to know the probability density function. Yes. Now at that level you're going to be able to calculate how often are you here? Absolutely. How often are you there? So that's a different equilibrium than the one we're thinking of the ODUs. Right. Right. And that's really interesting. I'm glad you kind of put in those terms because one thing we thought a lot about with this, you know, can you take the quasi potential flip it over and like how much does that correspond to the stationary probability density? It should, right? Equalityively. How wide those peaks are quantitatively is a function of the noise. And the nice thing about these quasi potentials with additive noise is that it's not a function of the intensity. It's a function of the nonlinearities in the system and you get the same quasi potential regardless of the noise intensity. 
The noise intensity just changes your sort of path over the surface. But we've been thinking a lot about the relationship between the stationary density probability distribution and the quasi potential because if we ever want to do something like estimate a quasi potential from data, we're going to need to understand that correspondence. So I can't say we've made great progress but I think that's a, yeah, what you've said is very, very relevant. Yeah. That's a good point. So yeah. So, I mean, I'm not, I'm just a criterion and I like, I like a potential as much as anyone. But I'm still a bit confused about the definition. I mean, it's kept at all, where is that? So you have a nonlinear function multiple variables, so you represent it as the virgin less part and that's your potential and then whatever remains is there and is the unique representation, you can always do that. So what's, what's, what's, what's, what's, so again, I'm not, but you mentioned here that you're kind of wanting to get, you want more, yeah. But then on that kind of, pretty mad at the end, I do a lot of, not, not for dynamic for my chemical, but for the people like Jim Vann in, in study group, been trying to do the same thing like you do for chemical reaction networks and there, what he does, he defines the potential as log of P. So basically thinking of Boltzmann distribution or the potential and P would be a distribution of the torque of planning equation that was just mentioned and again, whatever remains is there, is the circle of the required and stuff. So that, that, that, that being said, it's, I mean, that didn't really go too far because the noise is, as you, unlike the terminal noise for the sort of Boltzmann distribution, which is constant kind of everywhere in all the function of temperature, noise and chemical reaction, I'm sure in ecological dynamics, is a function of M and also a lot of there would be correlations, and you have multi-dimensional systems, there would be correlations between noise and different dimensions, because some of the noise comes from seasonal variation, which are practically created, they're in the same way or the opposite way or something like that. So I'm, I'm just trying to get some of those philosophical questions. If we solve those complications, what, you, you, you find the potential, what's the use of it? So I'm gonna backpedal from the deep philosophical question for a second and address what I think you were getting in the beginning, which is, I mean, there's, there are, there are different ways to decompose a system into a sort of downhill part and a certain way part, right? And I think that's kind of, that's what you're getting at, right? There are multiple ways and the reason we like this way in particular is that the quasi-potential, the, the downhill gradient of the quasi-potential is always a Lyapunov function of, of your system. And so if you do one of the other decompositions, your base, the location of the deepest, or you know, the, the wells, location of the wells isn't necessarily gonna line up with the point of equilibrium in the deterministic part of the model. In the quasi-potential, it does always correspond and we really, really like that for sort of ease of interpretation. It really makes this much closer to our use of potentials as a sort of heuristic tool or a, or a, you know, sort of qualitative analogy. And so that's why we picked this one. It's not the only one. 
And actually Alex Strang, the same student I just showed a picture of on the previous slide, has also been kind of comparing different decompositions — work in progress. Yeah. The deep philosophical question — maybe I'll pretend like I forgot you asked it and we can talk about it later. Is that okay? I mean, it's a good question. I don't have anything particularly intelligent to say about it right at this second. So I need to check on this. It's a simple question: for how many species is it manageable to calculate the quasi-potential? If you have many species, what do you do — how do you calculate all these things? Yeah. I mean, I think there's definitely an upper limit, and I don't know what the number of dimensions is, but there's an upper limit to where this is a useful tool, both in a practical sense — like, can you actually get the numerics to work — but also, I think, in a conceptual sense. So I think for very high dimensional systems, it's probably not the right approach. Yeah, actually, once you have more than three or four variables, I think, it's hard to know how we can even think about it. Yeah. I think you've actually put your finger on something which lies under a lot of what we may think about this week, which is: how do we move from these very low dimensional systems to much higher dimensional systems? And one of the challenges of ecology is that often in these high dimensional systems the connections are sort of very particular — the different actors are very different, so you can't do some of the kinds of averaging you might like to do. And how you really do that is, right, so. And I think it's at the heart of the quote that Axel read this morning — that community questions, those are high dimensional questions, and yeah, how do we get there? It's a hard question. Maybe time for another one or two questions, given the hour, but I'm sure Karen would be happy to carry on. Well, maybe I shouldn't say that. So for a two-species system, if you just look at the vector field and the basins of attraction between, for instance, two equilibria, would you get the same picture as with the quasi-potential? So, for instance, if you would estimate somehow the area that corresponds to each of the two different equilibria, just based on the vector field. Well, obviously the
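One thread from that discussion — flipping the potential over and asking how it relates to the stationary probability density — can be made fully explicit in one dimension with additive noise, where for dX = f(X) dt + sigma dW the stationary density is proportional to exp(-2 U(X) / sigma^2) with U = -∫ f. The Python sketch below checks that numerically for an assumed double-well f; it illustrates the correspondence, not the two-dimensional quasi-potential computations from the talk.

# A 1D illustration of the correspondence discussed above: for
#   dX = f(X) dt + sigma dW   (additive noise),
# the stationary density is proportional to exp(-2 U(X) / sigma^2), U = -∫ f.
# The double-well f below is an assumed toy, not an ecological model.
import numpy as np

sigma = 0.5
f = lambda x: x - x**3                    # U has wells at x = -1 and x = +1
U = lambda x: x**4 / 4 - x**2 / 2

rng = np.random.default_rng(1)
dt, n_steps = 2e-3, 2_000_000
x = 1.0
samples = np.empty(n_steps)
kicks = rng.normal(scale=sigma * np.sqrt(dt), size=n_steps)
for i in range(n_steps):                  # Euler-Maruyama
    x += f(x) * dt + kicks[i]
    samples[i] = x

edges = np.linspace(-2.0, 2.0, 81)
hist, _ = np.histogram(samples, bins=edges, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
theory = np.exp(-2.0 * U(centers) / sigma**2)
theory /= theory.sum() * (centers[1] - centers[0])   # normalize on the grid
print("largest discrepancy between simulated and predicted density:",
      float(np.abs(hist - theory).max()))

With additive noise the exponent involves only U and sigma; multiplicative or correlated noise breaks this simple form, which is part of why the correspondence between data, stationary densities, and quasi-potentials raised in the discussion is not yet settled.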
|
Classical ecological theory relies heavily on the principles of deterministic dynamical systems, and methods from mathematics and physics that are more appropriate for stochastic systems are unfamiliar to many ecologists. As a result, when stochasticity plays an important role in shaping ecological dynamics — as it often does — our ability to fully address certain questions can be limited. In this talk, I will give an overview of some new (or at least newly extended for ecological applications) mathematical methods that bring important new classes of questions into reach.
|
10.5446/57605 (DOI)
|
So I will start with this quote. Some people have seen it often before. It's this quote from the British poet John Gay who wrote, Less men suspect your tale untrue, keep probability in view. And in this context, less biologists suspect your model untrue, keep probability in view. And an important probabilistic aspect of ecology that we've heard about already, especially in Karen's talk yesterday, is that there's environmental stochasticity. In other words, fluctuations in demographic rates, like survivorship, reproduction, growth, due to fluctuations in environmental factors like temperature, precipitation, resource availability. So the basic way it works is you fluctuate in an environmental factor like temperature. You have potentially a nonlinear response of your fitness to that temperature. And then that creates fluctuations in fitness. Now, the main question I look at today is sort of mathematical methods to address this question, which is, how does environmental stochasticity influence population persistence and the maintenance or disruption of diversity? So there's been a lot of theoretical work. I'm going to try to focus on some mathematical techniques for analyzing models with environmental stochasticity in a rigorous fashion. And the main reference is a recent JMB paper with my longtime collaborator, Michelle Benim, from Switzerland. So I'm going to start really simple, which is a geometric random walk. So you're keeping track of the scalar quantity, the density of the population at time p. Everyone in that population has the same fitness at any given time step. And I'm going to assume just for simplicity for the moment, those fitnesses are IID across time. So then you get the similar simple stochastic linear difference equation where the density in the current time step gets multiplied by the fitness. That gives you the next density. It's trivial to solve for solutions of that linear difference equation, just the iteration. And then if you want to understand what can you say about the growth of the population, you can just apply the law of large numbers by taking the log of the density, dividing by t, then that product, turns into a sum. And then with probability 1 under the right assumptions on the first moment on the log of the fitness function, you get the limit as t goes to infinity is given by the expected log of the fitness function at any point in time, such as time 0. And I'm going to call that throughout this talk the realized per capita growth rate of the population. So r will always mean that throughout the talk. Now, if r is negative, and we run a model where we're randomly sampling the fitnesses and do lots of simulations and plot them in a semi-log plot, we'll get something like this. So you'll see the tendencies to go to negative infinity and log densities, i.e. extinction. On the other hand, if r is positive, you get the opposite trend. So this idea of looking at the expected value of log fitness is not new. In fact, it's very old. It goes back to the work of Daniel Bernoulli in this treatise on economics that was printed in 1738. And in this treatise, you can find loosely translated the following quote, which is dealing not with populations of organisms, but populations of money. So in this case, the fitness terms are the returns on your money. 
And if you want to know in the long term how fast does your money grow, you take the terms, the fitnesses, multiply them together, and take a root corresponding to the number of terms — which is exactly what you do in the model I described before, if you have a finite number of fitness values that you're taking on with equal likelihood. And so then if you exponentiate that r, you get this geometric mean of fitness, which I'm sure everyone's familiar with. And a key property of the geometric mean is that, if the fitnesses have some variance to them, it is strictly less than the arithmetic mean. And several hundred years later, in this classic paper by Lewontin and Cohen, they made that observation. And they got a PNAS paper out of it, which is that you can have the expected population size grow infinitely large, but you're going asymptotically toward extinction in any realization, due to the difference in geometric and arithmetic means of fitness. But these models, of course, that I've described so far, are way too simplistic, and I'd probably get chastised by Kenneth Boulding for believing that could go on forever in a finite world. And certainly, I don't want to be called a madman, or, even worse, perhaps an economist. Apologies to any economist out there. So of course, people have developed models where you allow for density-dependent feedbacks, like negative density dependence. And there are various papers on this; I'm just citing two of them. But one of the key features of these results for these density-dependent models is, if you look at the expected value of the log fitness at zero density, that tells you whether you have persistence or extinction. And I'll say what persistence means exactly a little later. And of course, we can make models that are more interesting because we want to diversify the objects that we're looking at within a population or populations. And one way of diversifying the set of types is to account for differences in age or in stage or spatial location. So then we have these structured population models. And even understanding their dynamics became a really important theme of stochastic demography. And there's this delightful book by Shripad Tuljapurkar summarizing what was known as of 1991 about these random matrix models. And a key fact about these random matrix models is that there is also a realized per capita growth rate. And that's due to some work of Ruelle as well as others. And one of the things that's sort of amusing about Ruelle's paper — which is in, I've forgotten, it's not the Annals of Mathematics, I think it's the American Journal of Mathematics, which is a very good math journal — he did the analytics of understanding that this realized per capita growth rate actually is analytic with respect to parameter variation. So you could write down a derivative, but he actually calculated the second derivative wrong in that paper. And Tuljapurkar independently calculated it correctly. So this is a very famous small-variance approximation of the realized per capita growth rate. And notice again, fluctuations, at least in IID environments, also reduce the growth rate of the population. And again, to avoid being called a madman or an economist, there are results about what happens when you add negative density dependence. Again, the punch line is: if r is positive, persistence; if r is negative, extinction. OK, but how about looking at diversifying models in terms of many different types that are interacting? 
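Before the talk turns to interacting types, here is a minimal Python sketch of the single-type story above — the sign of r = E[log W] governing almost every realization while the arithmetic mean governs E[N_t] — using made-up fitness values in the spirit of the Lewontin–Cohen observation (two equally likely fitnesses; these numbers are an assumption, not from the talk).

# A minimal sketch with made-up fitness values: W = 1.5 or 0.6, each with
# probability 1/2, so the arithmetic mean is 1.05 (> 1) while
# r = E[log W] = 0.5*log(0.9) < 0.  Expected population size blows up even
# though almost every realization heads toward extinction.
import numpy as np

rng = np.random.default_rng(2)
fitnesses = np.array([1.5, 0.6])
T, reps = 400, 10_000

logN = np.cumsum(np.log(rng.choice(fitnesses, size=(reps, T))), axis=1)  # log N_t, N_0 = 1

r = np.log(fitnesses).mean()                       # realized per-capita growth rate
print(f"r = E[log W]                 = {r:+.4f}")
print(f"mean of (log N_T)/T          = {logN[:, -1].mean() / T:+.4f}  (matches r)")
print(f"fraction of runs with N_T < 1: {(logN[:, -1] < 0).mean():.3f}")
print(f"exact E[N_T] = 1.05^T        = {fitnesses.mean() ** T:.2e}")
# The huge expectation is carried by vanishingly rare trajectories, which is
# why a naive Monte Carlo average of N_T would badly underestimate it.

Exponentiating r gives the geometric mean of fitness, here about 0.95 per step, which is the long-run shrink factor seen in nearly every individual realization.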
So you might be wanting to know, if you have different types, how do you maintain that diversity? And there are sort of two classes of models that have been looked at fairly extensively: in the context of population genetics, maintaining a polymorphism, and in the context of ecology, maintaining species coexistence. So, some early theoretical works — and I'm going to distinguish here theoretical work from mathematically rigorous work; apologies if people find that offensive, but hopefully you'll see that there is a distinction. So there was theoretical work on protected polymorphisms, so how fluctuations in fitness can lead to maintenance of polymorphisms in populations. And some of that work was initially done by Haldane in this paper in 1963 for haploid populations. And then John Gillespie, in this delightful paper that I'll pay a tribute to at the end of this talk, did stuff for diploid populations. And in fact, he originally discovered, although he didn't realize it, the storage effect in the context of diploid populations. And then, of course, in the context of ecology and species coexistence, there is the quite famous storage effect, a mechanism for maintaining coexistence between competing species through fluctuating environments. And of course, the champion of the storage effect was Peter Chesson. And it's gotten to the point now where we have data sets that we use to calibrate models, where we even do simulations instead of math to understand how strong the storage effect is, as discussed in Ellner's paper. Now, the mathematical work, surprisingly, was fairly limited on this topic. So the main papers I can think of are papers dealing with two competing species — these are discrete time models — and that was work by Chesson and Ellner, both appearing in 1989. And then there was not much done on understanding persistence of interacting populations in the stochastic setting. But quite interestingly, an invasion-based theory for deterministic models was done for any number of species and any type of interactions. And by invasion-based, I just mean that you're looking at these realized per capita growth rates for one of the species when it is rare, and using that as a way to determine whether the species coexist or not. So there was all this work done for deterministic models, and most important are the paper of Josef Hofbauer in 1981 and then Butler and Waltman in 1986, who took different approaches to this question. And what I want to do is just show you that we've advanced a little bit in the stochastic theory, because we can sort of recover the results of Hofbauer for stochastic models. OK. So what are the stochastic models I'm going to talk about? They are ones of this form. I'm going to keep track of k distinct populations — these might be different genotypes, they might be different species. And then xi is going to be the density of the i-th population. I'm going to have these auxiliary variables, which I'll explain in a minute. And then, again, a sequence of IID random variables. So each species updates by taking its current density and multiplying by its fitness fi, which depends on the state of the community — so x is the vector of densities of the community — depends on the auxiliary variables, and depends on the sequence of IID random variables. And then you have some dynamics for the auxiliary variables that depend on everything. So the first thing you might ask is, what are these auxiliary variables? I won't be exhaustive here — there can be lots of different things. 
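In symbols, the model class just described can be written roughly as follows — my notation, reconstructed from the verbal description, so treat it as a sketch rather than the speaker's exact formulation:

```latex
% Rough symbolic transcription of the model class described above (notation mine).
\begin{align*}
  x_i(t+1) &= x_i(t)\, f_i\bigl(x(t),\, y(t),\, \xi_{t+1}\bigr), \qquad i = 1,\dots,k, \\
  y(t+1)   &= g\bigl(x(t),\, y(t),\, \xi_{t+1}\bigr),
\end{align*}
% where x(t) = (x_1(t), \dots, x_k(t)) are the species densities, y(t) are the
% auxiliary variables, and \xi_1, \xi_2, \dots is the i.i.d. environmental sequence.
```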
One thing — which we were talking about yesterday — is the importance of autocorrelation. I said I'm using IID random variables, but you can encode autocorrelation, because the y can be almost any type of Markovian forcing dynamics. There's some deep theory that shows that almost any type of Markov chain can be realized by a random map with IID random variables. So in particular, you can get a multiplicative autoregressive model using this g function, and you can get any finite state Markov chain using this formalization, forcing the dynamics — the ecological dynamics. You can also account for any type of population structure, because those auxiliary variables can keep track of the fraction of a given type within a population. So it can be spatial structure, age structure, any combination thereof. You can also do trait evolution: yi might be a trait of the i-th population, and then you might have, say, a breeder's equation describing how that trait is evolving. Or you can also do environmental feedbacks. So you might have plant–soil feedbacks, where y is the soil dynamics, or you might have some environmental feature, characterized by a y, that is being engineered by the species in the population. So any of these things can be encoded in these types of models. So it's fairly general, is the point. Now, for these models, what do I mean by coexistence or persistence? There are two definitions. One of them is the one that I highlighted initially, because initially I could only prove theorems for this definition. And the other one was highlighted initially by Peter Chesson in the 1980s. So I'm going to talk about the definition that I worked with first, because that's the one that I think is somewhat more natural. And it deals with the fact that often we're looking at a single realization of a community or a population. So we'll say persistence holds almost surely if it holds for almost every realization of the model. And you can just look at the right and ignore the left. So if we have a model with no auxiliary variables and two species — x1, x2 are their densities — we can say, well, this gray region is within delta of extinction for one of the species. If you run the model for a long time, you can ask what fraction of time that simulation spends in that gray region, within delta of the extinction set. And stochastic persistence says that fraction of time goes uniformly to 0, with respect to initial conditions, as delta goes to 0. That would be stochastic persistence almost surely. And the complement is the ensemble point of view. So you run many copies of your model, or you have many replicate experiments. And so stochastic persistence in probability, which was introduced by Peter Chesson, is: you take many realizations of your model and run them far into the future — so I'm just showing the final time point for those many realizations — and you ask what fraction of those realizations, far into the future, are within delta of the extinction set. And you want to insist that that fraction goes uniformly to 0, with respect to initial conditions, as delta goes to 0. If that happens, you have stochastic persistence in probability. And fortunately, for all the results I'll talk about today, when you have one, you have the other. So — thank you, Kevin. What was that, the other reason? Let me address that later; I'll have to think about it. I think you can. So Peter Chesson, early on, explained why he thinks this is the right definition of persistence, stochastic persistence. 
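Here is a minimal numerical illustration of the almost-sure definition, using a toy two-species stochastic Ricker competition model of my own (not a model from the talk): run one long realization and measure the fraction of time spent within delta of the extinction set.

```python
# A minimal sketch of "stochastic persistence almost surely": one long run of a
# two-species stochastic Ricker competition model, and the fraction of time it spends
# within delta of the extinction set {x1 = 0 or x2 = 0}.
import numpy as np

rng = np.random.default_rng(3)

def run(T=200_000, sigma=0.3, a12=0.5):
    eps = rng.normal(0.0, sigma, size=(T, 2))
    x = np.empty((T, 2))
    x[0] = (0.5, 0.5)
    for t in range(T - 1):
        x1, x2 = x[t]
        x[t + 1, 0] = x1 * np.exp(1.0 + eps[t, 0] - x1 - a12 * x2)
        x[t + 1, 1] = x2 * np.exp(1.0 + eps[t, 1] - x2 - a12 * x1)
    return x

x = run()
for delta in (0.1, 0.01, 0.001):
    frac = np.mean(x.min(axis=1) < delta)   # time spent near the extinction set
    print(f"delta = {delta}: fraction of time within delta of extinction = {frac:.4f}")
# Stochastic persistence (almost surely) says these fractions shrink to zero as delta -> 0.
```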
And basically, Chesson realized that many models have good years and bad years. In good years the population increases; in bad years the population decreases. And therefore, with probability 1, you're going to have an arbitrarily long sequence of bad years, which means you can get to arbitrarily low densities. And you don't want to exclude those models from what you're thinking about. OK, so now how do you check the stochastic persistence condition? You do it with the sort of invasibility criteria that many people might be familiar with. So what you imagine is: OK, suppose all but a subset of species become very rare. So I have, say, j1 through jl that are not rare, and the rest of the species become rare — in fact infinitesimally rare, so they're not there. We're worried about them not being able to stay in the community. And the rest of the species, the ones that are there, the j-sub-i indices, are going to probably approach some sort of stationary behavior. Let's assume that that stationary behavior is even ergodic — that's the only stationary behavior we'll have to pay attention to — with some law mu. So visually: here are all these species that are not rare, approaching some sort of stationary behavior, as well as the auxiliary variables approaching some stationary behavior. And now you want to know, is one of the infinitesimally rare species able to increase? So what do you do? You look at this axis with the infinitesimally rare species, and you want to know what its rate of increase is. So it's just a linear question. And it turns out all you want to do is take the log fitness of that species and average across all the fluctuations — the fluctuations in the environmental variables and the fluctuations in the community state, as well as the feedback or auxiliary variables. So that z is this whole combination. And if that quantity is positive, that means this infinitesimally rare species would tend to increase exponentially. If it's negative, it would tend to decrease exponentially. But you can also compute this quantity for all the species that are not rare, that are present. And it's not that easy to show, but one can show — and it's fairly intuitive — that their per capita growth rates are always 0. They're present in the community, they're not going off to infinity, they're not going to 0, so on average it must somehow be 0. OK, so now how do we use these things to state a theorem about persistence? So let me first define the set that we're trying to avoid. That's the extinction set: the set of points in the state space where one of the species is not present. And here's the criterion that's going to be used for stochastic persistence. It's the exact criterion that Josef Hofbauer used for deterministic models, stated very slightly differently, because he didn't do it with respect to measures — but it can be done with respect to measures, even in the deterministic setting. And what it says is: for each species, you can associate a weight — p1, p2, p3, and so forth. They're positive weights; they could add up to 1 if you want. And you take that fixed weighted combination of the realized per capita growth rates of the species. So that's saying, somehow I'm taking some community average of the per capita growth rates. That's roughly what that is. 
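As a sketch of how such invasion growth rates can be estimated in practice, here is the same toy Ricker model as in the earlier sketch: simulate the resident alone until it is roughly stationary, then average the log fitness an infinitesimally rare invader would experience. Again, this is my own illustrative code, not the speaker's.

```python
# A minimal sketch of estimating an invasion growth rate r_i(mu): run the resident to
# (approximate) stationarity, then average the invader's log fitness over those
# resident fluctuations; the invader's own density is effectively zero.
import numpy as np

rng = np.random.default_rng(4)

def resident_series(T=100_000, burn=1_000, sigma=0.3):
    eps = rng.normal(0.0, sigma, size=T + burn)
    x, out = 0.5, np.empty(T)
    for t in range(T + burn):
        x = x * np.exp(1.0 + eps[t] - x)        # resident's own Ricker dynamics
        if t >= burn:
            out[t - burn] = x
    return out

sigma, a12 = 0.3, 0.5
x_res = resident_series(sigma=sigma)
# Invader's log fitness each step: 1 + eps - a12 * x_resident (its own density ~ 0).
log_fitness_invader = 1.0 + rng.normal(0.0, sigma, size=x_res.size) - a12 * x_res
print("estimated invasion growth rate:", log_fitness_invader.mean())
# Positive here, so the rare species tends to increase; the resident's own realized
# per-capita growth rate averages to ~0, as stated above. Such r_i(mu) values are the
# ingredients of the weighted-combination criterion discussed next.
```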
If, for that fixed choice of weights, that weighted combination of realized per capita growth rates is positive for any ergodic behavior supported by a proper subset of species — so not all the species — then you have the following theorem: if this positivity condition holds, then you have stochastic persistence in probability and almost surely. And you even have estimates about how much weight is being put near the extinction set, which I won't show here. On the other hand, if you can flip the condition around — if you can find weights so that this weighted combination of the per capita growth rates is always negative whenever the ergodic measure is supported by a proper subset of species — then you get the opposite result, which is that the extinction set is locally stochastically stable, meaning if you start sufficiently close to that set, there's a very high probability that you asymptotically approach that set. And again, these are models with continuous state — continuous densities — so you can only asymptotically approach it. OK, so those are the theorems. So let me now briefly show you a few examples, just to give you a flavor of how you can use them, and how they might even relate to things that you're familiar with. So the first example I'm going to talk about is an annual plant model. I'm not going to write down the model; it's just a model of an annual plant, keeping track of the seeds, OK? It's a model of two competing species, and it can be analyzed using results from a wonderful little piece by Peter Chesson in 1988 in some Springer lecture notes. So it's two species that are competing, and you ask, how do I use that criterion I mentioned before? So we have species one here. Right now I'm not having any auxiliary variables. And let's assume each species by itself can persist. That would mean R1 at the Dirac measure at 0 is positive, and R2 is as well. So they both can persist by themselves; you have something like this, OK? Now you want to know, can each species invade when it's rare? So you have this quantity here — I call this mu1, this mu2 — and we want to know: is R2 of mu1 greater than 0, and is R1 of mu2 greater than 0? And it turns out this is the classic mutual invasibility condition. Why is that the same as what I wrote on the board before? Man, that's not obvious. But what do we need? We need weights such that p1 times R1 of any ergodic measure plus p2 times R2 of that measure is greater than 0. So that's trivially true at the Dirac measure at the origin, because these are both positive. Here, if we look at this measure, by that result I mentioned, R1 for species 1 is 0, and that's just saying you need R2 to be positive for persistence — and similarly for this one. So the mutual invasibility criterion is identical to the criterion I put on the screen earlier. So if we apply that to an annual plant model where you have neutral coexistence without any environmental stochasticity, you might naively think: oh, if I add environmental stochasticity, you get sort of a drift along that line that might lead to extinction. That's not the case. But what you get depends very much on what is fluctuating in time. So you can use the results of Chesson to show — for large or small noise; it doesn't matter what the size of the noise is, it just matters that there is noise — that if it's fluctuations in survivorship of the seeds, you get that both of the R terms are negative. So you get a bistability. 
So with positive probability, you lose species 2; with positive probability, you lose species 1. And then you can also show that with probability one, one of those events occurs. On the other hand, if you have fluctuations in germination rates, you can actually show that R1 and R2 are both positive when evaluated at the opposite measures, and then you get stochastic persistence. And you can even numerically estimate the stationary distribution of that model — and in that case, there is a unique stationary distribution. That's one example, but it's already covered by the older theory; it's two species. OK, what about doing more than two types? How about rock, paper, scissors? That's a simple two-dimensional thing, but with three types. So that's like the classic childhood game, except in a replicator context. You have three types of individuals; they breed true and interact like rock, paper, scissors. If you win, you get a payoff bigger than one; if you have a draw, you get a payoff of 1; if you lose, you get a payoff less than 1 — and those payoffs fluctuate randomly. Then what you can show is that you get a heteroclinic cycle on the boundary of the phase space: rock gets displaced by paper, which gets displaced by scissors, which gets displaced by rock. And then you can show that if you evaluate these realized per capita growth rates at the only ergodic stationary measures, which are the Dirac measures at the equilibria, you get the following criterion for stochastic persistence: the product of the expected log winnings plus the product of the expected log losses has to be positive. And that gives you stochastic persistence. Here's a simulation illustrating that. Oops, went too far. There. And if you have the reverse inequality, you get asymptotic extinction. So I started here; with probability one, you're going to approach the heteroclinic cycle. So for the last thing — I've got two minutes, maybe three, based on having started a little late. The last one is something I want to talk about because it actually just got accepted this morning. But I didn't write it this morning. It's something that, for whatever reason, I obsessed about for a period of time, and I still find it fascinating — and maybe you'll see there's some merit in that, or not. So this is going back to one of the first models that exhibits the storage effect, which preceded the ecological example. And this is John Gillespie's SAS-CFF model. You can see what a wonderful acronym. No wonder it took a great hold, which it did. And the basic idea of the model is you have diploid individuals. They have k alleles at a single locus that determine the genotype, and those alleles contribute additively to some sort of phenotype or physiological activity of the organism. And then you assume that the fitness of an organism is an increasing function of its phenotype or physiological activity via some C2 function. That's the C in CFF — it was originally concave, but it's just C2 in my context. And then, just to make it ecological, you can even have density-dependent mortality in the population, so you have population dynamics there as well. So what's the state space for this model? If we had three alleles, we'd start with the frequencies of those three alleles — so it's just like before, in the rock–paper–scissors game — but then we have to account for population abundance, so we have to go out into the third dimension. So that's our state space: this solid cylinder. 
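Stepping back to the rock–paper–scissors example for a moment, here is a minimal simulation sketch. The implementation and the payoff distributions are my own illustrative choices, not taken from the talk.

```python
# A minimal sketch of stochastic rock-paper-scissors replicator dynamics with random
# payoffs: winning payoffs > 1, drawing payoff 1, losing payoffs < 1, drawn fresh each
# generation.
import numpy as np

rng = np.random.default_rng(6)

def simulate(T=20_000, win=(1.2, 2.2), lose=(0.55, 0.95)):
    x = np.array([0.5, 0.3, 0.2])          # frequencies of rock, paper, scissors
    min_freq = np.empty(T)
    for t in range(T):
        b = rng.uniform(*win, size=3)      # payoff of type i against the type it beats
        c = rng.uniform(*lose, size=3)     # payoff of type i against the type it loses to
        A = np.array([[1.0, c[0], b[0]],   # rock: loses to paper, beats scissors
                      [b[1], 1.0, c[1]],   # paper: beats rock, loses to scissors
                      [c[2], b[2], 1.0]])  # scissors: loses to rock, beats paper
        f = A @ x                          # fitness of each type this generation
        x = x * f / (x @ f)                # discrete-time replicator update
        min_freq[t] = x.min()
    return min_freq

m = simulate()
print("minimum frequency over the run:", m.min())
print("median of the running minimum :", np.median(m))
# Whether the simulation persists or approaches the heteroclinic cycle depends on the
# payoff distributions, via the product criterion quoted above.
```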
And for this SAS-CFF model you can ask various things: do you get the alleles coexisting? Do you get the population persisting? You can address that with the theory I just described. So in particular, some of the cool results that you get from this: first of all, you get explicit expressions for the realized per capita growth rates under the right assumptions — I won't tell you what those are now. But some of the results I found really fascinating are that you get opposing effects of many features of this model on maintaining diversity of alleles versus maintaining the persistence of the population. Specifically, the per capita growth rate of the population gets better the more convex, or the less concave, the fitness function is — but that has a detrimental effect on maintaining allelic diversity. The number of alleles also has a detrimental effect on another allele establishing, but it actually, through a bet-hedging mechanism, leads to higher population growth. And environmental variability has a negative effect on population growth, but helps maintain the alleles. So you get these interesting opposing effects. And you can even get, hidden in these statements, a Parrondo effect, where it might be that every homozygote individual is maladapted and has a negative per capita growth rate — so if you just have one allele, you're going to extinction — but if you have enough alleles around that aren't perfectly correlated in their responses to the environment, the population persists. So, sort of, losing alleles can make a winning genotype. So with that, I want to finish by saying, hopefully I've given you some sense of things, some of which you know already. In other words, environmental stochasticity can inhibit population persistence — I didn't talk about how it can facilitate population persistence, but it can — and it can facilitate or disrupt coexistence. And in particular, the mathematical message is that these conditions for persistence and extinction are determined by this weighted combination of realized per capita growth rates. And that applies to discrete time models, like I discussed today; it also applies to stochastic differential equations — in particular, there's a recent paper by Alex Hening and Dang Nguyen that really shows this: what I showed was a simpler version of the result, and theirs is a much more beautiful, more complete result that includes extinction results. And it also follows from a very, very general theory for persistence, but not extinction, by Michel Benaïm. And that very general theory, which is available on arXiv, also covers piecewise deterministic Markov processes — so random switching of vector fields — but much, much more. I mean, almost any stochastic ecological model that's finite dimensional will fit into this sort of framework. So it's a beautiful preprint, and I hope someday it shows up somewhere as a monograph. And with that, I want to first thank the US National Science Foundation for funding, my collaborators, and then, more importantly, Andrew, Alex, and Mark for organizing this wonderful workshop, BIRS for hosting — and, even more importantly, you for listening. And I'd happily answer any questions that you might have. Thanks. It seems like your model is well set up to deal with things that have a trend — temperature as environmental change, say — through your y variables. What effect does that have on the kind of analysis you talked about? What do you mean? So, like a trend? Like a trend. Yeah. 
So the trend will not be in the asymptotic analysis, right? So even if you have one of those models — even if it was an autoregressive model — you might be starting not in the stationary distribution but asymptotically approaching the stationary distribution, and only the asymptotics will then determine whether you have persistence or not, the way it's been defined here. Yeah — so it's as time goes to infinity. Yeah, everything is as time goes to infinity. Just like we saw the other day: even to talk about transient trends in some models, it was about time going to infinity, right? So you're still doing asymptotic studies. So maybe there's some way of combining these things — I'm intrigued. Is there some way of doing what you're doing in this sort of stochastic context to address something like that, while still doing asymptotic analysis? Partially. And I realize it's more complicated to sort of understand something about trends. But yeah, it doesn't have an effect, is the answer — as far as the statement that you'd make. It might have an effect on the actual stationary distribution, right? And understanding that stationary distribution is a much harder question. I don't know good ways in general to compute them for these higher dimensional models, unless you have a lot of symmetry. If you have a lot of symmetry, then you can. Like for Gillespie's model, without the population dynamics, you can actually do a stochastic differential equation approximation and write down the stationary distribution explicitly in certain cases. So there are special cases. But there's always something you have to sacrifice to get the explicit solution — or at least it seems that way. Yeah — Sadiq? You showed us, for an example, that your criterion for persistence is equivalent to mutual invasibility. In how much generality is that equivalence true? Well, the first question I'd have about that question would be: what do we mean by mutual invasibility when there's more than two species? So that's a question, right? You could answer it two ways. I thought about it, so I'll give the answer — I won't put you on the spot. You could think of it one way and say, well, I insist that every missing species has a positive per capita growth rate. That would be a very generous interpretation. And if that's true, then you immediately get stochastic persistence. So that's sort of an easy case. The other thing you could say is, well, that's too strong. Because if I look at a predator–prey model, if I look at the origin, clearly not every species has a positive per capita growth rate at the origin, because the predator has a negative per capita growth rate there. So maybe all I need is that at least one missing species has a positive per capita growth rate. That's not true either, because the rock–paper–scissors game gives a counterexample: every equilibrium there has a species with a positive per capita growth rate, but you don't get stochastic persistence unless something else holds. So I would say, when you ask does it extend: it depends what we want mutual invasibility to mean. It does, if you say all missing species should invade — then it's immediate, it's true. And if only one missing species invades, then it need not be true. So it's more subtle. And there are still things. 
Actually, there's a fairly open question about getting things more in line with the deterministic theory. The deterministic theory is much more refined in some ways than the stochastic theory. And one of the ways is that Hofbauer's original criterion doesn't cover scenarios that we know how to cover in the deterministic theory, and that's because in the deterministic theory we take advantage of Morse decompositions. And while there exist Morse decompositions for random dynamical systems, it's not clear how well you can use them for this type of analysis. So that's sort of an open piece of work: trying to refine this theory so it's completely parallel to the deterministic theory, which it's not. So there's a gap that needs to be filled, for sure. Yeah. Were you ready for that one? Sorry — you hit on one of the questions I've been really thinking about. So, yes, Chris. So if you can do mutual invasibility, then you can do adaptive dynamics, and then you can do evolution. Yep — yeah, you can do that. So I mentioned as one example that you could do evolution. I mean, the population genetics model is evolution, right? Changes of frequencies. So I already did an evolution example. In the paper, we do another example where we have a trait that's evolving. We do a version of this model of Lande and Shannon, which is sort of an evolutionary rescue model — or, let's say, an evolution-in-a-fluctuating-environment model, through a quantitative trait with a Lande approximation. So we do an example of that. And there you can compute things explicitly, because you end up having an autoregressive process describing the thing — well, anyway, you have an autoregressive process that determines, essentially, the population's per capita growth rate at low densities. So you get some interesting predictions there. They're in line with what people have seen doing what I'd call more theoretical work — in other words, the same type of computations, but without the mathematics to back up that this gives you persistence or not. You could argue about whether that's an important distinction or not; you could see arguments in either direction. And I think you had a question. OK, so how can you extend the theory if you have things like Allee effects? So that's a great question. Even for deterministic models this is an issue — well, it depends how the Allee effect shows up. There are cases where you could have an Allee effect and still get permanence. But in general, if you start having attractors on the boundary, you don't have persistence, from the perspective of this theory, because the invasion growth rate — the per capita growth rate at low density — is negative. So that just means this theory isn't appropriate for things like trying to understand systems with Allee effects that really lead to species being lost because they're low in density, just purely because of that feedback. So there's no question this theory doesn't apply there. That's a criticism that applies to both the deterministic and the stochastic theory. One thing I should mention, though, is that there is a difference between the stochastic models and the deterministic models. I have this paper that's coming out in Ecology where we look at a model with, effectively, positive frequency dependence in models of competition. So that leads to Allee-type effects where either competitor can be lost. 
And as soon as you add any sort of non-local environmental stochasticity, of course, with probability one, you lose one of the species. In some sense, for models with Allee effects, if you have non-local noise — meaning the noise really can push you anywhere in the state space, like with the SDEs, for instance — then you're always going to have a result, but it's not going to be an interesting result: you always go to extinction asymptotically, which makes sense, but it might not be what you're looking for. OK — Bill, are you ready? Yeah, thank you very much, very nice talk. Just one question. In deterministic dynamics, mutual invasibility normally guarantees coexistence, but it might not, for instance, in certain situations on the boundary? Yeah — that's just like what was asked before, the way he was asking it. Yes, yes. So, does that not happen in the stochastic case? I just answered that, effectively, but I can repeat it — sorry, I didn't say it clearly enough. But if you have a situation — so if the deterministic model has an attractor on the boundary, for instance, right? Or — oh, I see what you're saying. It's slightly different. Yeah, this is a different, interesting question. Interesting point. Sorry, I just realized what you meant. OK, so you're talking about a situation where you have something like this, right? Is that what you're talking about? Yeah — so you come back to it. Yeah, so this is a cool little thing. So for the deterministic model, this would mean you don't have permanence, right, just by the definition. However, in the stochastic model, even if you have a little bit of noise, you're not going to pick up this asymptotic behavior — I mean, it depends, I have to be a little careful when I say that — for the stochastic system, if I had a stable point here and a stable point here, most likely, with small noise, just by various results, the stationary distributions are going to concentrate on those stable fixed points. So you won't be evaluating the stochastic growth rates here. And as a consequence, depending on what's happening at these other fixed points, you still might get stochastic persistence. And so you can construct examples like that, where there's a bistable prey and a predator. You get situations just like the one you described — or maybe the prey can invade; that would be a weird situation, but you could create it — where the predator can only invade when there's enough of the prey, at that coexistence equilibrium, but still the whole system is stochastically persistent. So yeah, that's an interesting point. There is a way in which you're getting rid of things that you probably don't think are relevant for the permanence condition — but that's just the way the permanence condition works. That's actually nice. It may be — the pathology of these systems is removed. Yeah, I think so too. I think it's interesting. I think so. OK, Heather — question. OK. Very inspirational talk, thank you very much. Just one question: how much of the results would still hold if you change white noise to colored noise? Well, as I mentioned, the y variable can account for colored noise. So you can do the forcing that way: you take white noise and you can color it by just applying a function to it. 
Pretty much — because with colored noise, it can push your system over the walls of the basin. Yeah, same with white noise too. But anyway, it does account for colored noise, is my point. So you can have the auxiliary variable be, for example, an autoregressive process, and that gives you colored noise — not any color, but many of the colors that you might care about.
|
The dynamics of species’ densities depend both on internal and external variables. Internal variables include frequencies of individuals exhibiting different phenotypes or living in different spatial locations. External variables include abiotic factors or non-focal species. These internal or external variables may fluctuate due to stochastic fluctuations in environmental conditions. The interplay between these variables and species densities can determine whether a particular population persists or goes extinct. I will present recent theorems for stochastic persistence and exclusion for stochastic ecological difference equations accounting for internal and external variables, and will illustrate their utility with applications to models of eco-evolutionary dynamics.
|
10.5446/57606 (DOI)
|
Thank you, Andrew, for the invitation and for the organization these days. For those who haven't yet been introduced, I'm Vasilis Dakos. I am a researcher at the CNRS, the French National Research Center; at the moment I am at the University of Montpellier, in the south of France. Today I'm going to talk about measuring stability and detecting tipping points. I know these are very big words and big buzzwords. But we ecologists are fascinated by measuring stability for understanding ecosystem responses to stress. And of course when you say you want to measure stability — as Alan said yesterday — this is a very big, long discussion. It's like: what do you mean you want to measure stability? What type of stability are you interested in? And there was this very nice paper back in the late 90s by Grimm and Wissel, where they actually recorded more than 70 stability concepts, with 163 definitions. They tried to put some order into this big bubbling world of ecological stability. But, funnily enough, it seems that every decade at least one big review appears that tries to put some order and tries to put the same things together in somewhat different words. But what is interesting for me is that this starts already from the 70s — there was a workshop in the Netherlands where May and others were participating — and already from there you see that, of course, everybody is realizing that there are different ways of defining stability. But there seems to be some kind of consistency across the words and across the decades. And basically you can say that it converges onto, let's say, four characteristics: one is interested in constancy, which perhaps is related to the variability of the ecosystem; one has to do with resilience — engineering resilience, like how fast we recover after being perturbed, after a pulse perturbation; one talks about resistance — you have a press perturbation and you ask how far you go from your original equilibrium; and then this one is more complicated, a lot of things together: what if you have other types of behavior as you're changing conditions, like either shifting to different states or starting to oscillate from a stable equilibrium. And actually for us it was very interesting, back in Montpellier with my colleagues, to revisit this notion — but in particular to look not so much at the concepts but at how people are measuring them, because on one hand you have the concepts, and on the other hand you have different metrics for measuring these things. So we actually reviewed — you see, so many papers, over more than the last 100 years — and we limited our selection to some ecological journals, because otherwise we wouldn't have finished this review for many years; we focused on both theoretical and empirical studies, and we kept papers that had to do with at least two or three species, where you have at least some sense of a community. So what we found — these are results by decade — is that the metrics we have been using to quantify these different concepts of stability keep increasing. And actually, since 2010, we have at least 34 different metrics that have been proposed, theoretically and empirically. 
So you have to imagine, like, a theoretician can do something with a covariance matrix, and an empiricist cannot do that, so they find their own way through, you know, and kind of invent or reinvent something that was already there before. And of course, if you look at how these things are distributed, you see that most of them are used very seldom — rare, like maybe one time, in that one study, it was a good thing to do — but others are very much used, and these are more the empirical studies, I guess, on this side. And just so you understand, if you can read it: coefficient of variation — so variability — and resistance seem to be the most used, and then you have persistence, recovery, and the more familiar terms for stability. We can look a little bit more into this review, if you like, just as a warm-up. Of course, when you want to study stability, the classical question is: OK, you have to define stability to what — what type of perturbation in particular — and stability of what — what is your response variable? What we saw was that you have the classical pulse disturbances; press disturbances — a one-way disturbance that is constant and maintains itself; NULL here means that there were studies where there was no disturbance, but people were comparing, for instance, different treatments — temperature treatments, nutrients — and there were measurements of stability; and noise, where there is some kind of stochastic environment, which is mostly the theoretical work trying to measure stability. And you have different levels where these responses were measured: either the whole community — that's, you know, total biomass, for instance, or the total number of species; partial means, like, a functional group — you measure only what happens to your primary producers, for instance; or you do this analysis for every species directly. And these arrows, if you like, show how many of these perturbation–response combinations had to do with the whole community or with partial functional groups. Anyway, the point from this is that you see people are studying different things, and most of it is happening at the whole-community scale. And what was also interesting is that we recorded that, per study, people are choosing to study one or a few perturbations — so basically one to two perturbations — to study stability. Most of the time it was just one metric that was used, although, as we said, we can have up to 34, and we have at least four to five different concepts of stability. And interestingly, only 2% combined theoretical with empirical measures — like, try to do something the theory says and try to do it empirically. I know it sounds low, but that's what we found. So for us the question is: we have all these metrics, and it's of course difficult to measure all of them. So which metrics should we choose, perhaps, if we want to go forward and describe the stability of the overall community? This is of course something that has already been around for quite some years. Donohue and colleagues summarized it in a nice way, basically saying: OK, let's say you have three dimensions of stability, or three metrics, three different important components — I'm using these words a bit fuzzily, but you know what I mean. 
So if you assume, for instance, that here you have robustness, resistance, and variability: if the pairwise correlations between these different types of metrics or components are zero — there's no correlation between them — that means that all of them are important. So all of them define some kind of volume, and that volume defines the overall stability of your system. But in the case that there is some correlation between two of these variables, that means that perhaps one of them is kind of redundant — you don't have to exhaust all these dimensions, all this volume, to have an estimate of the overall stability of your system. So we tried to follow up on this idea. Together with Sonia Kéfi, who is a close colleague of mine in Montpellier, and our postdoc, Virginia Domínguez, we tried to quantify and use the correlations between the different metrics that are suggested in the literature, and to see whether, by looking at the correlations, we can quantify or get an idea of the dimensionality of stability for ecological communities, for ecological networks. More specifically — I'm not going to show any equations — we generated food webs using this model: we did the simulations using the bioenergetic model with allometric scaling, the classical one used in this framework. And we simulated communities of different sizes — different numbers of species, going from small communities up to 100 species. We randomized parameters, so we get a not exhaustive but reasonably complete representation of the parameter space, and we only focused on solutions that gave a stable equilibrium, just to make it clear — a single stable state, basically. So whenever simulations did not give a stable equilibrium, we didn't consider those communities. And then we estimated 27 metrics that we found were frequently used in the literature, and we measured the pairwise cross-correlations between metrics. We used rank correlations, meaning we just wanted to know whether, if one metric increased, the other metric also increased — we didn't care about the actual magnitude. So these are the results, in a way. This is for communities of 45 to 55 species, more or less. These are all the metrics — I don't expect you to make any sense of them; I myself don't make sense of them either; it's actually quite complicated — but I put here some examples. For instance, we measured cascading extinctions after the removal of a species; we measured asymptotic resilience; we measured invariability, which is the inverse of variability in the community; we measured resistance of total biomass to removing species — species extinctions — or tolerance to mortality, like changing the mortality of a species and asking at what level you will cause a secondary extinction. This is related somehow to changing a parameter in the system and to defining the structural stability of the community. These are just examples — you don't have to look at what these metrics mean. Basically, what you have to see is that everything seems to be red; red means positive correlations — which, sorry, I didn't say — and only very few, I don't think you can see it by eye, are lighter colors, maybe here a bit yellow, like negative correlations. 
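As a sketch of the correlation step, the following uses stand-in (synthetic) data rather than the authors' simulations: given a table of metrics measured on many simulated communities, compute the pairwise Spearman rank correlations.

```python
# A minimal sketch of computing pairwise rank correlations between stability metrics.
# The "metrics" array is a hypothetical stand-in: rows = simulated food webs,
# columns = metrics measured on each web.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(8)
n_communities, n_metrics = 500, 6
metrics = rng.normal(size=(n_communities, n_metrics))
metrics[:, 1] += 0.8 * metrics[:, 0]      # make two metrics correlated, for illustration

rho, _ = spearmanr(metrics)               # n_metrics x n_metrics rank-correlation matrix
print(np.round(rho, 2))
```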
Everything seems to be positively correlated, to different degrees. More interestingly, there also seems to be a relation between community size and the correlations between the metrics, which is not surprising to some extent, because we all know about the diversity–stability debate. But we had different scenarios, or different behaviors: you had no effect, you went from higher to lower correlations, or you had correlations that increased and plateaued, or went from negative to positive. But in most of the cases, the biggest difference was for the smaller communities. So what I'm going to describe to you next is communities in this middle range, where in most cases the correlations had kind of leveled off. And it's also a reasonable size of community to study if you want to talk about complex communities — so not just five or seven species. So what did we do with the correlations? We said, OK, let's use a modularity-based algorithm to discriminate groups. The modularity algorithm means that it's going to divide all the metrics, based on the correlations, and you will form groups of metrics where the correlation within a group is bigger than the correlation among groups, right? So we let the computer run multiple times and recorded the most frequent modular structure that appeared from all the metrics. So this is what we get. Good news: not too many groups. We had four groups. This group in gray is a little bit of an outlier — some of the time it was belonging to this group, some of the time to that group — so we can forget about it for the moment and concentrate on the three bigger groups. The first one seems to be related to measures that had to do with the early reactivity of the system to pulse disturbances — the initial transient behavior: how far you go from the equilibrium after you are perturbed, or at what time this happens. Here — or maybe here — was the other group; I don't know if I can call it this, but this is what we call safety. This had to do mostly with metrics that measure stability in terms of whether the system would remain structurally stable, and/or with measures related to having low variability or low asymptotic resilience — mostly, let's say, the longer-term dynamical measures of stability. And here was more what we call sensitivity to press: the metrics where we had press disturbances — press disturbances that would cause secondary extinctions or changes in the total biomass of the community. So we kind of got a structure, let's say — it at least gives us an indication that there are perhaps three important groups, so you could choose one metric per group, for example. But still it was not enough, because, OK, which metric do you use? So we went a little bit further here with clustering. Again, it's a kind of similarity analysis, but different from the modularity algorithm, because now it looks at the similarity between individual metrics, not at the global group level. And we were able to reconstruct most of the structure that we found with the community detection algorithm, except for this one resilience metric. 
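Here is a minimal sketch of the grouping step: treat the positive rank correlations as edge weights of a graph over the metrics and extract modules with modularity-based community detection. The metric names and correlation values are placeholders, and this is not necessarily the exact algorithm the authors used; the hierarchical-clustering variant mentioned above would work analogously on the same similarity matrix (e.g. with scipy.cluster.hierarchy).

```python
# A minimal sketch of modularity-based grouping of stability metrics from a
# (here, stand-in) rank-correlation matrix.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(9)
names = ["invariability", "asymptotic_resilience", "reactivity",
         "resistance_biomass", "tolerance_mortality", "cascading_extinctions"]
rho = rng.uniform(0.1, 0.9, size=(6, 6))          # stand-in correlation values
rho = (rho + rho.T) / 2
np.fill_diagonal(rho, 1.0)

G = nx.Graph()
for i, a in enumerate(names):
    for j, b in enumerate(names):
        if i < j:
            G.add_edge(a, b, weight=max(rho[i, j], 0.0))   # keep only positive weights

groups = greedy_modularity_communities(G, weight="weight")
for g in groups:
    print(sorted(g))
```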
But what we suggest, or what we think, is that we can use this type of similarity because then, within a group — for instance, within this blue group of safe-distance metrics — you can say that perhaps these different metrics are so similar that picking any one of them would be enough for your estimate. In a sense, you still reduce a little bit the dimensionality of the system and the choice of the metrics to use. So, which metrics to use — did we get an answer? Well, I wouldn't say we got an answer; we got a slightly better map of the direction to go, and I'll show you why. These are the three groups that we found with the community detection algorithm. And what is interesting is that they kind of separate along what type of perturbation you have — whether you have pulse perturbations or press — and here you have environmental stochasticity, which is actually one metric, this invariability metric. At the same time, we tried to color code them according to what people think of as the main stability concepts. And it was not so nice: it's not like all the things that we found in one group really talk about what people call resistance, for instance — you see, these are spread across the groups. And that's not a problem, of course, but at the same time it makes the decision-making about which metric to use more difficult. So what I would perhaps conclude is: OK, we can say we should use one metric per group. Which one will, of course, depend on what type of disturbance to the system you are interested in, what level of correlation with the other metrics exists — which you know from these maps — or how feasible and practical it is to do that type of measurement. But at the same time, what was interesting — and perhaps also interesting for you, because you are more mathematically inclined than me — was that some of these correlations have mathematical reasons to be very strongly linked, but that's not true for all of them. So we couldn't really find a priori reasons that make it very clear these would all be correlated because of this and that. So perhaps there are some latent links — or maybe they're not latent, just coincidence — and it would be good to clarify that, because this would help us work towards that map of the most representative, most appropriate metrics to use. So I'm going to shift now a little bit — I think I have time, like five or six minutes, I don't know — and talk about how we can measure changes in stability. Again, I'm talking about stability, but now I have something more specific in mind than just this generic stability. I'm thinking about detecting abrupt ecosystem responses, and more particularly we have in mind these catastrophic shifts that have been recorded — or for which there is enough theory to believe in the mechanisms — in shallow lakes or coral reefs. In the case of shallow lakes, the idea is that we have these very strong feedbacks between the macrophyte vegetation and the algae it competes against in the lake. And because of that, you can basically have lakes dominated by macrophytes, which keep the water clear through different mechanisms: they take up nutrients, they provide refuge for zooplankton that graze on algae, and there's also allelopathy against algae. 
But when nutrient loading starts to increase beyond some specific threshold, then the algae can really take off, and then this feedback works the other way around and facilitates the dominance of the algae over the macrophytes, and you have this shift that can take the lake from clear water to turbid. So what is nice is that we can represent this type of dynamics with these bifurcation diagrams of the fold bifurcation — the saddle-node bifurcation, the catastrophic bifurcation — where, as you approach it, you get to this big tipping point that Sebastian was talking about yesterday, and then you shift to the alternative state. And what I find fascinating is that these simple models can describe a similar type of abrupt response for systems at different scales. Of course they are approximations of what happens in these systems of much higher complexity, up to the climate, but it gives some nice indication. And here what we are interested in is that now we want to measure this changing resilience. But here resilience is no longer the engineering one; it's the ecological one — the distance from the stable equilibrium to the separatrix. And this, in one dimension, represents the basin of attraction of the system. So what we want to do is to see whether we can detect this type of approach to tipping points by measuring this loss of resilience. One way is to measure that directly, but what happens, at least in these simple models, is that you can also approximate this resilience, because this loss of resilience is very strongly correlated with a decrease in engineering resilience. Basically, this is what is called critical slowing down in physics: when you have colliding attractors and you're approaching this type of bifurcation, the dynamics slow down, because the eigenvalue is going to zero — so you actually freeze, just like the coyote freezes over the cliff before it starts going in the other direction. And we can take advantage of that phenomenon to develop catastrophe flags — which are not new; they have been around from before, with the initial work of Thom, and then Gilmore in his book talks about catastrophe flags — that take advantage of this critical slowing down phenomenon. And this we can translate basically into three types of indicators, which we can also call early warnings if you want to use them in that way — I'll explain that in a bit. So if we compare far from and close to a tipping point, there are three of the more typical indicators we measure based on critical slowing down. One very simple one is that the recovery time increases, or the recovery rate decreases: the same perturbation, you see, takes longer — the rate at which you return to equilibrium, which is the classical definition via the eigenvalue. What's more interesting for me — because that requires a perturbation experiment — is that if there's no perturbation experiment but there's still stochasticity, then the pattern emerges by itself: you see that this is so different from this, for small noise. OK, for extra large noise it doesn't work that well. And we can plot the same information as today versus tomorrow, and you see that when you're far away there is no correlation — what happens today seems to have nothing to do with tomorrow — but as you approach, you start to build up this autocorrelation. 
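Here is a minimal sketch of these indicators on a generic overharvesting-type model (not the lake model itself; the parameters and noise level are illustrative assumptions): slowly ramp the control parameter toward the fold and track rolling variance and lag-1 autocorrelation of the detrended fluctuations.

```python
# A minimal sketch of critical-slowing-down indicators ahead of a fold bifurcation,
# using a logistic-growth-with-saturating-grazing model and a slowly increasing
# grazing pressure c(t).
import numpy as np

rng = np.random.default_rng(10)

def simulate(T=4000, dt=0.1, sigma=0.05):
    r, K = 1.0, 10.0
    c = np.linspace(1.0, 2.7, T)            # slowly increasing grazing pressure
    x = np.empty(T)
    x[0] = 8.0
    for t in range(T - 1):
        growth = r * x[t] * (1 - x[t] / K)
        grazing = c[t] * x[t]**2 / (1 + x[t]**2)
        x[t + 1] = x[t] + dt * (growth - grazing) + sigma * np.sqrt(dt) * rng.normal()
    return x

def rolling_indicators(x, w=400):
    var, ac1 = [], []
    t = np.arange(w)
    for i in range(w, len(x)):
        seg = x[i - w:i]
        resid = seg - np.polyval(np.polyfit(t, seg, 1), t)   # remove linear trend in window
        var.append(np.var(resid))
        ac1.append(np.corrcoef(resid[:-1], resid[1:])[0, 1])
    return np.array(var), np.array(ac1)

x = simulate()
var, ac1 = rolling_indicators(x[:3000])      # analyze the stretch before the shift
print("variance, early vs late windows :", var[:200].mean(), var[-200:].mean())
print("lag-1 AC, early vs late windows :", ac1[:200].mean(), ac1[-200:].mean())
```

Both indicators tend to rise as the fold is approached, which is the trend-based warning discussed next.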
And we're not interested in just snapshots; we want trends — we want to see how this quantity changes in time, because we assume there's an underlying process, rate-induced perhaps, or something that slowly pushes you toward this bifurcation point, and you see these quantities diverge. Good enough — we have a lot of tools by now to do this in time, but also using spatial data, although I won't go into that. I'm just going to summarize what we want to do with this: the idea is that we can use this type of metric either for ranking resilience across different systems, sites, or different species — to identify hotspots, like, OK, this system is more resilient, or it has higher variability, so perhaps that means instability is building up. And the other way is that I can use it to monitor changes within the same system, and this is what I would call an early warning kind of tool. And I'm going to give you one little example of this and I'll finish with that, Alan. So I think there is a lot of potential now, with a lot of data coming in — long-term monitoring data, remote sensing data. This is an example of applying this idea of trying to measure changes in variability in the climate. This is the typical projection from climate models — this is mean temperature — but it would also be cool to see, apart from how the mean temperature will look in the future, how variable the temperature will be in the future. And not only how variable, but whether the variability of the future will be different from the variability of today. And if we can do that, then perhaps we can understand the spatial and temporal distribution of these changes in variability, which can highlight what I would call hotspots of instability or of climate sensitivity. That's maybe not necessarily a tipping point, but it can be that that's where a place will become, you know, perhaps more prone to extinctions, or perhaps more prone, as a climate system, to shifting to different modes of operation. So the way we did that was to take these climate model projections and, after doing some processing of the data, estimate the following. This is the future temperature record for one grid cell, let's say, on the globe here. And we compare the variability of the last 30 years of the record, at the end of the 21st century, to the 30 years at the beginning of the historical record. And we try to see whether the variability in the future would be different — increased or decreased — compared to the past. And we don't do this only for one model: we used 37 models, so we compared these results across different model runs. And what we found — we were able to construct this: this is the mean change in variability. Red means you have an increase in variability, whereas yellow means a decrease in variability. And the hatched areas are where most of the models agreed, because what was interesting is that the climate models don't all give the same predictions in terms of how variability will look in the future, which was a bit of a surprise for me. And the areas that seem to be more affected are these circled ones: the Amazon, the Sahel, southern Africa, and parts of Southeast Asia. And I think — I know I'm running out of time, right? 
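As a sketch of the window comparison described above — on synthetic data, not CMIP model output — compare, for each grid cell, the standard deviation of the last 30 "years" of a series with that of the first 30 years.

```python
# A minimal sketch of the variability comparison: per grid cell, relative change in
# standard deviation between the first and last 30-year windows of a (synthetic) series.
import numpy as np

rng = np.random.default_rng(11)
n_cells, n_years = 1000, 150
# Synthetic detrended temperature anomalies whose variability drifts differently per cell.
drift = rng.normal(0.0, 0.3, size=n_cells)                 # per-cell trend in log-SD
sd = np.exp(np.linspace(0, 1, n_years)[None, :] * drift[:, None])
anom = rng.normal(0.0, 1.0, size=(n_cells, n_years)) * sd

early = anom[:, :30].std(axis=1)
late = anom[:, -30:].std(axis=1)
change = (late - early) / early                            # relative change in variability
print("fraction of cells with increased variability:", np.mean(change > 0).round(2))
print("fraction of cells with decreased variability:", np.mean(change < 0).round(2))
```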
OK, so then I will just finish by saying that it was interesting to compare these trends to some socioeconomic variables — also to show the importance of this type of analysis. This is a figure drawn from our data, from The Economist, where above the zero line you have an increase in variability and below a decrease, plotted against GDP per country. And you see that the richest countries are the ones that are going to suffer less — if we assume that a decrease in variability is a good sign — whereas the poorer ones will suffer an increase in variability, which kind of confirms this climate injustice story. So I tried to show you how we can use ecological dynamics to infer something about resilience — even in relation to GDP — and in particular about catastrophic shifts; but we have more work to do on the math, and also, as I said, we have a good toolbox, but now the challenge is to apply it better to data. And I'm sorry, Alan, I don't know how late I was, but I just want to thank my collaborators and you for listening. Thank you. APPLAUSE So — I think it's a really nice collection of metrics that you laid out at the beginning. Did you also check which of these metrics apply to sort of general systems and which are tailor-made for a particular structured system? Because you could say, you know, I take this ecological model and I measure a certain variable that is in the metric, or you could say that metric applies to a very general class of equations. Did you make checkmarks in boxes for that too? I think we could do that, but it was very difficult even to group them into, you know, a broader category — whether it's about resistance or about resilience — because you see that the theoretical ones were belonging more to the class that you described, that in theory they could be applied to any type of model, but with the empirical ones there was a lot of innovation, let's say. So it would be very hard, I think, to do what you described. It would be nice to have that, though, to get mathematicians interested. Sure, sure, sure. I mean, we can give you our database — maybe it's even online — and you can try; because, as I said, even with the metrics that we tested in the second part of the talk, when I showed this, we tested them in communities, and even for some of their relationships we don't know how generic they are or not. So that's a good point — I think I would start from there and then go on. Thank you. Yeah, so just a few questions. We'll start with, I think, your summary slide for the stability review. In some sense that's a good answer, in the sense that it's saying the safe-distance group covers three — I think you had this color-coded; if you go back to that — you mean the groups, right? Yeah. Just one back. The table. The table, okay. Yeah — so there you're showing the color coding, and the safe-distance one, in a sense, encompasses three entirely. Yeah — it covers the attractor, the constancy, and the recovery. Yeah. So I would argue that means the correlations are pretty strong — that's a good answer, in the sense that there's not an infinite number of things to quantify after all. Yeah, sure, sure. 
But still, you know, like it's not that we, I'm saying this, we were hoping that somehow, you know, this that is conceptual more will kind of emerge by doing this kind of agnostic, okay, let's see how they correlate and then let's see how they group together. In that sense, you know, like that mapping is a little bit, creates confusion to us because, okay, here you can call it same distance, but it's one, two, three, four different things at the same time and then it depends on what type of of the samples you have, right? So, and I'm not saying the idea is like really to take one. I mean, already we can reduce from 34, yeah, from 34 we can already go maybe, you know, like to four, five is already quite big, but still really pointing which five, I think it depends as I said before, like what we can do and what the question is at hand. Yeah, right. I guess in a sense, like, your answer says there's a logical for all of us until the infinite. Yeah, yeah. And that's, that's encouraged you. Yeah, yeah, yeah, but it would be cool. I'm not, I'm not, we're not trying to do Grim like, okay, you're doing, I think it's nice that we're using their own ways or, but we're trying to get if out of this we can learn something more like, oh, actually that was like a very nice way of doing it, much better, you know, empirically or theoretically, let's give it, but let's find the relationship, let's find what's happening. It's not, okay, great. And that was very good to talk about. I think the second thing is with the early morning, so you are going to get that there's data over, but I would sort of say to do that time series stuff frequently, you can't, there isn't data. So alternative, but perhaps there's things like space where you may be able to come. Is there alternatives to this sort of time series base? Well, first there is more continuous monitoring device, at least for lakes that I know that can be used. Now the data, there are both. But things like long-lived fish and stuff. Yeah, okay, yeah. Yeah, yeah. Sir, sir, yeah, yeah. But what I'm saying is like there is, because there's more data emerging, I think now we have more chances to test some things and develop, you know, like ways and prepare more for that. And what I find like this example with the climate, for instance, that's not, I mean, now we can bring this type of, because we understand it better to different, also fields, not only ecology, you know, like the climate or like with the microbiota and the gut like they're there. And now we start to have not only latitudinal, but also longitudinal data like following communities of endostein bacteria in single individuals. And for ecology, I think the gradients on space, that can give another way of, you know, like having more data and increasing your data availability for testing some of these. I think they have time to show some work that has emerged like doing this or combining source information. Like lately they also combined trade data with abundance-based data. So like having multiple sources of... Yeah, it was interesting along those lines through the space of the tennis and football. So be intriguing to sort of do structural morning, we have behavior switches. We have super fast time scales. Yeah. What does that mean? I also wanted to say two things. The first thing is I remember the Eves and Carpenter paper, they had examples where stability measures were anti-correlated. The one that was going up while the other one was going down. 
And I can easily construct families of models where this was happening for different stability measures. So I'm surprised that you always find positive correlation, or mostly find positive correlation. But there's another point which may be more important. And this is that I think we as a community have become too obsessed with these saddle-node, bistability, tipping-point stories. Lots of terrible things can happen. Many species are declining and going extinct as climate changes without the system ever running through a saddle-node bifurcation. And I was wondering, in this numerical study that you did in the first part of your talk with the 100 species of the food web model, do you ever see bistability in the sense that no species go extinct, or is the transition to your new state always linked to species going extinct? Yeah. So for the negative correlations, I have to look back in the paper; we find negative correlations when we have a small number of species. So it depends on whether you look at smaller communities or larger communities. I will show you examples. Yeah, sure. Of course, I don't know what model they used, maybe a very simple competition model. We use a food web model now. It can be that these results don't hold if you put it in a different modeling setting. I'm not sure. We use the food web here. For the saddle-node, I completely agree. It's not that often in nature, or not as often as perhaps you get the impression from how much we talk about it. But the thing is that all these classes of bifurcation have the same signature, because what we find here is that these metrics can be used for transcritical bifurcations, which is the more classical way of going to extinction. They can also be used to give early warnings there. So I find that these tools are generic. They are not specific, and that is their handicap at the same time: because they are generic, they can help for other types of transitions. And the last one, for the food webs: no, we don't find this type of transitions. This is a very big question. It's a very difficult question. And it's something that we're trying to do now. We find some conditions that can give rise to alternative states in food webs, but the parameter space for this is very, very specific. But this is tricky, with many implications. The early warning sign that you can see is that a population is at low abundance. You don't need to know anything else. Yeah, sure. I completely agree that you can use the mean. But still, you can think of it as an extra indicator, because if the mean decreases while the variance stays the same, maybe you think the population still does fine. You know what I mean. Oh, yeah. So of course there are a lot of reasons that the variance can rise, right? And it's a pretty long-standing reason. But I was interested, you know, then, you were going really quickly at this point, so maybe I just missed a thought. But the climate data you showed at the end, it wasn't clear to me that that rise in variance necessarily had a sort of mechanistic connection to your earlier work on rise in variance as an early warning signal. Or did I miss a connection? We didn't miss a connection. We don't know. We don't know. I mean, these models are not analyzed to say whether you have, you know, abrupt shifts or not. Climate scientists, I think, are agnostic to this.
And that's why I said, and indeed I went fast, that we have now this toolbox, okay, we had this idea in mind that there is something that's going to happen. But increasing variability, you know, like, or increasing memory, not something dramatic is going to happen, or if something more grass is going to happen, or if nothing is going to happen, but we're going to be in a world that is more variable and has, you know, like, that's already, like, from our perspective, the way we understand it, like, kind of negative. So when you were talking about communities at risk, you know, like, sort of, you know, the world of higher vulnerability, you really were just talking about, because of the rise in variance in and of itself, whether or not that is anything you can make something else. Yeah, but also, I mean, I'm not the climate scientist, but also, you know, climate science can also try to look why in these regions we have an increased variability. That doesn't mean that there's some specific feedback operates, you know, like, as they do these scenarios where they pump up CO2, you know what I mean? So that's a different, so there are two elements. One is we're trying to connect this type of work, you know, a little bit more nuance, this I think is the word, you know, like... Last quick question. That's a follow-up on the last question. I would like to understand about your smart that you had with the future variability. Can you put it down? Sure. Well, I agree. So what was the trend over the next 50, 60 years that you had to know? There must have been some input, like, you run the model... Oh, yeah, yeah, we do the scenario for 8.5 BPM, the BPM in the atmosphere. We do the scenario, sorry. Okay, so what can you explain to me what that means? That means that we assume that this is the rate of introducing CO2 in the... Okay, so the drivers increase in CO2 into the atmosphere. So I look at the map and I see that if you go to the next slide, what strikes me here is that there are some regions where the variability increases a lot, but there are some regions where it actually decreases a lot. Yeah, yeah. And the inside whatsoever, because I... do you believe it? Just a while. I'm not seeing it by this map. I believe that it couldn't be everything the same. That there should be special... Yeah, but do you have any insight whatsoever why... No, no, no. You should ask Tim, because he has a... I'm not seeing him yet. Yeah, like... Places that go up a lot are......fractionally least are less seasonal. Places that are highly seasonal......are less seasonal. We did it for different periods of the year. We did it for the summer and winter for the sake of... There's a massive seasonality in the north and Europe and it doesn't correlate at all. There's a decrease in the variance. If I look at the central north, eastern Europe and so on... This is the annual... This is the annual... The annual mean... We understand that. The rich get richer and poorer. So that's a good sign that let's thank you all. APPLAUSE
|
Understanding stability of ecosystems and communities has always been major challenge for ecologists. Definitions and measures of stability abound and at times are confusing. Nowadays it is in general accepted that stability is multidimensional and it needs to be measured in different ways. Some of the metrics are used to highlight resistance of ecological systems to a specific type of perturbations (like an invasion of an alien). Others have been developed to highlight the approach to tipping points (that is catastrophic transitions between different dynamical states). As long-term data become increasingly available and experimental approaches are improving, the challenge is how to apply our theoretical metrics on these ecological dynamics to understand stability. In the talk, I will present a possible way for identifying best suitable metrics for measuring stability in ecological communities. More in depth, I will also focus on how changes in dynamical properties of ecological dynamics can be used as early warnings to abrupt ecological changes using examples from ecology and the climate.
|
10.5446/57607 (DOI)
|
And I'm going to talk about two models and optimal control involving these two models, but I want to try to convince you that this is just to illustrate some ideas, and that there are other control ideas and approaches here that you might take. So first I'm going to give a brief introduction, and then I'm going to talk about a river model — and this is interesting, it's with a student of mine who finished recently; now she works at the Department of Events — and then the second one is with Mahir Demir, and the interesting part about this is that he just recently finished his degree at the University of Tennessee, but he's now a postdoc with a fishery group at Michigan State, and the interesting part here is that we chose to model the Black Sea, and he chose this himself because he's from Turkey. So when you start thinking about optimal control, you've got to decide whether you're doing ODEs or PDEs or integrodifference equations or stochastic differential equations, or discrete models, or some combination thereof; but you want to think about how you can manage the system, and you want to choose some format to do the managing and what type of controls you're going to use. And whether they're realistic — I think this is important to think about sometimes; I've done things where I got a control, and I'll talk more about this, but it wasn't really realistic to implement, and then I had to somehow approximate it. And so again, you think about your system and what type of control action you might take — it could be a source term or a rate or whatever you might do — and then you want to have a goal, and many times a goal has trade-offs between two opposing factors. And you can also decide to put in a term that runs throughout the whole time, or you could also put in something at the final time, like you're trying to save the fish at the final time, or something like that. And then the way I normally do this is that I derive necessary conditions and compute optimal controls numerically. And so I just want to remind you that Pontryagin developed a lot of this theory in the 1950s in Moscow, and that it was for ordinary differential equations. And if you haven't seen this before, just to remind you of his key idea: he introduced what we call an adjoint variable, and the way to think about an adjoint variable is that it's like Lagrange multipliers in multivariable calculus, when you're trying to optimize a function subject to some surface constraint. And so the idea here is that we're going to use this adjoint, and that's somehow going to attach the underlying dynamics to the goal function. So it's sort of attaching. So you'll see that the adjoint will be interpreted as how the goal is affected by changing the underlying dynamics, the underlying states. So it's sort of like a shadow price of the state in terms of your goal. And so instead of thinking that we have some system that we're controlling and a goal, this theory, at least for ordinary differential equations, converts to looking at a Hamiltonian — and I'll talk about that a bit later — but basically a Hamiltonian, and you can actually optimize that with respect to the control at each time.
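As a concrete illustration of the Hamiltonian and adjoint machinery just described, here is a minimal forward–backward sweep for a toy harvesting problem: maximize the profit ∫(p·h·x − c·h²)dt for logistic growth with harvest h(t). The model, parameter values, and numerical choices are illustrative assumptions, not either of the problems treated later in the talk.

```python
import numpy as np

# Maximize J = ∫_0^T (p*h*x - c*h^2) dt  subject to  x' = r*x*(1 - x/K) - h*x,
# with control bounds 0 <= h <= hmax. Hamiltonian:
#   H = p*h*x - c*h^2 + lam*(r*x*(1 - x/K) - h*x)
# dH/dh = 0 gives h* = (p - lam)*x / (2*c), truncated at the bounds.
r, K, p, c, hmax, T, x0 = 1.0, 1.0, 1.0, 0.5, 0.9, 10.0, 0.5
N = 1000
dt = T / N
h = np.zeros(N + 1)                           # initial guess for the control

for sweep in range(200):
    # forward sweep for the state (explicit Euler, for brevity)
    x = np.empty(N + 1); x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + dt * (r * x[k] * (1 - x[k] / K) - h[k] * x[k])
    # backward sweep for the adjoint: lam' = -dH/dx, lam(T) = 0
    lam = np.empty(N + 1); lam[-1] = 0.0
    for k in range(N, 0, -1):
        dHdx = p * h[k] + lam[k] * (r * (1 - 2 * x[k] / K) - h[k])
        lam[k - 1] = lam[k] + dt * dHdx
    # control update from the optimality condition, with damping
    h_new = np.clip((p - lam) * x / (2 * c), 0.0, hmax)
    if np.max(np.abs(h_new - h)) < 1e-6:
        break
    h = 0.5 * h + 0.5 * h_new
```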
But when you want to do PDEs it's a little bit more complicated. But I want to convince you that if you ever want to try this, a lot of the PDE theory has been worked out already, meaning that if you have a reasonable system — parabolic, or whatever you might be doing, or even first-order age-structured — a lot of this theory has been worked out, so you don't have to reconstruct the theory and you might not even need to do a priori estimates to justify it all. You might be able to just do it, because maybe what you want to do is something that's already known. So when you think about this, you have some PDE or a system, and you want to have a control, and you have a goal functional, an objective functional. And in practice we don't even have to worry too much that this is really a weak solution space. But when you want to do the necessary conditions, it's sort of like calculus: you say, oh well, I've got a control and I put it into my goal, my objective functional. So you have a control and you have a goal, and you want to differentiate that map, just like calculus. And if you want to differentiate that map, well, that seems like we could do that. And so what's tricky here is that the state, which is like the population, also feeds into this goal. So somehow it's like a chain rule: if I'm going to differentiate this, I should know how the control affects the state. And so that's what you do first. And this is really not too bad. Think about it: you have some control and you put it into your PDE or your ODE, and really the idea is to linearize your PDE about the control. So this sensitivity solves a linearized PDE. It's really like differentiating your PDE or your ODE with respect to the control. So truly not bad, and this is pretty straightforward. And again, for most reasonable PDEs that's well justified, and you maybe don't have to worry about the a priori estimates or anything to justify it. And then when you do adjoints — the point is you have a linear equation. So you go back to here: you've got this linearized PDE and it has some operators in it. And so you can take the formal adjoint in an L2 space, and that's also pretty straightforward. And just to tell a little bit more about this: your original problem will usually have initial conditions, and then that means your adjoints, which are called lambda here, will have final-time conditions. And the main thing — this is what I was talking about — is that somehow the adjoints are driven by how your goal changes with respect to the state. In fact, this is the precise way to say it: whatever you put into your goal functional, the derivative of that integrand with respect to the state, which is your population, is what drives the adjoint equation. Okay, so then you've got these two — you're armed with the sensitivity and the adjoint — and then you can go ahead and differentiate the objective functional and get through that. You can get an explicit characterization, and you can then do that numerically, whatever you want. So I know that sounds scary, but I'm trying to convince you that a lot of the theory behind this has been worked out and it's not that bad. Okay, so I'm going to talk about this example, and really this example was motivated by some work that Mark Lewis did — and actually she did a lot of the numerics — but also Frischoff is sitting here.
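Before the specific river equation, the recipe just outlined can be summarized schematically. This is a generic template (state, objective, sensitivity, adjoint, optimality condition) with placeholder operators, not any particular model from the talk; the boundary terms that come out of the integration by parts are suppressed.

```latex
\begin{aligned}
&\text{state:}       && u_t = \mathcal{L}u + f(u,q),\\
&\text{objective:}   && J(q) = \int_0^T\!\!\int_\Omega g\big(x,t,u,q\big)\,dx\,dt,\\
&\text{sensitivity:} && \psi_t = \mathcal{L}\psi + f_u(u,q)\,\psi + f_q(u,q)\,\ell
                        \quad\text{(derivative of the state in control direction }\ell\text{)},\\
&\text{adjoint:}     && -\lambda_t = \mathcal{L}^{*}\lambda + f_u(u,q)\,\lambda + g_u,
                        \qquad \lambda(\cdot,T)=0,\\
&\text{optimality:}  && g_q + \lambda\,f_q(u,q) = 0 \ \text{ at an interior optimum, then truncate at the control bounds.}
\end{aligned}
```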
This equation is actually, when you see this equation in just a moment, just say oh gosh, where did you get this equation? I think that Frischoff had a lot to do with formulating this carefully. So this is a problem about you're having some population and you're trying to model it in a river and you're trying to put some realistic dynamics in it. And so this is going to be a PD and it's going to have some how into this PD, this population moving in this river. It's going to have some unusual structure. And I would suggest that if you, of course you can look at this basic paper here, there's several authors sitting here, Frank and Mark, but take a look at that paper, it has better graphics than I have, because they had rivers with curvy structures, which I don't have yet, but anyway. So they had nice rivers with the structure of the rivers and they had sort of a ratchet effect that the species were trying to move up the river. Okay, so when we were thinking to do it actually in this paper, they did some control in the sense of a piecewise control, they were like putting in constant controls for the flow rate and using that to see how it affects the movement of the population of the river. So that's what we were trying to do. We're thinking about like a discharge, a water discharge flow to control the species, and we were going to use that control to keep the species mostly downstream and not get them to move upstream. That was the idea. And so here is the PD and you can see that it's quite unusual, so maybe today we don't have to worry about how in the world they got this structure, but you can see that first of all here is the diffusion term and it not only depends on a diffusion coefficient capital D, but it depends on a cross-sectional area of the river, and right here is our Q term, which is actually our control as the water discharge rate. So you have got sort of an unusual PD and you can see that there is a basic logistic growth hiding out here, but then this part is sort of what's going on in the river. And really you can, anyway, like I said, there are some justifications of this format, so this format can be a little unusual and you can assume things that this is not a problem. You can assume things about the boundedness of the derivative of the cross-sectional area, you can assume the cross-sectional area is never getting close to zero, things like that, so it makes it reasonably nice. And so what we have here is we have these boundary conditions. So we have a zero boundary condition at one end upstream and we have a no-flex boundary stream at the other end. So we're thinking about the, and so when you see my pictures this might be confusing because I'm going to have one end being upstream and one end being the downstream. And then this is the problem I'm trying to do. I'm trying to adjust this Q to, and so let me show you the goal. So the goal here is to, we're doing a minimization problem, and you want to, you don't want to just minimize the population, you want to keep them in a certain part of the river. So basically I think about that I'm going to put a weight in there and the weight's going to force the population to stay away a certain part of the river. So basically it's like going to penalize, it's a big weight near the part of the river that we want them not to go towards. So in this case in my notation it's near x equals zero, and this is the same sort of notation in Mark using anything, but the point is that's like keeping the population, keeping them down there. 
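To fix notation before the details: here is a sketch of the kind of river model and objective being described, with population density u(x,t), cross-sectional area A(x), diffusion coefficient D, and the discharge Q(t) as the control; w(x) is the weight that is large near the part of the river the species should be kept out of, and P is the penalization function mentioned next. This is a hedged reconstruction from the description above, not necessarily the exact equations of the cited paper.

```latex
\begin{aligned}
& A(x)\,u_t \;=\; \big(D\,A(x)\,u_x\big)_x \;-\; Q(t)\,u_x \;+\; r\,A(x)\,u\Big(1-\frac{u}{K}\Big),
  && 0<x<L,\ 0<t<T,\\[2pt]
& u(0,t)=0 \ \ \text{(upstream end)}, \qquad u_x(L,t)=0 \ \ \text{(downstream end, no flux)},\\[2pt]
& \min_{\,Q_{\min}\le Q(t)\le Q_{\max}}\
  \int_0^T\!\!\int_0^L w(x)\,P\big(u(x,t)\big)\,dx\,dt \;+\; B\int_0^T Q(t)^2\,dt.
\end{aligned}
```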
And this is actually a penalization function. Oops, I see that. Okay, okay. It's already gone out of there. Is there any? I don't know. There's no green. Anyway, this is the penalization term, and this of course makes things a little bit smooth, but it's not too smooth here. I mean it's not a big event. Actually, yes. It's totally gone out of there. Anyway, that's right. It's weird. I'm sure you didn't get the batteries full. Anyway, so I'm not really going to go through all the details today, but again I'm telling you that this is doable. You can do this control to state map, basically get this linearized PDE, find the airdrop PDE, and then you can actually do this characterization of the optimal control. And then I'm going to do some numerical simulations, but basically I'm just going to show you results here, not worry about PDE justifications or anything. And so this is just to tell you what we have when I'm going to keep this type of initial conditions. I'm only showing like certain cases today because it's a very brief talk, right? So I'm keeping the population, I'm starting with the population mostly at one end of this river, and then this is my wake function that's weighted toward the other end. And I've run it for different conditions, but just to show you that again. I didn't show this at some point. I just wanted to point out that. Okay, this is what the add-on looks like here. Hopefully you can see that it has zero at the final time conditions. It also has zero, anyway. And then this is a zero final time condition, zero position, or an x is equal to zero. And then this is the corresponding boundary condition at one end of the river. So the point is that this comes out because on that end of the river, you know, you think about you had to do some integration of parts and you got to make all the right terms. So that's the boundary condition. And this is my characterization. And you can see that the optimal Q depends on your adjoint on the first derivative of your population and the cross-sectional area. And then I put lower bounds and upper bounds. Now, maybe we won't worry too much about this, but there's my pictures of those. And then this is just to give you a rough picture. I'm going to show some better pictures now. But this is like saying what happens if you didn't do any control and this is trying to show a very, for a short time, you know, what's going to happen. And you start to think about that this is hard for you to see, but this is going up much higher than this one. This is the optimal control. This is a very simple case. I have a constant cross-sectional area. I'll show some other cases now. This is my family term in the Q squared. It's pretty small. And we just give you an idea of what that looks like. And that's sort of hard to judge like that. So I have tried to make better graphics here to show. There's something in this. Very sensitive to science. Okay, so let me just take you to show you this picture. So I tried to draw a way to illustrate how far the population went down the river. So this is basically trying to look at the movement of this population. And so you can see that if you go along this no control line, which is this one here, it goes pretty far. And this is part of this basic set. And you have to say, well, how did you decide where to do this? So I just took some detection levels. Say for the simple example, if I could detect it at that time, if the population was greater than a half, then that's the idea. So this is what happens. 
So basically it's sort of like you think about this. The population starts up there at eight and it's moving and it gets here later. And so then you have these two cases up here. And the two cases say, well, what if you just did constant control? What would happen? And you see that's the blue line. And then the sort of picky line between is the optimal control. And of course you want to say, well, why not just do optimal control and later I try to show that if you compare the cost, you can see that it makes a bit different. You can also do it like for different times. And you can see you can do it for a total time of five or a total time of 15. You can see that in the non-control case, the population gets much further along the river. And then... So I just want to show you just a little bit about this. So basically this is my base case that I showed first and you say, well, why won't you just do constant control? Well, actually you can improve and so basically if you compare, you can do better if you do the optimal control. And you say, okay, what about caring capacities and all those? You can change caring capacities. You can change the basic growth rates and you can change different things here. And just to say what do the controls look like and these are some things about the controls. When you see how they look, then one graph does the varying the diffusion coefficient, one does the varying time. And it just gives you an idea of what you can do. But when you start to think about this, you might say, oh, how realistic is it to do these things? And maybe you also want to do it seasonally or what do you want to do? And I think that's one thing that I've learned over the years is that I might do optimal control and it gives me some idea about a control that would be good to use. But maybe it's not feasible and I need to approximate it. So let me show you that. So before I do that, I'm going to show you that I also did cases with cross-sectional area not being constant. You can do a bunch of other things. And so just to show you a little bit here, you can see that you can have it varies. In fact, you might say, well, these pictures look pretty similar. But actually, if you take the J-values for these two pictures with this one here, it actually has a more cross-sectional area. You get a better, bigger J-value because there's going to be more population moving in. And so I just approximated one of these just to give you an idea. So this was the baseline case that I had that's here. And then this blue is the approximation. And so you work on this, you can say, well, that's easier to implement, yes. And you say, is it much different in this particular one? It's only 6% off. So it's pretty okay. And I think that's something to realize sometimes that you sometimes the optimal control has too much variation and you're really one approximated. So this is just showing you the idea that what I could do with this. And we are still thinking about this a bit more. But the idea is perhaps we need a more realistic cross-sectional area. We need to find a good set of data to try this with. And again, you can restrict your flow in certain times of years or however you want to do it. And then I want to illustrate another case with a little bit more connected to that. So this is the case of the black C-antibular. 
And so if you want to think about this — really, in recent years... we won't have time to tell the whole history of the anchovies, but just to know a little bit: anchovies used to be harvested all over the Black Sea, but the anchovies on the northern coast of the Black Sea have mostly collapsed. And so basically they're now mostly harvesting anchovies along the coast of Turkey there. And so the idea here is to look at that. And we want to think about — again, when you look at the history, what's happened at times is that some predators came in, and for the anchovies in particular it's a certain type of jellyfish. So in some years the harvest fell a lot because the predator, the jellyfish, attacked a lot of the anchovies. So the idea was to build a model of this system in the Black Sea and to put in a food web, a simple food web. Because, I mean, I've done a lot of fishery models, and usually I'm just doing a single equation, which is not too realistic. So we're trying to do a food web here, and we try to keep it relatively simple. So we're thinking that this is at least closer to ecosystem-based fishery models, and we're looking at a simple food chain model to see the effects of the fishery in the Black Sea and also the predator–prey relationships. And then we are again going to harvest and look at the discounted net value. So we'll talk a bit about this, but I had data for many years of the landings, and also data about the number of fleets in use and all this, and we tried to use all of that in this work. So this is the diagram of our model. We have anchovy, we have zooplankton, we have a certain type of jellyfish. And then we know that there's actually another jellyfish that consumes that particular jellyfish, so we put that one in in a simple way, not as its own equation. So we have the three equations here: anchovies, zooplankton, and the jellyfish. And those arrows are trying to show who consumes whom: the anchovies consume zooplankton, the jellyfish consume anchovies, and the jellyfish also consume zooplankton. And so this is an ordinary differential equation system with logistic growth in each compartment. And just to show, in the blue there, the term with the H — that's our harvest term. You see that I have quite a few interaction terms. And you see that I have this funny one, the one that's called M6 here. That's the external predation, because we know that there's always external predation happening on the jellyfish, the type of jellyfish that actually consumes the anchovies. So we tried our best to use our data to estimate quite a few parameters in here. And so first of all, we know that this is a seasonal fishery. So when we built this, we built in certain times of the year when you're going to harvest — so this is an interval describing when you do the season within a specific year, and then we do it over several years, and the set we're doing the actions on is called omega. It's the intervals we're doing those actions on. And so we're only harvesting at those times. And the way this works is, the first thing is that we have a yield. So we're harvesting a portion of the population — that's our yield. And then we put a cost in here. And we made some estimates about what we thought the costs would be and what the catch is worth.
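A minimal simulation sketch of a three-compartment food chain of the type just described — anchovy A, zooplankton Z, jellyfish J, logistic growth in each compartment, harvest only on the anchovy, and an extra mortality term on the jellyfish standing in for predation by the other, unmodeled jellyfish. Parameter values and functional forms are illustrative assumptions, not the fitted Black Sea values.

```python
import numpy as np
from scipy.integrate import solve_ivp

def food_chain(t, y, h, p):
    A, Z, J = y
    dA = p["rA"] * A * (1 - A / p["KA"]) + p["eAZ"] * A * Z - p["mJA"] * J * A - h(t) * A
    dZ = p["rZ"] * Z * (1 - Z / p["KZ"]) - p["cAZ"] * A * Z - p["cJZ"] * J * Z
    dJ = p["rJ"] * J * (1 - J / p["KJ"]) + p["eJA"] * J * A + p["eJZ"] * J * Z - p["m6"] * J
    return [dA, dZ, dJ]

params = dict(rA=0.9, KA=1.0, eAZ=0.4, mJA=0.6, rZ=1.2, KZ=1.0, cAZ=0.5,
              cJZ=0.3, rJ=0.3, KJ=0.5, eJA=0.2, eJZ=0.15, m6=0.25)
harvest = lambda t: 0.3 if (t % 1.0) > 0.6 else 0.0   # toy seasonal harvest schedule
sol = solve_ivp(food_chain, (0.0, 20.0), [0.5, 0.8, 0.1],
                args=(harvest, params), max_step=0.01)
```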
And again, we're trying to think about maximizing, because you want to think of this as basically a representation of profit: it's the yield, which is like the revenue. And you could say, well, why didn't you put a price in there? You could put a price in, but you could factor the price out too. So basically it's a representation of revenue minus cost, and that gives you the profit. And so we're trying to maximize profit, and we use the control to do that. And this is to say that if you use Pontryagin's maximum principle, you have the integrand up here of your objective functional. You have these three adjoints with the lambdas, and they're attaching the right-hand sides of the differential equations to the Hamiltonian. And then you have a system of this type — this comes directly from Pontryagin's maximum principle, of how to find the adjoints, and they have final-time conditions. And again, this is the characterization for this particular problem, and it comes from differentiating the Hamiltonian with respect to the control. And so, just to mention this, we have certain years that we're using: we have the landings and the fleet data for those years, and we calibrated the problem from that. So first we had to estimate how much harvesting they were doing. We first did a constant one, but later we realized maybe they were changing it every year. And then, of course, we did an optimal control one. And then we had to estimate some parameters, and we just list some of the parameters that we estimated as best we could off the data. And so this is the optimal control case. This is what the population is doing. And so you see that the population — and this is in tons, so the units are tons — the blue is the anchovies, and then over here we have the jellyfish in red and the zooplankton. So this is the picture of what the state solutions, the populations, look like if you do the optimal control. And if we had time, we could show more cases, but this just gives you an idea. So for this set of parameters, this is what happens. And just to show you a slice of it — what happens if you look at these three particular years — this is what happens. This is what the mathematics, the numerics, says you should do, that you should do that optimal control. But we decided that's probably not realistic. And so basically what we did is we approximated it. And so here's the approximation: instead of doing this, we just approximate that first piece and then let it go up. And so that's probably more realistic. So the point is that you start off harvesting a bit slower, the fish are able to reproduce, and that's going to be worth more. And so, of course, we had to think about what maximum level made sense for the population — we did not want the population to crash, and we had some ideas about that. And so you might say, how good is this approximation? If you do this approximation, it's only 3% less than the other one. So that just gives you an idea of how you can do that with an approximation. And then we also had data about the fleets, and we could actually estimate that also. And one more thing we did: we started thinking about this and said, well, what if we do the same thing but just use one equation? And we did that.
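For reference, the structure just described can be written schematically: a profit objective over the harvest seasons, a Hamiltonian attaching the adjoints to the right-hand sides, adjoint equations with final-time conditions, and the control characterization from dH/dh = 0. The quadratic cost, the assumption that harvest enters the anchovy equation as −hA, and the resulting formula for h* are hedged reconstructions, not necessarily the paper's exact form.

```latex
\begin{aligned}
& J(h) \;=\; \int_{\Omega}\big(h(t)\,A(t) - c\,h(t)^2\big)\,dt
  \quad\text{(harvest seasons }\Omega\text{ only; a price factor can be factored out)},\\
& H \;=\; h A - c h^{2} + \lambda_A f_A(A,Z,J,h) + \lambda_Z f_Z(A,Z,J) + \lambda_J f_J(A,Z,J),\\
& \lambda_i' \;=\; -\frac{\partial H}{\partial y_i},\qquad \lambda_i(T)=0,\qquad y_i\in\{A,Z,J\},\\
& h^{*}(t) \;=\; \min\!\Big\{h_{\max},\ \max\!\Big\{0,\ \frac{\big(1-\lambda_A(t)\big)\,A(t)}{2c}\Big\}\Big\}.
\end{aligned}
```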
So we did that: we fit those parameters separately and did one equation. So we tried to do optimal control without the food web framework. And this is what happens, and maybe we can tell you more about this. Basically, over here is the net profit — this one over there with the 209, that's the food chain optimal profit. If you use the anchovy-only one, it looks like you're going to get more profit. But in reality that's not really true, because that's not what's happening to the fish; the fish are being consumed by the jellyfish. And you say, well, what if you took the control you got from your simple case and stuck it into the food chain case? You end up doing worse. So anyway, the point is that if you use a food chain, you're going to get more realistic results. So again, we're just talking about doing the food web, and also the point that we use the food web to get more reliable management. And also, if you have something that has too much variation in your control, you can approximate it. And then I just want to mention some things that we might discuss — maybe part of this could be in the discussion later this afternoon. First of all, I talked about the approximations, but also the point is that you might not always want to do continuous controls. You might want to do impulse actions — you might want to, you know, vaccinate the cattle one time a year when you bring them together, or something like that. But also there are people studying things like viability modeling, and that actually uses a lot of state constraints, trying to keep your population away from zero. And then of course there are people doing bioeconomics and adaptive management — that's where you try to update your choices when you get more information in the middle. And there are different people who have worked on that; there's a group at NIMBioS that's working on fisheries, and they're trying to use some of those ideas. And we're also actually doing things about learning. That means if you implement a control, you might not know exactly how it's going to affect the system, and maybe you have to learn about it. But that just gives some ideas, and thank you very much. Okay, so if you use the optimal control, you find an optimal solution for harvesting or whatever. But sometimes there is uncertainty in terms of parameters, in terms of everything. And if you don't know the exact parameters or exact functions, how does that influence your optimal control? That's a good question. In this case, we felt we got a pretty good fit to the landing data, the data we had. So you're right. But in general, you know, a system like this is sort of well behaved, a nice ODE system. It's going to be pretty robust to some changes. So I don't think that would be a problem. Usually parameter changes within some reasonable percentage won't change things too much. My question is related to the choice of the functional form. In physics, to use a convex function, a quadratic function, is natural because that is related to energy. Do you have an interpretation in ecological terms of the choice that you have made? Because in both models, you have a quadratic term. Yeah, I do have a quadratic term. And, I mean, a lot of times costs do have some nonlinearities. And I usually try to make that not too large.
I mean, I have done problems that don't have the quadratic yet, and I'm just not showing those today. I mean, you can certainly do that. Right. But yeah, so, I mean, it's first-person efficient models, people believe there's some non-moniarities in the cost. So, you know, but there, you know, the interpretation here is really this one is about making profit. Yeah. Okay. Yeah, so good. In the first part of your talk, in case you have a controlled factor, whatever, depending on time, like what the discharge is in that case. So the application of the idea of optimal controls seems to imply that you have to have sufficient knowledge about what's going on. So, the transition seems to imply that you have to have sufficient knowledge about what's going on, about their current state or their names. Well, I think that's true. I think if you don't understand this as well, it's hard to do optimal control on it. You're assuming that you understand the system without control and you assume that somehow you have an idea what your control action is going to do to the system. But that's why you might later want to do adaptive management when you find out that you did it. You did some actions for a short time in the states or some observations or not, which you expected, then you would need to adjust. You're absolutely. So that's also the idea about trying to learn about how the controls actually affect your system. So the first part of my question, the second part is that if you need to have this information, you have to maybe collect some data or do some sort of monitoring or whatever. And then they start. And then it takes another time to make decisions, which seems to imply that the model should have, in the realistic context, some sort of delay. And delay usually, of course, destabilizes the system. Well, that's a good question. If you think your system without control has delay, then maybe your problem would have. But I don't think you have. I mean, basically, you know, I mean, I've had problems where I'm actually controlling in real time. Like I've done things with bioreactors where I actually have a bioreactor gaging. We usually have to put on that machine. But here I think you're giving a rule of thumb in some sense, some sort of rule of thumb to what they might do on the Turkish coast, right? That they might think about doing a lower level of fishing in the first half of the season to get it up. I was just thinking that in case of this sort of uncertainty when there may be delay, that they will be included, maybe their constant control could be finally more efficient than optimal control. Because optimal control may be destabilized by this potential delay. Do you think it's kind of? Well, it's a complicated thing because, first of all, I'm not usually running. I mean, you can do optimal control problems out to infinity, but I'm usually not. Because most of the time when somebody is managing a system, they're only managing it for a short time. And so the de-stabilization thing, I don't know how fast it's going to occur. But I mean, I have done some control problems with delays in the system. And that's doable. And I didn't have a problem with de-stabilization. But I mean, I'm sure it depends on different systems. Yeah. Yeah, I may miss something. Did you take into account the inter-annual variability in any of the? Well, we had the data which has some variability in it, and we did the best we could to fit it. Right. So I mean, you're right. 
I mean, maybe this is a relative simple model, but for the fish anyway, we had that. And they didn't obviously catch the same amount every year. I mean, I could have put the data up there. But yeah. But in conditions like the effect of fish populations, climate? Well, yeah, I mean, that's an interesting question, which you're right. So this is like a rule of thumb. You're absolutely right. So I'm actually building, not in this one, but I'm building in temperature and precipitation and things like that as environmental components that actually need it. But I haven't done that on this one. But you're right. I mean, that's why I'm calling it a rule of thumb. It gives you a rough idea of what you should do. Yeah. Not like you know exactly what you should do. I think we're back back down. Yeah. So with your single species, optimal control. Yeah. And you're going to get into the food chain. You do worse. And I think worse meant the profits. No, you do better. You do better, but it's unrealistic because that's not what's really happening. It looks like if you just use one equation, you're going to overpredict what's going to happen because you're going to get more profit because nothing's pulling the fish down besides the harvest. There's no predators out there pulling a brain. So I thought what you did was you took the control that you would get from the single species and then put it in the control species. Well, I did suppose that was right. So if this, the middle one there is if you just use one equation and use its optimal control. Right. It looks like if you do that. But if you just do the, you took the control from the anchovy single equation one and put it in the food chain, it does worse. You do worse. Okay. So your profits gone down. So then the question becomes how much do you need to include in the food chain? So can you do that with a, you know, take a minimal food chain with adaptive management. And know that you're getting towards. Well, that's an interesting question because this looks like a pretty minimal food chain right here. Right. Because I just have, yeah, how minimal could I get? That's a good question. But I haven't thought about that. Anyway, this is an interesting idea. Yeah. Yes. Yeah. Is that you pointed out that you can kind of look at it, the optimal control generates something that you think is in some sense. Recently, how hard would it be to put strength on and say you want to say something, for example, piecewise linear, which is what it looked like that you probably are viable. Yeah, that's a little bit more tricky because differentiation in a piecewise linear space is tricky, right? Yeah. So, but you can easily, yeah, but I mean, it's easy to approximate. I think I might, for example, I did this one with a particular biorector that we built this biorector at university and we were controlling the way the bacteria was degrading a hazardous chemical and we got this, you know, nice, curvy thing, but this, you couldn't turn that machine to do that. But we could look at this and then approximate it and, you know, see how, so we could do it on that machine, right? The approximation. But yeah, so this thing, I think that the machinery of doing like you're differentiating things with respect to functions. I mean, the functions could be piecewise, but you want to have a space in which you can differentiate, right? So, and then you can approximate, but yeah. Yes. Yeah. Mark, yes. Okay. So, yeah, so for the water release scenario, it's kind of interesting to have. 
And is there a constraint on the total amount of water released? There is not yet. I mean, you can do that. You can. Because like when you have a dam, that's often the case. Well, that's an interesting question. Yeah, that's another. Yeah. I mean, I did not put that. I mean, again, I have some other problems which I was discussing with Chris yesterday about resource allocation where I only have so much resources to allocate. And then I can put that on. I haven't put it in this one. That's sort of this is. This is doable. Yes. Yes. Yeah. Yeah. This one is tricky though because of the equation is like already tricky. But yeah. But but yeah, you can put on that type of thing. I think also like you could that that's sort of like you can either like restrict the amount of resource or restrict the amount of money you have to spend. I think that's a realistic thing you might need to do many times. But I didn't do it. I can think of a biology. Right. Yeah. Like I was doing this one with resource allocation. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah.
|
Some basic ideas behind optimal control of ODEs and PDEs will be introduced. Control of a flow rate in a PDE model for a population in a river will be illustrated. Harvesting in a system of ODEs representing an anchovy ecosystem in the Black Sea will be discussed. Issues and new features of control approaches will be presented at the end and could lead to discussions later at this workshop.
|
10.5446/57608 (DOI)
|
So I'm going to talk about some recent work we've been doing trying to improve inference for nonparametric approaches to ecological dynamics. So everything that I'm really interested in doing is trying to build predictive ecosystem models so that we can do ecosystem management. But I think that there are some fundamental uncertainties in ecosystem modeling. First of all, at least in the systems I work in, we're never going to have a complete list of the species that are important in the system. And even if we did, we wouldn't know all of the traits that contribute to interactions among those species or the other state variables. That's a pretty strong statement. I'm sure somebody's going to tell me I'm wrong. But that's my perspective on this. And in addition to that, it's really hard to quantify things like non-consumptive interactions or context-dependent behaviors that the colleges to go out and look at things can tell you are actually happening and probably affect the dynamics, but are really pretty hard to put into ecosystem models. So if we're going to try and make predictions for ecosystems that we use in doing ecosystem management, we need to confront this issue of incomplete state spaces and uncertain model structure. All right. So as far as I know, there are about three ways that I'm aware of to handle these unobserved variables, the rest of the system that we're not really sure exists or that we're not paying attention to at the time. The first is that we can try to account for those unobserved state variables using state space or hidden markup approaches. And that works pretty well if you already know how the system works. Right. And so the other way that we could do this is what I think we mostly do, which is to just treat those unobserved things as noise and just focus on the variables that we already know are important and then just kind of not pay attention to the rest of them. And the third thing that we could do is to try and account for those missing state variables when we go to make predictions, at least make account for them implicitly using lags. And so I'll come back to that in a minute. So I'm going to focus mostly on these two approaches for the first couple of minutes of the talk. And I want to have a way of comparing those two different ideas. So if we think about the world as divided into the variables we care about, we'll call those the observed variables, and the other variables that are probably important in the dynamics but we're not paying attention to, we'll call those the unobserved variables. They each have their own dynamics that interact. And we can think about writing down a model that's just in terms of the observed variables as effectively an approximation to the full dynamics where we've fixed the values for the unobserved states at some constant value. Let's pretend that it's the mean. If we do that, then we can think of the quality of this approximation as determined by the variance in those unobserved variables and how sensitive our observed variables are to the changes in those. That's pretty obvious. And this is just a first order linear expansion. So I'm sure that somebody can tell me how to do this better. But I'm a fish biologist. So I'm just going to keep going with this because that's what I know how to do. 
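The first-order argument above can be written compactly. With observed variables x, unobserved variables y, full dynamics F, and the unobserved states fixed at a reference value ȳ, a hedged reconstruction (the notation is not taken from the slides) is:

```latex
x_{t+1} \;=\; F(x_t, y_t)
\;\approx\; F(x_t,\bar y)
\;+\;\left.\frac{\partial F}{\partial y}\right|_{(x_t,\bar y)}\!\big(y_t-\bar y\big),
\qquad
\mathbb{E}\big\|x_{t+1}-F(x_t,\bar y)\big\|^{2}
\;\lesssim\;
\Big\|\tfrac{\partial F}{\partial y}\Big\|^{2}\operatorname{Var}(y_t).
```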
So whenever the variance in those unobserved states is small — if they don't change very much over the time scales that we're interested in — this approximation is probably good, particularly if the variables that we care about are relatively insensitive to changes in those. OK. So that's one way to think about this. The other way to think about this is to try, when we go to make predictions, to implicitly account for those things that are missing using time lags of the things that we are paying attention to. The motivation for this is Takens' theorem and time-delay embedding. And if you work in ecology and you pay any attention to Sugihara's work on empirical dynamic modeling, you've probably gotten to see a cool video like this, which shows the attractor being traced out in a native coordinate space — so x, y, and z — compared to the attractor being traced out in the delay coordinate space: that's x at time t, at the next time step, and further steps into the future. And Takens' theorem says that there's a one-to-one correspondence between these two attractors, provided that we use enough lags and the spacing in time is sufficient. But the proof of Takens' theorem requires some topological approaches to dynamics and fun words like diffeomorphism, which my fish school never told me about. And so thinking about how to generalize this idea, or how to compare it to the previous idea, is actually pretty hard. So I wanted a more organic, intuitive approach to thinking about this. So here's what I came up with. If we take the dynamics of the unobserved variables and push them back one step in time, and then plug the result into our model for the observed variables, well, we haven't really accomplished anything — we've just pushed the problem back a time step. But if it happens that we can determine the values of the unobserved variables at the previous time step given information on where we were then and where we've moved to — if we can actually do that, then there's a function that governs those two things, we could plug that in, and now we have a model that only depends on the observed state. Now, if we can't do that, which is pretty likely on the first pass, we could repeat this argument. We could shift back two steps. We could shift back three steps if we need to. We could keep going, obviously, forever. But the point is that given a long enough history of the observed variables, we can approximate the past values of the unobserved states. And Takens' theorem effectively says that for a generic deterministic system this approximation can be made exact. For a stochastic system, obviously, that's not going to be true. But we could make use of the conditional expectation for the past values of the unobserved variables given the subsequent history of the observables. And if we do that, this leads us to a model for just the observables in delay coordinates. And we can think about the quality of that approximation using the same sort of first-order linear expansion. And if we do that, we have our model in delay coordinates, and the error in that approximation is given by the product of these submatrices of the Jacobians. And what this shows us is that the quality of this approximation depends on the variance in the unobserved variables conditional on the subsequent states of the observed variables.
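In the same notation as before, the delay-coordinate version replaces the unobserved state with its conditional expectation given the observed history, so the relevant error term involves a conditional rather than an unconditional variance. Again a hedged reconstruction; the J's stand for the Jacobian blocks mentioned above.

```latex
x_{t+1} \;\approx\; G\big(x_t, x_{t-1}, \ldots, x_{t-k}\big),
\qquad
\text{error}\;\sim\;
\big\|J_1 \cdots J_k\big\|^{2}\;
\operatorname{Var}\!\big(y_{t-k}\,\big|\,x_{t-k},\ldots,x_{t}\big).
```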
So if those observed variables tell us anything at all about the past states of the unobserved variables, this variance will be less than the unconditional variance for those. And so as long as the system of the observed variables isn't too sensitive to changes in the past values of the unobserved variables, this approximation ought to produce a lower approximation error than just ignoring those unobserved variables. Certainly there are more conditions we could put on that statement, and I'd love to hear about more rigorous ways to approach this, but that's my way of thinking about it. So unfortunately, in general, there's no recipe for writing down this delay coordinate map when you don't already know the complete dynamics of the system. Obviously, if you knew the complete dynamics, you could do it. And so typically we have to estimate that delay coordinate map from some data. And the workhorse in all of my work for doing this is Gaussian process regression, which is a Bayesian nonparametric approach to function approximation. So we can think about there being an unknown function that, in the standard Bayesian context, I would like to assign a prior probability distribution to, observe some data, and update the distribution over my space of unknown functions. So we need a prior on spaces of random functions. The most convenient to use is the Gaussian process, which is just a continuous generalization of the multivariate normal distribution. And as such, it's governed by a mean function and a covariance function. In all of the stuff I'm going to show, the mean function is set to zero — that's the black line over here. And for the covariance function there are lots of possible choices, but the covariance function I'm using for most of this stuff is a tensor product of squared exponential covariances, where each input variable gets its own length scale parameter. I'll come back to that in a second. For now, though, I wanted to show a little video of the updating of this as we add some data. I just made this yesterday. It goes by a little too fast, so maybe I can step through it. When we add a few data points, this is what we get. As we add a little bit more data, this is what we get. And the point is that even if we have a relatively small amount of fairly noisy data, we're able to concentrate the probability around a reasonable estimate of the shape of the function, assuming that we have data over the range that we need. So that's in 1D, though. Now all the time delay embedding stuff requires that we have Gaussian processes with multiple inputs. And so coming back to this length scale parameter: the length scale parameter effectively governs how flexible the function is in the direction of each input. And so when we set the length scale parameter to zero, that lag, that input, effectively drops out of the model. And so we can encourage sparsity in the models that we're estimating from data by setting a prior on the length scale parameters with a mode at zero. So in the absence of any information in the data, the model will converge to zero for the length scale parameter in that direction. So applying that to this little example here, what happens is there actually isn't any variation in the function in the direction of lag 2, and so that just drops out of the model.
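A hedged sketch of estimating a delay-coordinate map with Gaussian process regression, as just described: build a lagged design matrix from one observed series, then use a product squared-exponential covariance with one relevance parameter per lag and a zero mean function. Note the parameterization: the code uses inverse squared length scales (phi), so phi_d = 0 flattens the kernel in direction d and that lag drops out, which matches the sparsity idea in the talk. This is illustrative code, not the speaker's implementation; hyperparameters here are fixed rather than estimated, and no sparsity-inducing prior is actually fit.

```python
import numpy as np

def delay_embed(x, lags):
    """Rows are (x_{t-l} for l in lags); the target is x_t."""
    x = np.asarray(x, dtype=float)
    m = max(lags)
    X = np.column_stack([x[m - l: len(x) - l] for l in lags])
    return X, x[m:]

def ard_sqexp(XA, XB, signal_var, phi):
    """k(x, x') = s^2 * prod_d exp(-phi_d * (x_d - x'_d)^2); phi_d = 0 removes input d."""
    d2 = (XA[:, None, :] - XB[None, :, :]) ** 2
    return signal_var * np.exp(-(d2 * np.asarray(phi)).sum(axis=-1))

def gp_posterior_mean(Xtr, ytr, Xte, signal_var, phi, noise_var):
    K = ard_sqexp(Xtr, Xtr, signal_var, phi) + noise_var * np.eye(len(Xtr))
    return ard_sqexp(Xte, Xtr, signal_var, phi) @ np.linalg.solve(K, ytr)

# Usage on a toy series with three candidate lags (lag 3 is switched off via phi=0).
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 25, 300)) + 0.1 * rng.normal(size=300)
X, y = delay_embed(series, lags=[1, 2, 3])
pred = gp_posterior_mean(X, y, X, signal_var=1.0, phi=[1.0, 0.5, 0.0], noise_var=0.05)
```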
This is probably the most important distinction between what I'm doing and the previous history of time delay embedding in ecology. This allows us to do automatic selection of relevant lags without specifying ahead of time an embedding dimension and a time separation between lags. And that allows us to have much more flexible descriptions across different time scales. Plus, because we're doing this in a Bayesian regression context, we can easily and coherently include additional drivers other than just the lags of the variables that we're looking at. And we'll come back to that in a minute. So in all the work that I've published on this so far, we've shown that this delay embedding approach typically provides better predictions and gives us better control in incomplete state spaces than the process noise approximation. The most recent of these is a meta-analysis we did of 185 different fish populations, where we showed that for 90% of the populations we looked at, the prediction error in our delay embedding approach is less than the prediction error for the alternative model that's typically used in fisheries. And I did this over four different fisheries models — it's always the same picture. And on average, the error in the predictions is 25% less than for the standard fisheries model; in some cases it's really quite a bit less. OK, so better predictions, not surprisingly, lead to more robust control policies that are less prone to collapse and produce greater long-term yields. Now, for those of you who are really unhappy with this black box approach to dynamics and have a particular model structure in mind, the other thing we can do with this framework is to use that particular model structure as the mean function for our Gaussian process. And then when we go to update with data, we can use that as a tool for identifying and correcting for model mis-specification. But these are all things that are already in print, and I wanted to use this opportunity to tell you about the brand new things that we're working on. So I'm happy to talk more about those if anybody has any questions, but I'm going to instead focus now on the work we've been doing trying to integrate data from multiple locations and to explicitly account for observation errors. So — sorry, this is going really fast; it keeps getting ahead of me. Let's see. It's often the case that we have relatively short time series in ecology. But for any particular species, it's also often the case that we have time series for the same species from several different locations. And so an obvious thing to try would be to incorporate information from multiple different locations. And so I'm going to tell you in particular about a project done by Tony Rogers, who assembled a database of blue crab population time series going from Massachusetts down to Florida. So here's a plot of the crab abundance time series. What a mess, right? And if we take each pair of these time series and calculate the correlation coefficients, here's the resulting correlation matrix. The average correlation is less than 0.2, and there's no obvious spatial structure. So what is going on? We have the same critter in very similar habitats, which ostensibly should be producing very similar dynamics, but they are totally uncorrelated. So what's the story?
Are the dynamics actually really different? Is it just noise? And so we wanted to ask whether these uncorrelated time series are actually generated by similar dynamics. And to get at that, we developed a hierarchical approach to the Gaussian process delay embedding, where we say now that the dynamics in site i are given by our delay embedding maps. So that's a function of the past crab abundances, and we've extended the map to include past values of temperature and precipitation, which, based on previous studies of the crabs, we know are likely to affect the dynamics. Now, to make this hierarchical, what we need to do is share information across different sites. And the way to do that is to say that the mean function also follows a Gaussian process, and then the site-specific delay embedding maps can be thought of as deviations from this common mean function. If we do that, then the total covariance is the sum of the shared covariance and the site-specific covariance. And if we say that the shared covariance is some constant fraction of the total, then this one parameter, this rho, describes the correlation in the delay embedding maps across two locations. So this gives us a way of measuring the similarity in dynamics without having to specify a particular model structure and irrespective of any temporal correlation in the data. So if we do that, here's the correlation matrix that we end up with. This is the one that I showed you before, and this is the matrix of dynamic correlations. On average, the correlation is quite a bit higher — something around 0.8 across all the sites on average. And now there's evidence of clear spatial structure, where the dynamics are more similar across nearby sites and less similar across sites that are farther apart, which is intuitively very reasonable and not something that we built into the model. Overall this model explains about 50% of the variation in those really noisy, totally uncorrelated dynamics. And so what this suggests is that those totally uncorrelated time series are actually the result of very similar dynamics across the range of the crabs. I'd be happy to talk more about this, but I kind of want to move on to a couple of other things. So, next thing. If we think about this hierarchical approach, we're still actually just saying that the dynamics in site i are a function of the previous abundances in site i. That is, we're only using local information. And you don't have to be super creative to wonder, well, couldn't I make things better by using information from the neighboring sites? So we could think about there being spatial delay coordinates. But a moment's reflection suggests that this actually is sort of problematic, because Takens' theorem says that I could reconstruct the complete dynamics for location i using just lags of location i. So if I can do that with just one location, and I can do it with several locations, clearly there are many different combinations of lags that can get me to the same answer, which means that there should be some identifiability problem if I just naively throw in the additional lags. Now, in a Bayesian context, we can try to deal with identifiability problems through prior specification. If you remember, in the Gaussian process the length scale parameters govern how much the delay embedding map can change with each input; they effectively control the relevance of each input variable.
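A sketch of the hierarchical covariance structure described above, under the stated assumption that the shared covariance is a fixed fraction rho of the total: the covariance between two different sites' delay maps is rho·k(x, x'), while within a site it is k(x, x'). The kernel and all names are illustrative stand-ins, not code from the study.

import numpy as np

def sq_exp(X1, X2, ell=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def hierarchical_cov(X_by_site, rho):
    """Joint covariance over all sites' delay-coordinate inputs."""
    sites = list(X_by_site)
    blocks = []
    for i in sites:
        row = []
        for j in sites:
            K = sq_exp(X_by_site[i], X_by_site[j])
            row.append(K if i == j else rho * K)   # shared fraction across sites
        blocks.append(row)
    return np.block(blocks)

rng = np.random.default_rng(2)
X_by_site = {s: rng.normal(size=(20, 3)) for s in range(4)}   # 4 sites, 3 lags each
K_joint = hierarchical_cov(X_by_site, rho=0.8)
print(K_joint.shape)   # (80, 80)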
So a reasonable thing to say in the prior is that we expect the relevance of the previous crab abundances to decay with time and with space. That is, we expect the local information at the last time lag to be more important than at previous lags, and we expect nearby sites to be more important than sites that are far away. The specific functional form we're using to add this structure to the prior on the length scales is something that PhD student Bethany Johnson came up with. So the questions that we want to answer in this framework are, first, does adding information from neighboring sites increase our ability to make predictions? And second, does adding this spatial structure to the prior on the length scale parameters actually improve our ability to make predictions? To get at that, Bethany did a little simulation study using a two-species coupled map lattice where the local dynamics are chaotic, the dispersal is nearest-neighbor, and there are only nine sites with a periodic boundary. She then simulates 19 time points, which we're going to use to train the Gaussian process models, and repeats that for several different migration rates. The goal is to find out which of these various flavors of delay embedding models produces the best predictions. So she did this for delay embedding models using only the local delay coordinates, using spatial delay coordinates with no structure on the length scale parameters, and then also using the structured prior for the length scale parameters. Okay, so here's how that turned out. The vertical axis is the root mean square error, and the horizontal axis is how many steps into the future we're trying to predict. These are pure out-of-sample predictions — the model never saw any of the data it's trying to predict. The red line shows the results using only the local delay coordinates. The purple line shows the results from naively just adding in data from the other locations. And the yellow is what we get when we add the structure to the prior on the length scale parameters. There are two things that are important here. The first is that just adding the additional information from the neighboring sites doesn't actually necessarily make the predictions any better — in fact, it can make them worse. That is, when we're thinking about out-of-sample prediction, adding additional inputs doesn't necessarily make things better. The second thing is that the model that uses the prior with structure on the length scale parameters is better in both cases; it's typically at least as good, if not better, than all the other things that we try. And so that gives us a way of extending the forecast horizon, the time over which we can make useful predictions. Now, I think this is pretty promising. It's still pretty early on, but our next step is to try to apply this to some real ecological time series. And so if any of you know of any data sets where this might be worth trying, please let me know, because we're looking — and maybe you know of some. OK. Last thing: observation noise, which in everything I've shown you so far I have completely ignored. Now why am I choosing this time to actually start paying attention to it?
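Backing up to the simulation study described a moment ago: a sketch of a two-species coupled map lattice with nearest-neighbour dispersal, nine sites, and periodic boundaries. The local map (a two-species Ricker competition model) and all parameter values are stand-ins, since the talk does not specify them.

import numpy as np

def local_map(N, r=(3.0, 2.8), a=(0.6, 0.7)):
    # N has shape (2, n_sites); chaotic two-species Ricker competition dynamics
    N1, N2 = N
    new1 = N1 * np.exp(r[0] * (1 - N1 - a[0] * N2))
    new2 = N2 * np.exp(r[1] * (1 - N2 - a[1] * N1))
    return np.array([new1, new2])

def step(N, m=0.2):
    # local growth first, then nearest-neighbour dispersal with periodic boundary
    grown = local_map(N)
    left, right = np.roll(grown, 1, axis=1), np.roll(grown, -1, axis=1)
    return (1 - m) * grown + 0.5 * m * (left + right)

n_sites, T = 9, 19
N = np.full((2, n_sites), 0.5) + 0.01 * np.random.default_rng(3).random((2, n_sites))
series = []
for _ in range(T):
    N = step(N)
    series.append(N.copy())
series = np.array(series)   # shape (T, 2, n_sites): the training data for the GP models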
Well, I ran a little workshop last fall where I brought in fisheries people from all over the US to talk about these methods, and the head of stock assessment for the National Marine Fisheries Service said, effectively, that he was never going to pay any attention to this stuff until we actually dealt with observation noise. OK. So we have to deal with observation noise — fair enough. So we've recently come up with a way of dealing with this through a state-space approach implemented through an EM algorithm. So here's the model. Just like before, the process model says the next state of the system is some unknown function of the previous state. And on top of that, we now have an observation model, which says that those states are observed with some additive noise. Certainly, we can try other things, but this is the setup that I've got so far. Now, rather than doing the full Gaussian process delay embedding, instead I'm going to use a local linear approximation to the dynamics, which, for the state dynamics, turns this into just a local multiple regression problem. So there's an intercept and a set of lag-specific slope parameters, and those parameters are allowed to change as we move through the state space. All right. So with that setup, the likelihood for this pair of models is given by the product of the likelihood for the observation model and the likelihood for the process model. And from this likelihood, we'd like to choose the time series of states and the collection of regression parameters to maximize the likelihood. The way I made this work was to implement an EM algorithm approach. And the way this goes is: step zero is to make a guess at what the states are, and my initialization is to just say that the unknown states are equal to the observed data. (Oh, there's no way I'm stopping now — I'm just a couple of seconds over. You've got time, but you have to leave time for questions.) Okay. So step zero: set x equal to y. y is not the unobserved variables anymore — now it's the data; I made a few notation changes, sorry. Step one is to do the local linear regression given the current estimate of the states. Step two is to move over and ask: what is the expected value of the states given the original data and our collection of local regression parameters? Then take those estimates of the states, go back, redo the local regression, go back and forth, and iterate until this process converges. Since the steps in both cases are totally algebraic, this thing is really easy to compute — there's no MCMC nonsense or anything like that — and so it converges in a second or two on my laptop. I should point out that this is totally analogous to a Kalman filter where the dynamics are governed by this delay embedding. It's effectively a state-space version of Sugihara's S-map — so it's kind of like an SS-S-map, right? Anyway. Okay. So to motivate why I might want to do things this way, here is some simulated data from a three-species system. Again, I only have data for species one, and it's really noisy — it's something like 40% observation noise. So if I plot this noisy data in delay coordinates, this is what it looks like. Now, there's clearly some structure there, there's some shape, but I wouldn't call that an attractor — it looks more like something a cat produced. Okay. So now we're going to run our EM algorithm on this and see what comes out.
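A deliberately crude caricature of the alternation just described — not the actual algorithm from the talk: start with the states equal to the data, regress each state on its lags, pull the state estimates toward the model's predictions, and repeat. A faithful version would use the full conditional expectation of the states (a smoother) and local rather than global linear regression; the blending weight and all other details here are assumptions.

import numpy as np

def em_like_denoise(y, lags=2, w=0.5, n_iter=50):
    x = y.copy()                                    # step 0: states = observations
    for _ in range(n_iter):
        # "M-like" step: linear regression of x[t] on its lags (global, for brevity)
        X = np.column_stack([x[lags - k - 1: len(x) - k - 1] for k in range(lags)])
        target = x[lags:]
        A = np.column_stack([X, np.ones(len(X))])
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        pred = A @ beta
        # "E-like" step: nudge states toward a blend of the data and the prediction
        x_new = x.copy()
        x_new[lags:] = w * y[lags:] + (1 - w) * pred
        x = x_new
    return x

rng = np.random.default_rng(4)
true = np.sin(np.linspace(0, 20, 300)) + 0.3 * np.sin(np.linspace(0, 55, 300))
y = true + 0.4 * rng.normal(size=300)
x_hat = em_like_denoise(y)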
And when we do that, this is what happens. So as we run the EM algorithm and vary the signal-to-noise ratio, we go from — well, from the cat — to this. And that's not bad, considering that the true attractor, which I didn't know anything about ahead of time, looks like that. So it's not perfect — there are clearly spots that are screwed up — but considering that we started with data that looked like that, the fact that we can get this close... when that happened, I was very excited. So what might we want to do with this sort of thing, other than make cleaner pictures and really awesome videos? Well, the first thing that came to mind is that, as everybody here knows, estimating Lyapunov exponents from short, noisy time series is notoriously difficult. So I wondered whether we could use this tool to try to make that any better. Oh, I forgot to say, by the way: if we go back to the time series, this blue line is the filtered state estimates and the black line is the true states from which I generated the red, noisy data. So, like I said, it's not perfect, but it looks pretty good. Okay. So, can we use this to improve our ability to estimate Lyapunov exponents from short, noisy time series? To get at that question, I did a little simulation test using the discrete logistic model to generate some dynamics with some noisy observations, where the range of noise goes from practically none to roughly 30%. I did this over a range of values for the growth rate parameter, so we had either a limit cycle or chaos. So I simulated the data; I fit — not the Gaussian process — the EM algorithm version; I estimated Lyapunov exponents from the answer that I get there; and I compared the estimated Lyapunov exponents to the ones that I get using the same local linear regression approach while ignoring observation noise. I did this a bunch of times just to see how the answers vary. Okay. So here's how that turns out. The four different panels correspond to four different values of the population growth rate. The vertical axis is the Lyapunov exponent; the horizontal axis is the amount of observation noise. The black line is the true Lyapunov exponent. The blue patches indicate the distribution of estimates that we get out of this EM algorithm, and the red patches indicate the distribution of estimates we get when we ignore observation noise. Maybe the most important thing from this is that ignoring observation noise, even for modest amounts of observation error, leads to severely negatively biased estimates. That's not news — other people have found that before. But it does suggest that if I were to naively go and apply this stuff to some ecological time series without controlling for the observation error, I would almost certainly conclude that the dynamics are much more stable than they actually are. On the other hand, the estimates that we get using this EM algorithm approach are much less biased overall. They're not perfect, but they're much less biased, and they're much less sensitive to the overall level of observation noise, at least over the levels of observation noise that I've looked at so far. And frankly, in ecological time series — yes, I know that I'm not getting exactly the right answer for the Lyapunov exponent, but I think that most of the time the question is really whether it's positive or negative.
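For reference, a sketch of the benchmark used in a test like the one just described: the "true" Lyapunov exponent of a discrete logistic map, computed by averaging the log absolute derivative along a long noise-free trajectory. The exact model formulation and parameter values used in the talk are not specified, so the standard logistic map is assumed here.

import numpy as np

def lyapunov_logistic(r, n=100_000, burn=1_000, x0=0.3):
    # average of log|f'(x_t)| along the trajectory of x_{t+1} = r x_t (1 - x_t)
    x = x0
    total = 0.0
    for t in range(n + burn):
        deriv = r * (1 - 2 * x)
        x = r * x * (1 - x)
        if t >= burn:
            total += np.log(abs(deriv))
    return total / n

for r in (3.2, 3.6, 3.9, 4.0):
    print(r, round(lyapunov_logistic(r), 3))
# negative for the limit cycle at r = 3.2, positive (chaos) for the larger values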
And here, with appreciable noise, ignoring observation error I only get the answer for the sign right 50% of the time, whereas with this EM algorithm approach it's right more than 90% of the time, regardless of what the noise is. Okay. So that's it. To sum up, I tried to make the case that if what we're interested in doing is making predictions, this delay embedding approximation produces better predictions, leading to more robust management policies; that by integrating information across space we can increase the forecast horizon, and we can also use these tools to quantify similarity in dynamics across space; and that this state-space approach improves our ability to do attractor reconstruction and leads to less biased estimates of the Lyapunov exponents. This is all brand new — I actually just finished this Lyapunov exponent stuff like a week ago — so there's obviously lots more to do. First of all, more challenging simulation tests than the discrete logistic. I'd like to extend this EM algorithm to multiple kinds of inputs; right now it only handles one kind of input. But the thing that I'm maybe the most excited about is to try to apply this to all the ecological time series that I can get my hands on. So before I take any questions, I should say that many people have contributed to this work through the years, some of whom you might recognize, most of whom know how to smile when you say cheese — except for that guy. All right. Yeah. So, questions? I have two comments and a question. First: now I want to do this myself. Second: you absolutely have to apply this to Antarctic krill, because there's so much data, it's from different places, and it's so important. If you don't do it, I will. But you have to answer my question: how sensitive is all of this to the priors you're fitting? So that's a good question. Almost all of the stuff ignoring the observation noise is pretty robust. Having the prior on the length scale parameters does something else that I didn't actually mention, which is that we can use an old result on stationary Gaussian processes on the unit interval that sets the expected number of wiggles over the unit interval to one on average. And so all of my stuff uses that to regularize the shape that we're getting. So on average, ahead of seeing any data, I'm saying that I'm really mostly interested in things that have one hump. If there's a lot of information in the data that says it's going to be more than that, we'll get there, but I don't end up with things that are extra wiggly because of that. So I was really interested in — I think it was Bethany's work — on using spatial information. And I would have expected, before you showed your results, that the usefulness of including spatial information, which I guess you would measure as the reduction in the root mean square error, would go up as migration goes up. But actually, it shows the opposite, right? Yeah, although it's not monotonic. I showed two values; we had like four different ones, and I just picked which ones to show. But as you go from 0.1 to 0.2 to 0.3 to 0.4, it isn't monotonic. So it's not simply the opposite — I can't fully explain it; it's just not that simple. I don't know why it's not. Somebody who actually does spatial dynamics — this is my first try — would be better to ask, because I think it has a lot to do with how much the sites synchronize and how much they don't. But we really haven't looked at it in any detail.
Frankly, that's also brand new, like last week. Yeah — Beth? I'm curious: besides the fact that MCMC is a big pain, when you wanted to do observation noise — you've got this lovely Gaussian hierarchical machine going — is there a reason you didn't build a state-space model with observation noise into the original Gaussian process? Yeah. So we did do that, for the paper that I mentioned with Jim Thurston. And the amount of time that you wait for the answer, and the quality of the answer that you get back, given that it takes forever for the MCMC to get to anything like you believe has converged — it's just way too long to wait around for answers where at the end you're still like, is that right? I don't know. So I'm much happier doing this when the goal is simulation testing, because I want to be able to do 10,000 different versions of this, and I don't want it to take until I retire; I want it to finish in a day at most. For a specific problem — if I had a specific data set where somebody could say, you know, this observation likelihood is just not reasonable — then, okay, if all I need to do is construct an answer for one data set, I would do something more elaborate. But this is actually very efficient for asking just the fundamental question: can I do a state-space version of this delay embedding stuff? Yeah. It would seem like the work you did with your student on the blue crab would also lend itself very naturally to looking at spatial stock-recruitment curves, where there's a little bit of interaction between different rivers and river reaches. Yeah, that's a great idea. After we did the stock-recruitment meta-analysis that I mentioned, I've always had it on my list of things to do to go back and do it again with the hierarchical model, just to see if we could extract any additional information or structure from it. Yeah. Thanks. Yeah. You mentioned, if I got it correctly, that the non-topological delay embedding works better than empirical dynamic modeling — I'm referring to the first part of your talk, right? So, just using your approach with the Gaussian process, have you checked, have you compared? Oh, have I compared predictions? Yeah. Okay, so long-time collaborator George isn't here, so I can say that this works better. Yeah, it does. So you can construct situations where the S-map is going to do better and where the Gaussian process is going to do better. The important thing, for, say, the recruitment meta-analysis that we did: we actually did that project with the S-map before, and it doesn't turn out any better than using just the stock-recruitment methods — the prediction error is more or less comparable to what we get doing a standard stock-recruitment model. The fundamental difference is the automatic lag selection. By allowing for a mix of time scales and automatically detecting what the relevant inputs are, we're actually able to do things that are much more flexible than the S-map. So yeah, at least in that example, it works better. Okay. So for the Gaussian process, you're looking at that space with the prior on, sort of, the relevance of space. If I wanted to extend that to other correlated data — maybe it's not replicates in space, but there are some kind of correlated measures that I want to use to predict my main thing.
Could I use this process if I just figured out a prior for how those other series fit into the delay embedding? So I have multiple time series that are measuring correlated quantities, and I want to predict one of them. Couldn't this kind of approach be used for more than just space — it would just be a matter of doing it, right? Oh, sure. Yeah. The hierarchical thing — there's no explicit space in there; there are just the replicated measures. But there seems to be space in your prior, sort of holding that information. Right. So you can do it if you say that there is spatial structure; you can build that in on purpose. But this was sort of — because what I wanted to know was what the spatial structure looks like, we were agnostic about what that was. So it just did it pairwise, and you could do that for any pairs or triplets of time series you wanted. One more? So, what are the data demands on this — how long a time series do you need, and so on? And can you use the spatial information to get around that? Yeah. So the generic answer to that is that you need to have several multiples of the intrinsic time scale for the system, whatever that means. So, I wonder if I have it — let me — one second. I think I might have a slide from the meta-analysis in here that would be useful. Okay, there you are. So, to get a handle on how much data you need: in that recruitment meta-analysis that I never actually told you anything about, other than that we did it, we asked what the relationship between prediction error was against the number of observations and several other things. But it turned out that it was the number of observations relative to the age of maturity that gave us a fairly linear decrease in the prediction error, such that when we have, say, 10 times the age of maturity, we're actually able to predict roughly 50% of the variation. So these are yearly observations? Yeah, this is annual, but for shorter-lived things, if we happen to have more data, that would be better. It's not the years per se — it's the number of observations relative to the life history that matters. And can people use this for tipping points? Does it take a lot of data to do that — in other words, to decipher the interactions and how they're changing? Yeah. I messed around a little bit with the question of whether we can do this — sorry — okay, so I did mess around a little bit with whether.
|
Ecosystem-based approaches to management are desirable for many reasons. However, quantitative approaches to ecosystem management are hampered by incomplete knowledge of the system state and uncertainty in the underlying dynamics. In principle, we can circumvent these difficulties by using nonparametric approaches to model the uncertain dynamics and using time-delay embedding to implicitly account for missing state variables. However, these methods are incredibly data-hungry and tend to be sensitive to observation noise. Here I propose to mitigate these practical obstacles a) by adopting a state-space perspective that allows us to partition observation and process uncertainty and b) by combining data from multiple locations using hierarchical and spatial modeling approaches. We find that noise reduction substantially improves attractor reconstruction and reduces bias in the estimation of Lyapunov exponents from noisy time series. Spatial-delay embedding significantly increases the time horizon over which useful predictions can be made compared to more traditional local embedding.
|
10.5446/57609 (DOI)
|
I have a couple of parts here, so it's a couple of things — we've got something old and something new, as you might be able to tell from the numbers on the abstracts. This stuff is all either in collaboration with Steve Cantrell or with Steve, Mark, and Yuan Lou. And — why is this not working? I did put it back in. It did work; now it's not going to go forward. It worked at first. There's no reason for this thing not to work. Okay, the buttons work. Okay, so the first case is aphids and ladybugs. Those of you who have been in the business a long time might remember aphids and ladybugs after Mount St. Helens; it was an interesting natural experiment, which would be hard to repeat. So this system has multiple scales of various kinds built in. The story is that after the mountain exploded, fireweed recolonized the area and formed patches. The aphids colonized the patches, and eventually ladybugs came back to eat the aphids. And after the first summer, if I'm getting this right, you have the usual thing that larger patches wound up having larger aphid densities. In later years, that was not true. This was a little bit of a mystery, and so Steve and I tried to figure out what was going on. So the story is that you've got aphids living on patches. What the ladybugs do, apparently, is fly around, look for patches or look for aphids, and then aggregate where they think they're going to find some aphids. And so that little cartoon displays the ladybugs' world as patchy; the aphids' world is a patch. So you've also got intrinsically different spatial structure for the predator and the prey. So the background question is: if you look at a classical diffusive model with logistic dynamics and a Dirichlet condition on the boundary — which means if an aphid wanders out onto the lava, it's gone, which is plausible for this particular system — then you get an increase in population density with patch size, and eventually it goes asymptotically to r — to K, rather — as the patch gets sufficiently large. So that's the background question. So, continuing from there, back out to the lava field: for the system, we thought of having a large region with a bunch of patches in it. The patches would each contain a population of aphids, and ladybugs would be flying around looking for aphids to eat. So, after a little while, the ladybugs will settle. And so the way the model works is, for the ladybugs, you have a total ladybug population, and some of them are in the air, some of them are on patches, as the dynamics progress — I'll show the ladybug dynamics in a minute. But within each patch, the aphids are being preyed upon by the ladybugs. And in the mass action term, we're taking the number of ladybugs over patch area to give the density. And then we've got the number of aphids, which will be some function of the spatial variable inside the patch. For the ladybugs, we have an immigration-emigration model; we're thinking of this from the viewpoint of, maybe, MacArthur and Wilson. So these are islands, and the aphids are on the islands, and the ladybugs are colonizing them or leaving them. So we're looking at two different kinds of movement process. And here's where the separation of time scales comes in — this should actually be, in the paragraph there, a tau. We're thinking of a fast time scale for ladybugs because they're flying around and landing; they're just moving.
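A quick numerical aside on the background question mentioned above — the diffusive logistic model on a single patch with absorbing (Dirichlet) boundaries. The scheme and parameter values are illustrative only; the mean density is essentially zero below a critical patch size and rises toward K as the patch grows.

import numpy as np

def mean_equilibrium_density(L, D=1.0, r=1.0, K=1.0, dt=1e-3, T=100.0):
    # explicit finite differences for u_t = D u_xx + r u (1 - u/K) on [0, L]
    nx = int(round(L / 0.1)) + 1
    x = np.linspace(0, L, nx)
    dx = x[1] - x[0]
    u = np.full(nx, 0.1 * K)
    u[0] = u[-1] = 0.0                    # Dirichlet boundaries: the lava is lethal
    for _ in range(int(T / dt)):
        lap = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u[1:-1] += dt * (D * lap + r * u[1:-1] * (1 - u[1:-1] / K))
    return u.mean()

for L in (2.0, 4.0, 8.0, 16.0):
    print(L, round(mean_equilibrium_density(L), 3))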
And a slow scale because the aphids are diffusing and reproducing, going through their several generations per summer. And so we get these two separated equations. What we do is then let the fast time scale run to infinity to give you an equilibrium. And so we can actually write down — I deleted the stars in the calculation — but with a little bit of algebra you can actually figure out how many ladybugs you should have on each patch. There will still be a residual population in the air, but we don't need to worry about that, because we can get that once we know what the overall distribution on the patches is. So the mechanics of the model are really in the choice of the immigration and emigration rates. We thought that larger patches should attract more ladybugs, and that patches that had aphids on them should cause ladybugs to leave more slowly. So the immigration term goes up with patch area — that's the immigration rate as a function of area — and the emigration rate goes down with patch area and also possibly with aphid density. And here's another simplifying assumption: we averaged the aphid density across the patch, so that we got an expression that just depends on t; we didn't have to worry about the fact that we had something spatially varying inside the patch. We're just looking at the overall density — it's an approximation. When we do that, we get possibly aggregation — or, if you don't want to think of it as aggregation because it's just passive in terms of the geometry of the patches, maybe concentration. If you have higher prey densities on a patch, you'll get fewer ladybugs leaving, and so that will concentrate the ladybugs on that patch. And we can actually write down what the number of ladybugs should be, because we've got the equilibrium there. Okay, so now back to the aphids on the slow time scale. Remember, A_n is patch area and V_n is the number of ladybugs on patch n. So we're going to look at what the aphid equation does. We substitute the equilibrium at the fast time scale for the ladybugs into the aphid equation, and we get the equation at the bottom. Now, if that doesn't look too wonderful — it's a little complicated — there are a couple of important features. One is that the V_n terms that describe the ladybug populations are constant in x, and that's a big simplification for the analysis. The other thing is that this is a cooperative system: if you have more aphids on patch n, then the total population goes up, and the predation term goes down for any patch m different from n. That gives you a cooperative system, which means you can use monotone dynamical systems theory and you know what's going to happen — it's going to behave well. So in particular, in this setup, we can do some math and say that, under appropriate conditions on the coefficients, we're going to get a nice positive equilibrium — or we might get zero if the diffusion rate's too high or the growth rate's too low. But the dynamics will now be relatively simple; they're easy enough to understand. So, going to the application, we wanted to get area effects for aphid populations. What we wind up with there is a single reaction-diffusion equation with constant coefficients and a Dirichlet boundary condition, which has been well studied.
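A toy sketch of the fast-time-scale ladybug equilibrium just described. The functional forms are hypothetical (the talk does not give them): immigration to a patch increases with area, emigration decreases with area and with mean aphid density, and at equilibrium the ladybugs on a patch are taken proportional to immigration over emigration, rescaled so that a fixed total pool is allocated (the residual airborne population is ignored here for simplicity).

import numpy as np

def ladybug_allocation(areas, aphid_density, total_ladybugs, a=1.0, b=0.5, c=1.0):
    areas = np.asarray(areas, dtype=float)
    aphids = np.asarray(aphid_density, dtype=float)
    immigration = a * areas                                # I(A_n): attraction grows with area
    emigration = c / (1.0 + areas + b * aphids)            # leave more slowly on big, aphid-rich patches
    weights = immigration / emigration
    return total_ladybugs * weights / weights.sum()        # allocate the fixed pool across patches

areas = np.array([1.0, 2.0, 4.0])
aphids = np.array([5.0, 1.0, 0.5])
print(ladybug_allocation(areas, aphids, total_ladybugs=100))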
In fact, that's the equation where I got started in this business, so we've been thinking about it for a few decades by now. It's pretty well understood — it's a PDE, but we know what's going to happen. It turns out, after a couple of pages of math which I'm not going to show you, that in certain cases, if you take that patch network and you assume that you have a larger scale — so there's another spatial scale there — such that if the large region containing all the patches increases, there are more ladybugs in that larger region looking around. So if you had five patches, you'd have a certain number; if you have 20 patches, that's going to encompass a larger area; or if you have one larger patch, that may also encompass a larger area, just in terms of what's available. So that's an assumption: the area gets bigger, you get more ladybugs interested. But if you have that, then you can actually go back into the equations and see what's happening to the density of the aphids on the patch. If you pick out one patch and let it grow, what you get is a plot of aphid density versus area that actually goes up for a while and then comes back down. So that captures the notion that if you're on a larger patch, you might have fewer aphids, and it all has to do with the spatial dynamics of the ladybug population. And you could probably do this by keeping everything on the same time scale, but (a) that would not be accurate biologically, and (b) it would be incredibly harder. So in this case it's fortuitous that we actually have separate time scales, which you might think is a complicating factor. It's also fortuitous that we have patchy aphids and continuous ladybugs: the aphids are living on patches, but the ladybugs are flying around — they see all the patches and then they land. So the point I'm trying to make is that the fact that we have different spatial structures and different time scales was, at first glance, a little off-putting — it was scary. But by judiciously picking scales, taking limits, and so on, we were able to get the model down to something reasonably tractable. So this is why it was a convenient complexity: both of those features allowed us to get simplifications and allowed us to get what we wanted. So anyway, that's part one — those are the aphids and ladybugs. Part two: evolution of dispersal. This one is newer. So, in the kind of dispersal models that I think about, I'm interested in seeing how the way organisms disperse influences their population dynamics and evolution — and so I wanted to look at the evolution of dispersal. Now, this is motivated by thinking about medium-to-large mammals in terms of the exact time scales, but if you have a different kind of organism, you might have different specific scales — something that's slow for a moose might be fast for a fly. But you'd typically still have this hierarchy of scales and questions. So individuals do random walks step by step, and on a longer time scale that leads to populations moving around — you get population-level dispersal. Population interactions then occur, things reproduce, and so on, and then eventually, perhaps, there is evolution. And so I'm thinking about something like deer or wolves or something like that — those kinds of large mammals.
These things are really typically going to be happening at different scales. Things will reproduce once a year, move around quite a bit over the course of a few hours or days, and evolve on a longer time scale — maybe much longer. But we can take this stuff apart bit by bit. Here I'm keeping probability in view — I'm not going to do anything with it, but I'm going to look at it. So, thinking in a really crude way about random walks: you can take a really simple discrete random walk on a delta-x spatial scale and a delta-t time scale, and do a diffusion limit to get some type of diffusion equation describing dispersal. So the first thing we do is scale delta x and delta t by the standard diffusion limit, and we go from our random walk to a diffusion equation. But the nature of the random walk at the delta-x scale determines the nature of the diffusion that we get at the scale of x. So if the movement is completely independent of position at the delta-x scale, then we get a normal Fick's-law type of diffusion. If movement at the delta-x scale depends on the departure point — that is, the probability of moving depends somehow on the place that you're leaving — you get another form, what has been called ecological diffusion. That's also the form that you get if you go from an SDE to something like a Fokker-Planck equation, so it's got the same structure. That one's probably my favorite, but I'll use whatever I get. And the last one is conditioning on the arrival point, and that's messier, but it still gives you a reasonable kind of diffusion equation. So these are all possibilities. The key point, though, is not the details but just the fact that the nature of the diffusion process that you get depends on the nature of what's happening at that micro level. Okay, so we take the limit, and that gives us a population model that can go in different ways. And these different types of diffusion give you different things. If you have a closed environment and you look at the equilibrium distribution, which you'll get with a no-flux boundary condition: if it's Fick's law, you're going to get a constant. If it's either of the other two types of diffusion, you get something that's related to the spatial variable through the diffusion coefficient — or motility coefficient is probably the correct terminology for case one; I don't know what the right word is for case two — but whatever it is, you can get an asymptotic form that varies with x. That's important for what we're going to be doing next, so distinguishing these different possibilities will be important for the next little piece of what we want to do. So what happens at the base scale determines the patterns you can get at the next one. And so — can I get to the next one? Okay, so now I'm at the population level, and I'm going to look at a movement model. In the classical model from the reaction-diffusion viewpoint, you've got some kind of movement and you've got some kind of population dynamics, and they happen at the same scale — everything's happening on the same scale. So, the last time I was here, I was doing things like that, and Otto Dieckmann caught me and said, these things don't happen at the same scale — which was the source of the work that we're doing.
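A toy illustration of the point above that the microscopic movement rule sets the macroscopic equilibrium: two random walks on a closed one-dimensional lattice, one position-independent and one whose probability of moving at all depends on a motility mu(x) at the departure site. The first equilibrates to a roughly constant distribution; the second piles up where motility is low (approximately proportional to 1/mu). Lattice size, motility profile, and run length are arbitrary choices, and the runs are only approximately equilibrated.

import numpy as np

n_sites, n_walkers, n_steps = 30, 10_000, 10_000
rng = np.random.default_rng(5)
mu = 0.2 + 0.8 * np.linspace(0, 1, n_sites) ** 2    # low motility on the left

def simulate(departure_dependent):
    pos = rng.integers(0, n_sites, n_walkers)
    for _ in range(n_steps):
        move_prob = mu[pos] if departure_dependent else 0.5
        moving = rng.random(n_walkers) < move_prob
        step = rng.choice([-1, 1], n_walkers)
        # reflecting (no-flux) boundaries via clipping
        pos = np.clip(pos + moving * step, 0, n_sites - 1)
    return np.bincount(pos, minlength=n_sites) / n_walkers

fickian_like = simulate(departure_dependent=False)   # roughly uniform
ecological = simulate(departure_dependent=True)      # concentrates where mu is small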
And so, if we look at the time scales, that suggests that we let the dispersal process equilibrate, which, because we've got that no-flux boundary condition, will give us some kind of distribution. Basically, it'll give you a distribution of where you expect the population to be after a long time on the dispersal time scale, and that goes to equilibrium. And it turns out that this type of differential operator has the same property as an M-matrix, or whatever terminology you want — a matrix with positive, or at least non-negative, off-diagonal terms. So there's an infinite-dimensional version of the Perron-Frobenius theorem that tells you you've got a principal eigenvalue, you've got a positive eigenfunction, and all the other eigenvalues are smaller. And in fact, in this case, it turns out that the principal eigenvalue will always be zero, because you have no flux and things are just moving around, so there's no change in the total population from dispersal. That's an assumption — that dispersal doesn't have any energetic or other kinds of costs — but it's a popular assumption. Okay, so now what we're going to do is look for a form where we can separate. If we do separate, we think we should get a form — here, yes, here it is — you should go to something that looks like that eigenfunction times something that might be varying now on the population-dynamical time scale. So we're up to population dynamics. So we look at what's going on there — same thing as on the last slide, just continuing. Take the ansatz and calculate a bit: write down the equation, plug it in, use the fact that we've got an eigenfunction to get rid of that term, integrate, and we get something that looks like that. So that's our logistic equation for a single population. And I'm taking a very simple one: m is supposed to be the local growth rate, and I'm taking that particular form — if you factor out m, you see that m equals r equals K. So there's a simplifying assumption that everything's perfectly correlated in growth and so on. Usually it won't be that, but it simplifies things. All this stuff works in a similar way if you have a full logistic equation, but it's messier; just for simplicity we chose that, and you could do more complicated things. As an aside: in this situation, if you look at the average across space of your population, you can write down an equation for that, and the equation that you get involves the average of your growth rate slash carrying capacity plus a covariance term. And this type of relation is the jumping-off point for Peter Chesson's scale transition theory and related ideas. So you can get scale transition theory out of this as a byproduct at this particular level of the scalings. That's just an aside, but the point I want to make is that going through this process of working your way up from the micro scale to the macro scale, or from the short time scale to the long one, spins off interesting stopping points. You could stop here and rest and do some analysis on this kind of situation, should you wish to do so. But we're going to keep on going — now to population interactions. So we're going to do the same thing now with Lotka-Volterra. This actually raises an interesting question.
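A numerical illustration of the aside above: for a spatially varying growth rate m(x) and density u(x), the spatial average of m(x)·u(x) equals mean(m)·mean(u) plus a covariance term — the identity behind the scale-transition remark. The profiles below are arbitrary examples.

import numpy as np

x = np.linspace(0, 1, 200)
m = 1.0 + 0.5 * np.sin(2 * np.pi * x)          # spatially varying growth rate
u = 0.8 + 0.4 * np.sin(2 * np.pi * x)          # density correlated with m

lhs = np.mean(m * u)
rhs = np.mean(m) * np.mean(u) + np.mean((m - m.mean()) * (u - u.mean()))
print(lhs, rhs)   # equal: mean(m*u) = mean(m)*mean(u) + cov(m, u)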
If things are not Lotka-Volterra — which I'd come back to in a minute, but I'm using up more time than I've got, so I'll be done soon. So, same trick: we've now got a system with two competitors, Lotka-Volterra. What this does is reduce a system of PDEs to a system of ODEs, but there is still spatial structure remaining, because these phis are eigenfunctions — they're carrying the spatial distributions of your populations. And so if they're separated — if one's big over here and the other one's big over there — then that interaction term is going to be small. So you can get the idea of coexistence mediated by spatial segregation out of this, and you don't need any PDEs to do it at this level. You need PDEs to get you to phi one and phi two, but then you can do things with ODEs; in the single-time-scale case you would have to solve PDEs to get this. So that's kind of nice. Now we go to evolution, and we're going to do adaptive dynamics on a longer time scale — the subject of adaptive dynamics; many people here know the idea, and we'll see it. So we write down the equations for invasion — pairwise invasion analysis. And since it's ODEs, we can write down the invasion coefficient algebraically and decide if it's positive or not. There's a limitation: you can't distinguish between fast diffusion and slow diffusion, because it's all fast — that's the lower time scale, so we scale that out. But you can distinguish between diffusion and diffusion with advection, because those give you different spatial profiles. So we go through and calculate. Just to comment: it turns out that sometimes, if you choose your coefficients right in the PDE, you can exactly match the resource profile. This gives you what's known as an ideal free distribution, which is reputed to be a good thing to have — and in this setup, as in many others, it turns out that it is. And so now I can actually verify that, indeed, if I write down the dynamics of something that is using the ideal free distribution and something that is not, I get that picture. The one on this axis is the one who's using the ideal free distribution, and you see you get, interestingly, neutral stability at that equilibrium, but you can still show quite easily, just by calculating — there we go — okay, I'll stop here, because I'm out of time. Anyway, here's what it is: you can figure out what's going on in this just by using Hölder's inequality on this business here; you get the positivity. Now, you can get that in the single-time-scale case too, but it takes a few pages of nasty PDE computations to do it, so the phase plane is nicer. And one last comment is just that, in both of these cases, you could add an additional time scale to sit on top of that. In the evolution case, you could put in a larger time scale where, because of climate change or something like that, the coefficients in your interaction or your population growth change at a much larger, slower scale. Or in the aphids and ladybugs, you could run from year to year — in fact, we did something running from year to year: I calculated some energy budgets to compare foraging strategies. So you could do comparisons of foraging strategies for the ladybugs out of that one too, should you wish to do so. Anyway, one last plug for Banff: the first place where these equations were written down, at the slowest time scale of ODEs, was on the airplane coming home from a previous Banff meeting. Thank you.
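A toy sketch of the kind of reduction described in the talk above: two competitors whose spatial profiles phi1 and phi2 enter the reduced ODEs only through overlap integrals. The precise reduced equations are not shown in the transcript, so this uses a generic Lotka-Volterra form in which cross-competition is proportional to the overlap of the two eigenfunctions; strongly segregated profiles give a small overlap and make coexistence easy. All functional forms and parameters are assumptions.

import numpy as np

x = np.linspace(0, 1, 400)
phi1 = np.exp(-((x - 0.25) / 0.15) ** 2)   # species 1 concentrated on the left
phi2 = np.exp(-((x - 0.75) / 0.15) ** 2)   # species 2 concentrated on the right
phi1 /= phi1.mean()                        # normalise so each integrates to ~1 on [0, 1]
phi2 /= phi2.mean()

overlap = np.mean(phi1 * phi2)             # small when the profiles are segregated
self1, self2 = np.mean(phi1 ** 2), np.mean(phi2 ** 2)

def simulate(T=200.0, dt=0.01, r=1.0):
    c1 = c2 = 0.1
    for _ in range(int(T / dt)):
        dc1 = r * c1 * (1 - self1 * c1 - overlap * c2)
        dc2 = r * c2 * (1 - self2 * c2 - overlap * c1)
        c1, c2 = c1 + dt * dc1, c2 + dt * dc2
    return c1, c2

print(simulate())   # both components positive: coexistence via spatial segregation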
Thank you. (Applause.) Well, it was either completely useless or really confusing. So, did I get that right? For the adaptive dynamics part — it went a little fast — is it that, where you have these strategies that are essentially giving you an ideal free distribution at the right time scale — That's right. — those are the ones that are interesting, that are evolutionarily stable, right? That's right. But is that in a weak sense? In other words, there's a whole collection of these things; you could be doing a random walk in the space of strategies that all give you an ideal free distribution. That's right. So the ideal free strategies are neutrally stable with respect to each other. So you could have sort of neutral coexistence of things that produce it in different ways — you get ideal free coalitions. So yes, that is a feature. And is it also neutrally stable to other strategies — although I guess with diffusion it won't be — so it won't truly be an ESS? Right: relative to other strategies that do not produce an ideal free distribution. So if some strategy matches the resource profile exactly, then you get that neutral stability with any other one that does that. But if you don't — in fact, that actually comes out of Hölder's inequality. What this is: if you had equality there, you'd be saying you have another strategy that produces an ideal free distribution. If you don't, Hölder's inequality says you lose. Right. So with your three cases of spatially dependent diffusion — do you have, maybe, some examples where each one would be appropriate? You have these three cases of diffusion, and you said it depends on the microscopic behavior, so I was wondering if you have cases in which a particular microscopic description would be the appropriate one. I haven't really tried to — that's fairly high-level theoretical stuff — I haven't tried to find specific cases. However, I would say that if you think about it, simpler organisms that may not be paying too much attention to the environment, or may not be able to sense what's going on in the environment, will quite possibly do an ordinary diffusion. Those that can tell how their environment behaves may do something different. There are examples that my friend Bill Fagan has looked at from animal tracks: they collar gazelles and watch what happens, and those guys do seem to track resources very effectively. The situation there is a little bit different, though, because they're doing essentially an arrival-point process: they can see rain in the distance, and they go where the rain was, because that's where the grass will be. And so they're doing arrival-point diffusion. Plain ordinary diffusion seems to happen with both them and other organisms — there are quite a few data sets where, at large spatial scales, things are doing more or less pure diffusion with a fast rate, because they're searching. So if you have a typical track of an organism, what you get might be something like this, where this part is where they found some resource. So what organisms actually do is quite a bit more complicated; they may use some mixture of these things. But if you have a situation where there's no resource patch around, you'll have something diffusing.
If you get close to the resource patch, according to the statistics that Bill Fagan and his lab ran, you get something that looks like an Ornstein-Uhlenbeck process with a center point somewhere in the patch. So that one looks somewhat like a departure-point process in terms of the structure of the model you get — or you can also get that by thinking about it in terms of advection and diffusion combined. So I guess the short answer is: if you start with Mongolian gazelles, you can see these different dispersal strategies being used — but, I'm cheating, they don't use a single one; they switch. And this has not been investigated much, but that's yet another time scale to put into the mix: what's the rate of switching between strategies that these guys use, and can they get anything close to ideal free by doing that? But that's a whole other talk. So I'm wondering what the relation is between the way you build things up in your second example, your second case, and somewhat more classical homogenization theory for reaction-diffusion equations, where you build these time scales into your original model, but then when you do the expansion you also realize that, okay, first you have to solve an equation that contains only the movement part, and then you plug it into higher-order things. Can you comment a little bit? Okay. So these are different mechanisms of doing spatial averaging, I think. In homogenization theory, typically you've got some kind of patchy structure at some scale, and then you're going to exploit that patchy structure and think about things moving between patches, if you will, and take a limit of that. And that's another — my apologies, I should have mentioned this, but I was feeling pressed for time, so thank you for asking — Frithjof and friends have recently been doing a lot of stuff where you've got active behavior at patch interfaces. So you have different habitat types, and things may move in different ways across the interface. And that's another place where this kind of thing could be done, but if you're in that setup, you want to use homogenization, because you're thinking you've got a truly patchy environment and you're averaging that way. What we're doing is we're thinking we have a truly continuous environment — there's no patchiness in it at all — and we're averaging by doing this PDE solution. So you'd be doing the same thing in different ways. If you have that patchy structure, then that's another route, possibly a similar route, to get to this point, because you don't have to solve the original PDE. But you do have to have that sort of repeatable patchy structure to make that one work. So they have sort of different realms of applicability, depending upon what you think is happening at what I call the x scale in space — they reflect different types of models for different kinds of spatial structure. That's an important thing too; that's another thing to keep in mind if you're doing spatial structure: what is the nature of the structural variability in your environment, patchy versus not patchy? Some things are good tricks in one, others are better tricks in the other. I'm not sure it's a complete answer, but that's what I can tell you briefly. In your second example, you separate the time scale of the diffusion and the population dynamics, assuming that the diffusion is fast and the population dynamics are slow.
And I think it works only if your patch is not too large, because otherwise it takes a long time to diffuse from one end of the patch to the other. So actually there can be pretty strict restrictions on the size of the area you're looking at. But is there a way to work around this — can you say, okay, I can ignore the next higher eigenvalues of this? Well, you can certainly model all of this stuff and get these kinds of results if you assume that you have dispersal and population dynamics going on at the same time scales; that's not a problem — you just leave it in the reaction-diffusion-advection form. If you go back to the original reaction-diffusion-advection form, you've got something like u_t = D Δu + (m(x) − u)u. Now you've got diffusion happening on the same time scale as the population dynamics, and it's quite possible to do all of this type of analysis that I described just working with equations of this type — in fact, that's most of what I've done for the past 30 or so years; I've got a book about this with Steve Cantrell. So the short answer is, sometimes you might want this stuff on one time scale, but it's less convenient, because now if I want to understand what's going on I have to solve PDEs and do complicated analysis — which is great for writing papers you can put in math journals, but it's effortful, and it may be more effortful than it needs to be for some systems. So the point of looking at these scale separations is that initially it looks like it might complicate matters, but indeed, in these particular cases where you can do it, it may simplify matters. But the results are different, right? The results are different. No, the results are the same. In other words, if I have a dispersal operator here, and I have a dispersal operator there, and I have competition in the same form, with m equal to m of x, and I've still got no flux at the boundary of the environment: if the differential operator here allows my population, using that dispersal strategy, to produce an ideal free distribution, then that strategy is evolutionarily stable relative to other strategies that do not. So if this one does that and this one's pure diffusion, then you win. That result actually holds true also if I look at a network and I've got — whatever you want to call it — discrete diffusion, or if I've got an integro-differential system, so these are nonlocal operators; and it's true in the case where you have integrodifference models, but there are more restrictions on what's going on, and that's not written up yet. That one, I'm still working out the details with Joy Xu, but every system that we've looked at from this adaptive dynamics viewpoint seems to have the property that ideal free is going to be evolutionarily stable with respect to other strategies that do not produce ideal free. So this is the older stuff. The key point with the newer stuff is that you only have to solve a fairly simple PDE — the one that tells you what spatial distribution is being produced — to get information about what's going on, and it works at appropriate spatial scales with an appropriate separation of time scales. You always have to worry about the ecological modeling: am I modeling my system at the appropriate kind of spatial scale for that system? Is the model I'm using appropriate for the system I'm looking at and the question I'm trying to answer? Different models, different systems, different questions.
I'm going to allow one more question since we started late; Mark has ended up first up next, but he can at least get set up in the meantime. So, David, at the beginning you get this interesting result that as the patch area gets larger, the total density seems to go down. Yeah, not necessarily, but possibly if everything is right. I didn't get a clear intuitive explanation, you went through the mathematics, but is there a one-liner you can give that describes this model? Yeah, if you looked at a situation with no ladybugs, or if you just looked at patch area, what happens is, as the patch gets larger, if it had no ladybugs, no aphids on it, it would be more attractive to the ladybugs. So what you would expect would be, if the aphids build up their density, which they can do pretty quickly, on patches, then initially you'll start with densities that are high on the big patch, they do their business, but then that attracts the ladybugs, and the ladybugs eat them down more than they would if they were on a smaller patch. So the mechanism there would be aggregation of the ladybugs on either larger patches or on larger concentrations of aphids. So predator aggregation is the underlying ecological mechanism that's doing that. Thank you.
|
One of the features that distinguishes biological systems is the wide range of scales in time and space on which processes and interactions occur. This is a form of complexity, but it is one that can sometimes be turned into an advantage. I will describe models for a couple of systems where my collaborators and I have found that to be the case. The first (from long ago) is a system with ladybugs preying on aphids. The ladybugs (which are highly mobile but reproduce slowly) experience the environment as a system of patches, while the aphids (which are much less mobile but reproduce quickly) experience each patch as a spatial continuum. The second (more recent) is a system aimed at describing the evolution of dispersal. Dispersal starts with the movement of individuals, which can be observed by tracks or tracking and described in terms of random walks. That then produces spatial patterns, which then influence ecological interactions within and among populations. Those in turn exert selective pressure on traits that determine the spatial patterns, and finally the selective pressure together with the occasional appearance of mutants results in the evolution of dispersal traits. All of these processes can, in some cases, operate on different scales in time and space. It turns out that when this occurs it can be exploited to produce relatively simple models in some situations. The older research I will discuss was conducted in collaboration with Steve Cantrell; the newer was with Steve Cantrell, Mark Lewis, and Yuan Lou.
|
10.5446/57610 (DOI)
|
And I want to thank all of you who made these discussions that we've had over the last number of days so interesting and so inspiring. And I hope something similar will happen with my talk as well, of course. So, I decided to talk about a little pet project that I have, and I like to say it's in progress; sometimes it is in progress and sometimes it's resting. But what I'm hoping is that from your feedback I will be able to better gauge whether it's interesting, and for which community it's interesting, and so on. So, I realized way too late that the title is way too general, so let me tell you a little bit more about what I want to do. We have natural and human disturbances in ecosystems, and the kind of transience I want to talk about is the question of how the system responds to a one-time or single disturbance, a single perturbation. And that could be natural or it could be human-caused. So that's the question: a single perturbation. And so a more mathematical picture of this is: here is the time when a perturbation happens, and then, in the simplest case, if I have a steady-state community, I want to trace the distance from the steady state and how it decays back to that steady state over time. We know it's stable, I assume it's stable, so it has to decay; the question is how it does that. So, in a simple two-dimensional system, in this case a stable node, you have two eigendirections with negative eigenvalues, and you get, in this case let's say, a fast and a not-so-fast decline. But the system could do other things. If it doesn't start on the eigenvectors, right, it could first have a fast transition to one of the eigenvectors and then slowly decay along that one. And in terms of distance this could mean that you're first going away from the steady state before eventually approaching it. And so there are questions like: how, in general, does an equilibrium community respond? For example, how long will it take to return? That's an old question. And how far from equilibrium will it get along the way? That's also an old question, but not quite as old. There are many other possibilities; for example, if you have a stable spiral or focus, then the distance from the steady state could look something like this, or if you start somewhere in between, it could also first decay and then increase and then eventually decay. All of these things are possible. And as I said, the eventual decay rate is something that people have studied for a long time; the initial growth or decline, this part near zero, that's the part that I want to focus on in this talk. And then the maximal possible deviation, something like this here, is nice if you can get it, but typically much harder to do. So the concept that I'm mostly going to talk about is this reactivity. That's old for steady states in ODEs; it was popularized by Neubert and Caswell in a paper in '97, and that paper is very well cited. They have extended it to steady states of maps, finite-dimensional maps, and I will use some of that later. And an interesting fact is that reactivity is a necessary condition for diffusion-driven instabilities in reaction-diffusion equations. And that's actually kind of how I got interested in this, because with a former postdoc we had studied diffusion-driven instabilities in impulsive reaction-diffusion equations. 
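A quick numerical illustration of the statement just made, that reactivity is necessary for diffusion-driven instability; the 2-by-2 reaction matrix and the diffusion coefficients below are hypothetical choices of mine, not an example from the talk. The reaction part is stable, it is reactive in the L2 sense, and adding unequal diffusion produces a positive growth rate for some spatial mode:

import numpy as np

# Hypothetical stable activator-inhibitor reaction Jacobian and diagonal diffusion matrix.
A = np.array([[1.0, -2.0],
              [3.0, -4.0]])
D = np.diag([1.0, 20.0])

print("reaction eigenvalues:", np.linalg.eigvals(A))               # both have negative real part

# Reactivity in the L2 norm: largest eigenvalue of the symmetric part (A + A^T)/2.
print("reactivity:", np.max(np.linalg.eigvalsh((A + A.T) / 2.0)))  # positive, so reactive

# Dispersion relation: the growth rate of the spatial mode with wavenumber k is the
# largest real part of the eigenvalues of A - k^2 D.
ks = np.linspace(0.0, 2.0, 400)
growth = [np.max(np.linalg.eigvals(A - k**2 * D).real) for k in ks]
print("max growth rate over k:", max(growth))                      # positive: Turing instability

In the two-species case a Turing instability needs a positive diagonal entry in A, and a stable matrix with a positive diagonal entry is automatically reactive, which is one way to see the necessary-condition statement.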
And so the obvious question was okay is something similar true for impulsive equations is there another kind of necessary condition of reactivity in impulsive equation that would be well necessary for diffusion and diffusion instability. And so we got interested in what that is but if you have a system that is impulsive you don't have a steady state anymore which you linearize. So you need to look at some other states impulsive periodic orbits. And so we needed somehow a theory of reactivity for non-equilibrium dynamics. And then we found this paper by Vesipa and Rudolfi reactivity for periodically forced models and as I was reading this I thought there's something missing. And so after introducing you some of this stuff I want to introduce you to their ideas and show you some new results and then obviously the next question is what if you have not externally generated cycles but internally generated cycles can you do something similar like reactivity. And these are really just a few thoughts I haven't gotten too far on those. These kind of sneaked in from yesterday. This is about 300 meters below the summit of Montgrandeau. This is Mount Assinibillion that we saw in the distance. Here is Sulphur Mountain and down there is Ban. So what is the mathematical setup? We have an ODE in some finite dimensional space. We have a steady state. We linearize at the steady state. We got a linear equation. The linear equation variables are always denoted by y in the talk and we assume that this state is stable so the real part of the dominant eigenvalue is negative and in particular all the real parts of all eigenvalues are negative. And then resilience measures the return rate to steady state so that's old and that's the negative of that dominant eigenvalue of the real part that tells you in the long time how fast or how slow our solutions decay to that. That's kind of time goes to infinity. Reactivity is the part at the very beginning. So you take your deviation, your initial perturbation from the steady state and you ask in some kind of a norm what does that do over time at time equals zero? What is the initial slope of that? Because it's a linear equation you normalize by the norm. We can start. Sorry, did you want to record? It is. I have turned it on. And you call this thing and then you maximize over all possible deviations. And so this is what Nuber and Caswell called the reactivity and a steady state, stable steady state is reactive. If this number here is positive so if the initial increase, if that's initial increase and it's not reactive, if all solutions initially decay back towards zero. Now this is only the initial part. If you want to do this for all times you take what they call the amplification envelope. So you have the maximum after deviation from steady state again normalized. And what I want to point out here is that this maximum is taken for every single T. There is not one initial condition that gives you that maximum for all T. For any given T it might be a different initial condition. So let me illustrate that. Here is my linear equation and here I'm plotting the solution, normalized solution over time. So a non-amplifying solution would just go down. Monotonically approach the steady state. An amplifying solution are the red one and the orange one. And so the red one is the one that gives me the steepest slope at the beginning. That's the one that gives me the reactivity. And the orange one gets slower here but higher there. 
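A minimal numerical sketch of the three quantities being introduced here (resilience, reactivity, amplification envelope), using the L2 norm; the 2-by-2 Jacobian is an illustrative assumption of mine, not one of the speaker's examples:

import numpy as np
from scipy.linalg import expm

# Hypothetical stable but non-normal Jacobian of the linearized system y' = A y.
A = np.array([[-1.0, 0.0],
              [ 5.0, -2.0]])

# Resilience: negative of the largest real part of the eigenvalues (asymptotic return rate).
resilience = -np.max(np.linalg.eigvals(A).real)

# Reactivity (L2 norm): largest eigenvalue of the symmetric part (A + A^T)/2,
# the worst-case instantaneous growth rate of ||y(t)|| at t = 0.
reactivity = np.max(np.linalg.eigvalsh((A + A.T) / 2.0))

# Amplification envelope: rho(t) = max over unit-norm initial conditions of ||y(t)||,
# which equals the spectral norm of the matrix exponential e^{A t}.
ts = np.linspace(0.0, 6.0, 121)
rho = np.array([np.linalg.norm(expm(A * t), 2) for t in ts])

print("resilience       :", resilience)    # 1.0
print("reactivity       :", reactivity)    # positive: some perturbations grow at first
print("max amplification:", rho.max())     # size of the largest transient excursion

The maximizing initial condition for rho(t) is the leading right singular vector of e^{A t}, which is generally a different direction for each t, matching the point that no single perturbation realizes the whole envelope.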
And so the amplification envelope is the point wise max of all these things. Turns out if you plot the logarithm of the amplification envelope then the reactivity and the resilience are just given as the slopes at zero and that is infinity. So if you could get that curve you'd know everything but also often that's hard to get in any other way than computation. So few observations. Scalar equations cannot be reactive. In scalar equation you only have a single eigendirection, single eigenvalue. It's either negative or not and if it's negative then that's it. More interesting, reactivity and resilience have no obvious correlation. And I mean by that is you can have a system that has certain reactivity and certain resilience and you change a parameter and let's say the decay rate to infinity becomes steeper so it's more resilient. That doesn't tell you anything about the reactivity. It could be more or less reactive. And that's something that Nuber and Caswell in their original papers say that this is something that surprises them still after they've been thinking about this for a while. It depends on the chosen norm. That's important. In particular it depends on scaling because if you first scale and then apply some norm that's different. So this is one of the cases where you cannot advocate for non-dimensionalization. You have to actually calculate reactivity if you want to know it for the full parameterized model and not non-dimensionalize. Can you just do a different norm? You could. Careful. If you do non-dimensionalize you have to adjust the norm accordingly. So that's the point. If we take the L2 norm then much of the theory becomes really nice because then the reactivity can be calculated as the dominant eigenvalue of a symmetrized version of this matrix. If you take other things then this is not true and then we don't know of a way of easily calculating the reactivity other than just explicitly writing it down. In that case you also can actually write down the amplification envelope as the matrix norm of this. So for ODE's all of these things can easily be calculated with any kind of numerical package that does that. Okay. Here are two simple examples. Predator-Prey-Lotca-Valterra type or Predator-Prey-Leslie-May type model. In both cases if you get a positive steady state you linearize then you get a 2 by 2 matrix and then from this linear formula here you can calculate that a steady state is reactive if and only if this condition between the matrix entries holds. So what you see in these Lotca-Valterra type models because of the linearity of the predator equation you always get a zero down here at the steady state. And if you have a zero down here then this is almost always satisfied except for when A2 is equal to minus A3 so there's a very thin line where the model is not reactive but otherwise it is definitely reactive. In the Leslie model that's not always the case for some parameters you can have a fairly large region and parameter space that's non-reactive. So that's just to get a little bit of feeling for it. Obviously Predator-Prey models or Predator-Prey relationships are the one that can get you reactivity. Competition typically doesn't but it's hard to make a really general statement. Okay what happens beyond equilibrium? So for example you might have an externally forced system seasonal let's say or you might have internally generated Predator-Prey cycles. What happens to reactivity? 
And one of the questions that I'm thinking about in this context is is there a good time to perturb an oscillating system? Like if you know you have to somehow do something to that system or if there is culling or whatever there is harvesting is it better to do that in the winter, in the spring? Sometimes you can't do it in any season that you want but if you can is there a better or worse time? So it turns out the seasonally forced systems are much easier to deal with. So now same setup except for our vector field is now time dependent and it's periodically time dependent. Then we can think of we can help to get a t periodic orbit. We can linearize it that we get the t periodic linear system and we assume that this periodic orbit is stable. And we can calculate stability at least abstractly via the Poincare map and that's independent of the choice where we started our linearization. So this is where the paper by Vesipa and Rudolfi comes in. They studied these things and so here, hard to see I guess, the black lines here correspond to a periodic orbit and the red curves correspond to perturbations. So in one case there's a perturbation and it seems to get closer to the periodic orbit very quickly. Here is the face plane of that, the periodic orbit is the circle and these are various initial conditions and you see they all go close to the periodic orbit right away. For different parameters set here's the periodic orbit, here's the initial perturbation and you see it goes very far away before eventually it approaches that periodic orbit and in the face plane that looks like this that you see these very far in the altunorm, yeah, altunorm very far away and then eventually going to here. So the phenomenon is there. How do we capture it? These guys defined the reactivity and the amplification envelope just as before so you take the linearized, the solution of the linearized equation, you look at how far it goes from equilibrium or from the periodic orbit I should say at your initial time and that's your reactivity. Similarly the amplification envelope and what's really quite amazing in this paper is the kind of geometric insight that they give about when a system is reactive and when it is. It's a beautifully written paper and then the second big part that they do is they write a numerical scheme for how to calculate this amplification envelope because this is much harder now. When I read that paper I thought they're missing something and what I think they're missing is the dependence on the initial condition here. So does it matter where you break up your periodic orbit and where you perturb that? And I cannot see in the discussion of their paper any indication that they have thought of that so I had to start and do my own little simulations. But I think what you have to do is you have to define a local reactivity depending on where on that periodic orbit you try to perturb and then you get a local amplification envelope and these have the same relations as before so you can calculate the local reactivity from the eigenvalue of the symmetrized matrix and you can calculate it as the slope of this guy here at t0. To show you that there is actually a difference let me run you through a very simple example. This is the periodically forced logistic equation. I make the carrying capacity dependent on time. I can write down an explicit equation for the periodic orbit. 
I can linearize, and because the linearized equation is one-dimensional it has an easy solution. Because of that really simple dependence I can explicitly write down the total amplification, the local amplification envelope, everything, and you see that it depends on your initial point, on the t0 here. So the local reactivity is this, and you see, if gamma here is small then the local reactivity is positive, and if gamma is large then the local reactivity is negative. Small and large are relative to k over 2, so what I'm plotting here is the periodic orbit in solid and k over 2 in dashed, and wherever the periodic orbit is below k over 2 the system is locally reactive, and where it's above, it's not. And you see this here, this is the amplification envelope: you see, if you perturb here, it first increases away from the periodic orbit and then goes down; if you perturb here, then it goes down to the periodic orbit directly. So it really depends on where you cut up your periodic orbit to do that. So at that point it was clear to me that what you have to do is look at the period map, and you have to think, okay, does it amplify or not over one period, and that should be the right measure. So here, the stars show you what happens from one period to the next, and from the point of view of the period map it looks like it's not reactive, it's going down to zero straight away. So generally I would like to have a measure that's global for that periodic orbit in terms of reactivity, and the two obvious things that I could find are: take the maximum of the local reactivity, and that's in line with worst-case analysis, because for all of these reactivities you always take a max over something, so it would make sense to take the max, but maybe it's too much; and so that's where the period map comes in that I just mentioned. You define this period map, you get the period map from your periodic orbit, you linearize, and now we've got a linear map; the periodic orbit turns into a steady state of the map, the stability of that steady state is given by the eigenvalues of B, and it's independent of t0, of where you break it up. Now this is a discrete dynamical system, and I mentioned to you earlier that Neubert and Caswell had developed a theory of reactivity for discrete systems, so let's just apply this here: for that linear map we define the period reactivity as the norm of the perturbation after one generation divided by the initial condition, take the log for good measure, and you have a period reactivity. If you take the L2 norm, then Neubert and Caswell, or in this case Caswell and Neubert, worked it out: the period reactivity is actually this guy, so it's basically the largest singular value of this matrix B, but it turns out that depends on t0. So if you go along the periodic orbit and take the Poincaré map starting at a different time, you get a nice transition between the two, but only if that transition is orthogonal does it drop out here; in general it doesn't. So let's look at this equation, the forced logistic equation that I had before. We can explicitly write down what the Poincaré map is, and you see it depends on t0 here, but the period reactivity in this particular case does not; you can explicitly calculate that it does not, it goes down by this factor here every period. 
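A small numerical version of the forced-logistic calculation just described; the carrying-capacity forcing K(t) and the parameter values are illustrative assumptions of mine. It computes the periodic orbit u*(t), the local reactivity at each starting phase t0 (for a scalar equation this is just the linearization coefficient r (1 - 2 u*(t0)/K(t0))), and the one-period reactivity, which comes out negative regardless of t0:

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical periodically forced logistic equation u' = r u (1 - u / K(t)).
r, T = 1.0, 2.0
K = lambda t: 2.0 + 1.5 * np.sin(2.0 * np.pi * t / T)
f = lambda t, u: r * u * (1.0 - u / K(t))

# Relax onto the attracting T-periodic orbit, then sample it over one period.
u0 = solve_ivp(f, (0.0, 50.0 * T), [1.0], rtol=1e-9, atol=1e-12).y[0, -1]
t0s = np.linspace(0.0, T, 201)
orbit = solve_ivp(f, (0.0, T), [u0], t_eval=t0s, rtol=1e-9, atol=1e-12).y[0]

# Local reactivity at phase t0: a(t0) = r (1 - 2 u*(t0)/K(t0)),
# positive exactly where the orbit lies below K(t0)/2.
a = r * (1.0 - 2.0 * orbit / K(t0s))

# Period reactivity: log of the one-period multiplier of the linearization v' = a(t) v,
# i.e. the integral of a(t) over one period; for the logistic this is always negative,
# so the orbit is never period reactive even if it is locally reactive somewhere.
period_reactivity = np.sum((a[1:] + a[:-1]) * np.diff(t0s)) / 2.0

print("local reactivity ranges from", a.min(), "to", a.max())
print("period reactivity:", period_reactivity)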
And so the first observation that we have here: if you have a periodic orbit of a scalar periodically forced equation, then this cannot be period reactive, for the same reason why the steady state of a scalar ODE cannot be reactive; there are just not enough dimensions around, because the Poincaré map is a one-dimensional thing, and if that has an eigenvalue inside the unit circle, so that it's stable, then that's the only eigenvalue it has, and so it has to go down every period. That then begs the question: if you have a two-dimensional forced system, can you get something that's period reactive? And the answer is you can. So here's the Lotka-Volterra model with this very simple forcing, only at one place, and everything else as simple as I could think of, and I use a kind of sinusoidal forcing. So the black curves here are the periodic orbit for the prey and for the predator, and the red and the blue are the trajectories of perturbations made at different times. So here, at time 95, I chose one that happens to be reactive; you can see this gets very far away from the periodic orbit. And at 97 I chose a different one that happens to be non-reactive, which with every iteration gets closer to the periodic orbit. So is there a relation between local reactivity and period reactivity, when you get that or not? So what I did is I just varied this parameter m here, and I calculated local and period reactivity along the entire periodic orbit every time, and this is kind of my favorite slide of the whole talk. So there are four cases, for increasing m. Each case shows you the periodic orbit over one period, prey and predator, and then over that same period it shows you local reactivity and period reactivity. So in the first case, local reactivity and period reactivity are both positive all the time, so it doesn't matter where you perturb: at any time in the periodic orbit you run the risk of going further away, immediately and after one period, no matter where you do it. If you increase this m, then you get a point where local reactivity is positive everywhere but period reactivity changes sign. So over a period you might go down no matter what you do, but locally you can go up immediately. This one I find the most interesting: local reactivity is non-negative all the time, period reactivity is non-positive all the time. And here both of them change sign. So I haven't found a particular relationship between local and period reactivity. What can happen is that locally you're going away from that periodic orbit, but in such a way that if you average it out over the entire orbit you're actually getting closer. Okay, thank you. Now comes the more speculative part of the talk: can we push this through to internally generated cycles, and what's the big difference? The big difference is, if you have a periodically forced cycle, then that is unique, including the parametrization, and you can't move it, it's fixed. If you have an internally generated cycle, a periodic orbit, you can shift the parametrization along the periodic orbit. What that means is, if you have the same thing, here's my periodic orbit in red, here's a perturbation in blue, and this goes further away from the periodic orbit, this is clearly reactive. Here is another perturbation at a different time, and it's not so clear what it is, whether it's reactive or not, but it's at least not going a lot further away from the periodic orbit. 
But if I look at this first case here, yes, it is approaching the periodic orbit, but if I think of where I disturbed it, where I perturbed it, then the solution from there and the perturbed solution end up chasing each other around the periodic orbit. And the blue will never catch up with the red. So if you're thinking of distance from the periodic orbit as a set, yes, that's going to zero, but if you're thinking of distance in phase, that doesn't need to go to zero. And so then how do you define reactivity, or in particular period reactivity? Do you care about the shift in phase in your system? Maybe you do, maybe you don't, I don't know. If you do, then certain things can happen or certain things cannot happen. So here is kind of as far as I got with those thoughts. The picture I showed here: I calculated that the local reactivity is positive everywhere. Is it period reactive? Well, first, the period reactivity is never negative, that's what this picture here shows you. If you happen to perturb in the direction of the periodic orbit, we know that that direction is an eigenvector with eigenvalue one of the period map, and so my period reactivity can never get lower than that. But what if I eliminate that, just like we eliminate that direction in the stability analysis with the Poincaré map: we have an n-dimensional system with a periodic orbit, and we get an (n minus 1)-dimensional Poincaré map. Well, okay, if we do that, then my two-dimensional Rosenzweig-MacArthur model has a one-dimensional period map, and a one-dimensional map, as I said at the beginning, can't be period reactive, so this cannot be period reactive. A tritrophic food chain can be period reactive, and I wanted to show you pictures here, but I couldn't get the MATLAB to work; I wanted to do the pictures at the last minute and didn't get it to work, but I've seen it, so it can be. So, the reactivity of steady states depends on scaling, and there's no relationship to resilience, so these are really two different quantities. For forced periodic orbits I think we have to distinguish between local and period reactivity, and you need at least two species to get that. For reactivity of autonomous periodic orbits, the big question is: do you care about the direction of the orbit, do you care about phase differences or not, and in that case it then requires at least three species to interact before you can get that. There are no simple formulas, so I have no hope that this theory will ever get as much used in ecology as the Neubert and Caswell theory is for steady states, and even numerically it's not always easy to do. I thank you, and I'm going to do some very selfish announcements here: there's an upcoming book and there's an upcoming position at uOttawa, so I hope to get lots of good applications for that. Thank you very much. If you consider the normal form of the Hopf bifurcation in the presence of periodic forcing, you have an invariant circle with a pair of steady state points, one stable and one unstable, and then phase shifts move along the circle, phase shifts, and from here you see that there is an obvious relation, or difference, between changing the phase in one direction or the other direction. In one direction you do not approach the unstable fixed point, in the other direction you do approach it and you can spend a lot of time there. I wonder whether this is a generic analysis or not. I haven't tried to apply this here, yes. That's a good idea. Thank you. I don't know who was first. Somebody? Okay. 
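A sketch of how the local and period reactivities compared a little earlier, for a forced two-species model, can be computed numerically; the sinusoidally forced predator-prey system below and its parameters are my own stand-in, not the speaker's exact model, and I assume these values give an attracting T-periodic orbit. Local reactivity comes from the symmetrized Jacobian along the orbit, period reactivity from the largest singular value of the monodromy matrix of the variational equation over one period:

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical forced predator-prey model: x' = x (m(t) - x) - x y,  y' = y (x - d).
T, d = 6.0, 0.5
m = lambda t: 1.0 + 0.5 * np.sin(2.0 * np.pi * t / T)

def F(t, z):
    x, y = z
    return [x * (m(t) - x) - x * y, y * (x - d)]

def J(t, z):
    x, y = z
    return np.array([[m(t) - 2.0 * x - y, -x],
                     [y, x - d]])

# Relax onto the (assumed) attracting T-periodic orbit; 200 T is a whole number of periods.
z0 = solve_ivp(F, (0.0, 200.0 * T), [0.6, 0.4], rtol=1e-9, atol=1e-12).y[:, -1]

def point_on_orbit(t0):
    return z0 if t0 == 0.0 else solve_ivp(F, (0.0, t0), z0, rtol=1e-9, atol=1e-12).y[:, -1]

def local_reactivity(t0):
    A = J(t0, point_on_orbit(t0))
    return np.max(np.linalg.eigvalsh((A + A.T) / 2.0))

def period_reactivity(t0):
    def aug(t, w):   # orbit and 2x2 fundamental matrix of v' = J(t, z*(t)) v, integrated together
        z, V = w[:2], w[2:].reshape(2, 2)
        return np.concatenate([F(t, z), (J(t, z) @ V).ravel()])
    w0 = np.concatenate([point_on_orbit(t0), np.eye(2).ravel()])
    B = solve_ivp(aug, (t0, t0 + T), w0, rtol=1e-9, atol=1e-12).y[2:, -1].reshape(2, 2)
    return np.log(np.linalg.svd(B, compute_uv=False)[0])

for t0 in [0.0, T / 4.0, T / 2.0, 3.0 * T / 4.0]:
    print(f"t0 = {t0:4.1f}   local = {local_reactivity(t0):+.3f}   period = {period_reactivity(t0):+.3f}")

Scanning t0 over the whole period, and repeating for different forcing strengths, is all that is needed to reproduce the kind of four-panel comparison shown in the talk.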
So I have maybe just added that this S1 symmetric orbit could be viewed as a rather big equilibrium so I don't think you would get any rectivity there. It can be reduced by symmetry reduction to an equilibrium point. In a two dimensional system? Yes. Yes, absolutely. In an in a dimensional system, when you have S1 periodic orbit, I have a question about your summary and your last argument. So you try to distinguish between forced periodic orbits and autonomous periodic orbits. Yes. In my viewpoint, there is no difference between them really. So I don't think there would really be a difference because when you take forced periodic orbits, you can always rewrite the system as an autonomous system. I know. So I wonder, can you give me some insight why you think there would be a difference there? I think there is a difference because time is the thing that we actually measure and according to which we live. And I mean, yes, I know I can write this at t.e.p.t.1. That's fine. No, no, no, what do you mean? I write two differential equations that will give me a solution, which is a periodic oscillation. I know. And there is no non-autonomous element at all. And they just have a skewed product. But maybe what you mean is harmonic periodic forcing, which is very different from periodic forcing. It's a very special case of periodic forcing. For example, a three-dimensional system that has a limit cycle and I appoint this to my original equations and instead of periodic forcing, I just take one variable from my three-dimensional product system and I stick it instead of the forces. I'm enforcing one system with another system. Yes. And everything is autonomous. And that would be equivalent to periodic forcing. I don't think so because you're not measuring reactivity in time. With what you're suggesting, if you just apply reactivity to the extended system, you'd include time reactivity, which you don't have here. I think time really here has a special function because that's the one thing, differentiating with respect to time and time equals zero. And that has a different effect. The matrices here are m by n, not m plus one. So you would have some artifacts from the appended system. Yes. That's a good point. All right. Thank you. So for the openness, is there a relationship to phase response? It certainly looks like that. But that's the part where we haven't made progress yet. Chris. When you said that it's harder to get with the competition, are you still having two competitors? Because usually with competition, you get the interest when you need rock paper scissors. I was thinking of two competitors. That's true. Yes. Rock paper scissors might be interesting. It might be. Yes. It falls in the last category, though, right? I didn't say easy. That's one of the ways to do it. Yeah, that's right. That's a good point. So the formula looks like the uniform of exponents, or the finite time of the uniform of exponents, or the reactivity. Because the formula looks like you take the ratio, then you do some maximization over time scale, and then you take a log that looks a little bit like the concepts related? Definitely not. I mean, Lyapunov expanded as independent of the norm. So how could they be related? Well, I mean, it could be in a certain skating they coincide, for example, with certain norms. Well, maybe a certain norm. Yeah, in general, a normal. Yeah. It works out. Yes, I mean, if you have a stable point, you can always choose an equivalent norm in which all of these things are decaying, right? 
That's not the point. I'm not trying to make a special norm. I'm trying to measure something in the field, a distance in some sense, a variation. And I'm interested in having lots or having few predators or prey. I mean, sometimes maybe I want to have a perturbation that makes sure that I never go below a certain threshold. And yes, I can make a norm along which the solutions will always decay. But that's not necessarily the norm that I need to measure whether these predators stay above a certain threshold. I think there's a strong connection between the two. If you get positive finite time, yeah, on the respondent, you get reactivity. That's it. They almost know. Because they diverge on finite times. That's it. Sure. Yeah. Almost you could. Yeah, one. I think almost a point to one. Please, yeah, that's a great one. But it doesn't work yet. That's the key point. Yeah. But one way at least it works. Yeah. But you're never looking at that way, right? Because you're always looking at the stable equilibrium. So I mean, yeah. Yes, I mean, I have. Yeah, OK. Andrew, let's ask you a question. In terms of ecological system management, the thing that you thought was important, so it's from the decisions, it can be important, depending on your plant frame. So depending on the particular type of species of plant, sometimes you can see the perturbation actually decreases. We don't need to do anything. But if you change, so it initially increases, we need to do something. But it's exactly for the same perturbation. I'm not sure it's exactly for the same perturbation. What reactivity says that if you perturb, let's say, if you perturb at a time where the orbit is reactive, then there is a perturbation which will initially go away from the periodic orbit before it decays towards it. It doesn't mean that every perturbation first increases and then decreases. But if we don't know what our perturbations do, then I would say if we want to make sure that the system doesn't get perturbed too much, we could use this as a guide to say, only try to do the perturbations in a place where the system is not reactive and not periodically reactive. And when people have uncertainty in terms of perturbation time problem, they have to do it. Yeah. So back to the normal question. You're focusing a lot on L2 norm, which I think is fairly artificial in an ecological context. L1 norm is more natural in some sense, because it's talking about just differences in densities. So what is known in terms of, are there any formulas for L1 norm? How hard is it to compute for L1 norm? And to what extent is L2 norm misleading with respect to L1 norm in a way that you get opposite signs of the reactivity? Yeah. So I can't answer those questions yet, but what I'm thinking of doing is the following. Once you get to a point where even with the L2 norm, you don't have explicit formulas anymore, it doesn't matter which formula, which norm you use. So what I want to do is, in the case of periodic orbits, I want to do exactly what you say, compare the L2 norm with a sup normal L1 norm, and say, OK, it seems to be more natural since I have to do it numerically anyway. I might as well do it numerically, and then see. And then what I think you get is you can get reactivity or not, depending on the species that you're interested in. You could say, OK, I don't care about the predator, but I want to make sure that the prey never gets lower than it is, so that's something that you would need a different norm for. 
So but if you do just one of the species, it's not a norm anymore. No, no, no. In other words, if you just take the absolute value of the first coordinate as your, quote, pseudonorm, it's not a norm. It's not a norm. But I guess you can define it. But you could even do it. And people have defined, so for steady states, people have defined reactivity for perturbations only to certain absolute coordinates and these kind of things. So I don't know how far I want to get in that. I think we need to take the question about the app and the functions into the coffee break. I see if I had the simple question. I just want to make sure that we are questions from anybody. It hasn't had a chance yet. So just a moment. Is there are there any other questions? Hang on. A very small question. Going all the way back to your luck of alteration where you have all these interesting different reactivity patterns for different values of app. Oh, yes. I guess my question is, is M special? Just go back to the equation so I can see what. Did you just pick M to perturb or is there some intuition why M is? I just picked M and was happy that with the one I get all the possible cases that you can get. OK. Yeah, quick question. Back to the relationship between reactivities and resilience and just that. I know that was what started by stuff with. How is that shown that there is no relationship? By example? Just by example. But it's strange to me it could mean that the relationship you break it down would be just complex or resilience cases. I don't know if a study that looked at finding possible correlations between the two. But I know that Mike and Hal made this big statement in their paper saying, really, these are two different things. And even we as authors of this often get confused by it. But that's the way that's shown. Just shows me many cases. I mean, as long as there is at least one case where this goes this way and that way, you know that there is no general. Yeah, that's true. That's right. All right, we have two minutes, I think, for Sebastian. I see the same question going back to reactivity in the equilibrium case. Yeah. This is some nice simple questions. And we all think about them. But I actually never realized it's called reactivity in ecology. And I somehow had always a simple condition in mind for what you call reactivity is just that the Jacobian is not normal. And that has distinct eigenvalues. And that's it. And it somehow didn't come through in the conditions that you were displaying. But basically, reactivity is generic. Because to avoid reactivity, you would have to have a normal Jacobian, which gives you orthogonal eigenvectors. Only when you have orthogonal eigenvectors, you avoid reactivity. So it's something typical that should occur typically. And somehow this condition didn't come through, or maybe it did in your, and that surprises me. And I don't know how that would generalize to periodic orbits. I didn't have enough time to think about this. Yeah. I don't know. I don't know. I don't know. I don't know. I don't know. So the, let's see. If you, where is this? You can do it with respect to the L2 norm, right? Then you can explicitly calculate it. So this here is. Maybe that translates to what I just said. But in terms of check, yeah. OK. Figure that out over coffee. No, wait. OK. OK. Thank you.
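A short numerical aside on the norm question raised in this discussion; this uses the standard logarithmic norm, which is my own framing rather than anything stated in the talk. The initial growth rate of ||y(t)|| for y' = A y is the logarithmic norm of A in the chosen vector norm, and its sign can differ between norms:

import numpy as np

# Hypothetical stable Jacobian: a spiral that is not reactive in the L2 norm
# but is reactive in the L1 and sup norms.
A = np.array([[-1.0,  2.0],
              [-2.0, -1.0]])

def log_norm_2(A):
    # L2 logarithmic norm: largest eigenvalue of the symmetric part.
    return np.max(np.linalg.eigvalsh((A + A.T) / 2.0))

def log_norm_1(A):
    # L1 logarithmic norm: max over columns j of a_jj plus the off-diagonal column sum of |a_ij|.
    return max(A[j, j] + np.sum(np.abs(A[:, j])) - np.abs(A[j, j]) for j in range(A.shape[0]))

def log_norm_inf(A):
    # sup-norm logarithmic norm: max over rows i of a_ii plus the off-diagonal row sum of |a_ij|.
    return max(A[i, i] + np.sum(np.abs(A[i, :])) - np.abs(A[i, i]) for i in range(A.shape[0]))

print("eigenvalues       :", np.linalg.eigvals(A))   # real parts -1: stable
print("reactivity in L2  :", log_norm_2(A))          # -1.0, not reactive
print("reactivity in L1  :", log_norm_1(A))          # +1.0, reactive
print("reactivity in sup :", log_norm_inf(A))        # +1.0, reactive

Which norm is appropriate is exactly the modelling question discussed above: it depends on which kind of deviation, total density, a single species, or some weighted combination, actually needs to stay small.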
|
Human activities or natural events may perturb locally stable equilibrium communities. One can ask how long the community will take to return to its equilibrium and how "far" from the equilibrium it may get in the process. To answer those questions, we can measure the "resilience" and "reactivity" of a system. These concepts thus quantify one particular form of transient dynamics in ecological models. I will briefly review these measures and give some examples and known but still surprising insights. Then I will suggest extensions to periodically forced systems and periodic orbits in autonomous systems and examine some of their properties.
|
10.5446/57611 (DOI)
|
I'm going to talk about what is shown here, and that's also my name, but in fact I'm going to present the work of a large group, 10 people all together, and very nicely six of them are here in this room. So that was a working group supported by NIMBioS, and many thanks to NIMBioS for that; it wouldn't have happened without their support, and I have to say that one of the directors, Suzanne, is here, so extended thanks to her personally as well. So, what I'm going to talk about: a brief outline. First of all I will introduce the problem, what the transients are that I'm going to talk about, and in doing that, the best way to do it is to show several, let's say, generic examples. Then I will discuss some baseline mathematical models that can generate different types of transients, long transients. Then I will discuss their possible relation to tipping points. Then I will probably not have enough time to look at spatial systems, except for just a few comments, and then of course there will be some conclusions. So now, transients of course are very general in ecology, and we have seen that and heard about that many times already in these few days. So transients can be of different types, and intuitively, when we hear about transients, I mean, I'm thinking about something like, if this is the population size against time, we expect to see something like maybe like that, a simple exponential decay to something that is sort of stable or long-term dynamics, usually associated with the nature of the system, or maybe something more complicated like a damped oscillation, or maybe something more interesting, like something Fritz just talked about. So that's obviously transients, and they're all highly ecologically relevant, but that's not the point of my talk. Transients are usually thought of as short, although what is short and what is long is not at all clear; apparently it assumes that we have another time scale to which we compare the duration of the transients. But perhaps more important, and that's how it is usually treated, is that we expect that transients are going to finish and give way to some sort of long-term dynamics, and usually this is thought of as asymptotic dynamics, it's just associated with that. So in that sense a long transient is a kind of creature in between two worlds, but it actually happens, and, I mean, especially for mathematicians there is always a temptation to start with a formal definition. But a formal definition is going to be elusive; it's not difficult to make one, but it's not going to be instructive before we actually see what's happening in real systems or models. And now I'm going to show a few cases. So, the first case is taken from an old paper by Sebastian Schreiber, obviously, and you see that, so that's a very simple single-species, discrete, non-spatial model, right, and we probably have the traditional transient associated maybe with the initial conditions somewhere here. The system settles down on something that looks like a sustainable chaotic regime; it's chaos, and it's okay. We switch off the computer somewhere here, and it's all clear: it's chaos, right? And if this had been done 40 years before, that would be the case, right? I know several stories like that: here it's chaos, we proved it. But if you run it a bit later, a bit more, that's what's happening, right? The oscillations go down and it disappears: it's extinction. So these are the things that I talk about when I say long transients, right. 
We have our system converging or settling down to something that looks stable, indefinitely, presumably, for however long you care to run it, but then something changes, and the change is abrupt. So then, now, another example, taken from a recent paper, and again we clearly see the sort of usual transient here. So we start with initial conditions somewhere here, and then it goes down, soon enough converges to what looks like sustainable periodic oscillations; switch off the computer, 1500 is long enough, it's clearly sustainable periodic dynamics. But if we want, if we can run it all the time, if we want to do that, we will see something completely different. So in another while we have convergence, so these apparently sustainable periodic oscillations will disappear and the system converges to another type of periodic oscillations, right. And that is, I mean, there is no analytical solution for this model, but if you run it now for an order of magnitude longer it will still be like that. So that is really the real asymptotics. What we had before is what some people call intermediate asymptotics; we call it long transients, and that's the kind of phenomenon I'm going to talk about. And just another example, a modelling example: those two examples were non-spatial, and of course it can happen in space, so this would be the result from a spatially explicit three-species plankton model, and if you describe it in terms of some spatially averaged values, what you see is shown like that. So at some early time, between actually 100 and 200, the system is perturbed, and that's supposed to have something to do with climate, and then you see that, I mean, the perturbation will be somewhere close to here, right, and then the system sort of adjusts itself to new values, apparently converges to some chaotic attractor, but then, if you run it a little longer, that's what's happening. So that's, in a nutshell, what I'm going to talk about, this phenomenon of long transients: something apparently stable, or sustainable I should say, that can last for quite some long time, again, long needs to be more carefully defined, but sooner or later what happens is a relatively fast or sudden transition to something else, to the real asymptotics perhaps. Okay, so those are models, but then of course there are also quite a few examples of similar dynamics observed in natural systems, and two of them are shown here. So on the left we have data from a laboratory system, which means strictly controlled conditions, so no changes. And one thing I forgot to mention in the previous examples: we obviously have a regime shift here in all these three examples, right, but it's not a tipping point, there is no tipping point, the parameters are not functions of time, nothing changes with time, right. You will see that there is a sort of relation to tipping points, but it's not the same scenario, it's different. 
So then, what we have here is data, so that's where we can be more or less sure that, again, there's no tipping point as such, because it's a laboratory: the conditions are fixed and controlled. And on the right we have data about fish population size; it's field data, and then of course it's more difficult to be sure whether the system is autonomous or not autonomous, but at least in this paper it was stated that there were no significant changes or trends in the environment, so presumably again it's a kind of autonomous system. But then, so that's what happens at first, there and here: you see that some initial condition eventually settles down to something that looks like a steady state, but then, if you keep observing the system dynamics, what happens is this: a sudden change of what seemed to be, in this case, a stable steady state; actually the system converges to this, right, and then it suddenly changes to something else. So that's the sort of collection, my chosen collection of examples, that is supposed to give a description of the phenomenon, of what's happening, of what we call a long transient. And, so, that's it, I mean, it's interesting by itself; what makes it practically important is that obviously all these examples relate to regime shifts. Regime shifts are very important of course in the ecological context, because they have a lot to do with predictability, with crisis anticipation, crisis prevention and all these things, right. So once again I want to emphasize the difference: this is not a tipping point, not a straightforward tipping point scenario, because there's no bifurcation as such, the parameters are constant, the system just runs by itself, it's autonomous. Then of course the question arises how that can be possible, because the tipping point scenario of regime shift is of course very well understood, and we have heard a couple of very nice talks about that, and again, some of the people who contributed a lot to that are here, but this is different, right, so how can that be possible? So now I proceed to an overview of some simple, or sometimes not so simple, mathematical models that show this sort of behaviour and that will allow us to introduce some sort of, let's say, classification or something like that, okay. So then, a simple and quite intuitive case is transients introduced by a saddle point. A saddle point is a steady state of course, right; it's not stable, we have an attracting manifold, we have a repelling manifold, but if you start somewhere close to the saddle, the system will remain close for a very long time, okay. And if you start somewhere close to the attracting manifold, so presumably somewhere here, that would be a realistic range of initial conditions, then the flow first brings the system here, then it will hang around for quite a while, depending on where you start; if you start very close to the manifold, it will be brought very close to the saddle, and then the system will hang around for a while before it leaves. 
And that's of course very easy to treat mathematically, not only in the two-dimensional case but in the general case: when you linearize, the solution is written as a combination of exponentials, and then the leaving rate is given by the largest eigenvalue, okay, so the real part is not actually necessary here, because we talk about the saddle, so it's actually just the inverse of the largest eigenvalue, and that will give you an estimate, right, an estimate of the duration, or the lifetime, of the transient, of this quasi-steady-state dynamics, so for how long the system can be hanging around approximately constant. So that's obviously the case, but of course it can't possibly be very general, because there are limitations, right, and the problem, the main limitation in this case, is that we are very restricted in terms of the choice of initial conditions, right: we have to start somewhere very close either to the steady state or at least to the attracting manifold; if I start somewhere further away, the system will never approach the saddle point closely. Now, this is essentially the almost linear case, so only slightly nonlinear; in the exactly linear case these would be straight lines, and now they are not exactly straight. But if we consider something more interesting, with some more interesting nonlinearity, what we can have is a separatrix approaching closely to the saddle, and then, from a whole range of initial conditions, the system will be channeled very close to the saddle point, right, and then it will stay there for a while, so the leaving rate is going to be estimated as before, by the largest eigenvalue, but then of course it's much less restrictive. And how can it happen in practice, or in a more specific model, what does it mean, what kind of separatrix can it be, this curve S? The simplest example is of course the Rosenzweig-MacArthur prey-predator model: we have a limit cycle for intermediate values of K; it's born in a Hopf bifurcation, and it grows in size. So for intermediate values of K, when it's not too large, it's away from the steady states at zero and at K; if you increase K, it grows in size, and it approaches very close to this saddle point, and from there it is taken over toward the other saddle point here, right. And that's exactly the type of situation that I showed previously: so S is now the limit cycle, and the limit cycle is a separatrix of course, it divides the flow into two parts, so then the trajectories, let's say here, actually from the whole phase plane, regardless of the initial condition, they are channeled into this narrow channel, eventually approaching the steady state, and staying there for a long time. 
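A minimal check of the estimate just stated, that the duration of the crawl-by is set by the unstable eigenvalue of the saddle; the linear system and the numbers below are illustrative assumptions. For y' = diag(-mu, lambda) y started a distance delta from the stable manifold, the exit time from a neighbourhood of size R is (1/lambda) log(R/delta), so it grows without bound as delta shrinks:

import numpy as np
from scipy.integrate import solve_ivp

lam = 0.5      # unstable eigenvalue of the saddle (illustrative)
mu = 1.0       # decay rate along the stable direction (illustrative)
R = 1.0        # size of the neighbourhood of the saddle

def exit_time(delta):
    # Start close to the stable manifold: (x, y) = (R, delta).
    rhs = lambda t, z: [-mu * z[0], lam * z[1]]
    leave = lambda t, z: abs(z[1]) - R            # event: unstable component reaches R
    leave.terminal, leave.direction = True, 1
    sol = solve_ivp(rhs, (0.0, 1.0e4), [R, delta], events=leave, rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0]

for delta in [1e-2, 1e-4, 1e-6, 1e-8]:
    print(f"delta = {delta:.0e}   exit time = {exit_time(delta):7.2f}"
          f"   (1/lam) log(R/delta) = {np.log(R / delta) / lam:7.2f}")

In the nonlinear predator-prey picture the flow does the channelling, but the same logarithmic estimate governs how long the trajectory lingers once it has been delivered close to the saddle.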
So then, if you think about that in terms of the time series, what we have is something like that, and this is a sketch by the way, so it's not exactly a simulation; if you actually do simulations, this quasi-steady-state dynamics here is more pronounced than this, but it would be less clear to show, so this is the idea, right. So we have something that we can interpret as quasi-steady-state dynamics, and then of course I'm sure many people will have the objection: come on, it's not a transient, it's just a limit cycle after all, right. But that involves the very important issue of the time scale of observation: as you can easily imagine, in many natural systems we will never get to here, right, with the scale of observation; we may start here, then the drop down, the density becoming very low, and that's all, and unless we understand that this is not a steady state but a quasi-steady state, we will never see that. But that's the trick, right: it's not a steady state, even though the system shows this quasi-steady-state dynamics, okay. So one new element here is that in this case we don't just have a single saddle, we can have two connected saddles, and we will see more of this in a moment. And some disadvantage of this simple example, to some extent a disadvantage, is that the saddle point coincides with the origin, which means that the transient actually means we have a very low population density, which is a bit dodgy in the sense that you can wonder, okay, whether it's actually extinction or not, because it's a deterministic system, of course, but if you had noise or whatever. This is not a necessary property of this long transient, right, because with a minor modification of the same prey-predator system you can have another saddle somewhere; this is just one example taken from a recent paper, and that's essentially the same dynamics: we have a saddle point here, we have the limit cycle, we can have transients, but now we're separated from zero. So that's one comment, and the other comment is that we can have more connected saddles, especially in a high-dimensional system, and that was in fact, in a slightly different context, thought about as one kind of conceptual model of climate change; I'm not talking about that, but in the context of population dynamics the simplest example where we have connected saddles is the three-species competition system with cyclic competition, right, the May-Leonard model, and then you have connected saddles, and then you have alternating long transients which actually get longer each time, okay. I'm not going to detail all that, it's an easy check, because we have something else interesting to talk about. So transients introduced by a saddle, that's fine, but there can be something else, there can be other scenarios or other mechanisms, and the second one would be the ghost attractor, what is called a ghost attractor. So, what happens is that if we consider, let's say, a two-species system, or a two-species community, then it's quite generic to have the two isoclines like that, right, with different direction of concavity, and then what you might have, and this particular figure is taken from this paper, it's a nonlinear two-species competition model, but you can think about other examples, there are also models of climate dynamics, we can have dynamics like that, it's quite generic. So what we have here: we have a stable node here, we have a saddle point here, and then of course the direction 
of the flow is like that; there's nothing interesting here so far, but it all depends on the parameters, of course, and then what might happen if you change a parameter is that the isoclines start moving away from each other: we have a saddle-node bifurcation, they merge, but, I mean, this is a local bifurcation, which means that the global structure of the flow field remains the same, so we have the flow that brings the system from almost all of the plane, except for the sector here, to this point; it's still a steady state. But if we now push the isoclines slightly further apart, we now have sort of empty space here, in the sense that there is no steady state, but the flow is the same, which means that the system is brought into this channel, where the dynamics is very slow, so the system is sort of hanging around in nowhere, and that's where we have the long transient, and this sort of dynamics is often called a ghost attractor; that's what's happening here. And a sketch, again it's a sketch, just for convenience really, you can easily get that in real simulations: what's happening is, this blue line here would be the true case, in the sense that pre-bifurcation we still have a steady state, so the system converges to it and remains there; if we are slightly past the bifurcation, then what we have is the system staying almost at a steady value for a long time before dropping down to the actual attractor. And now, what do we call a long time, and what do we call long as opposed to short? So now we are in a position to make a sort of definition: we call this type of dynamics, quasi-steady or quasi-sustainable dynamics, a long transient if, by playing with the parameters, we can make its duration infinitely long. And this is a kind of mathematical definition, so we can use it and we can extend it to other systems. So this is all pretty simple in terms of the dynamics, it's just sort of steady-state-type dynamics, so then, well, of course, what makes it more interesting is that it can be much more complicated, it can be periodic or it can be even chaotic, and this is another example taken from an old paper, based on Alan's old model and investigated by Kevin some years and years ago. So we have this three-species system, and number three is magic here because we must have at least three dimensions to get chaos in a time-continuous system, so we have, for some parameter range, coexistence of two attractors, an actual chaotic attractor and the limit cycle, so there are two basins, they are separated. But if we change parameters, there is a bifurcation where they merge, and the chaotic attractor, strictly speaking, is not attracting anymore; there is a channel, a certain narrow channel is created, so that the system, after spending a while on the attractor, finally leaves it and eventually approaches the stable limit cycle. So this is, I mean, a scenario obviously similar to what we just had in the simple case: the chaotic attractor actually disappears, but the system behaves for a long time as if it were still chaotic, so that's the ghost of a chaotic attractor, a chaotic ghost, and, in a slightly different aspect, by the similarity, sometimes exactly the same dynamics is also called a chaotic saddle, because we actually have a direction in the phase space, or certain trajectories in the phase space, through which the system can leave, right; that's the obvious similarity with the standard saddle point, okay. So there are estimates of the lifetime of 
So this is all pretty simple in terms of the dynamics; it is just steady-state-type dynamics, so to say. But then of course what makes it more interesting is that it can be much more complicated: it can be periodic, or it can even be chaotic, and this is another example, taken from an old paper, based on Alan's old model and investigated by Kevin some years and years ago. So we have this three-species system, and three is magic here because we must have at least three dimensions to get chaos in a time-continuous system, and we have, for some parameter range, coexistence of two attractors, the chaotic attractor and the limit cycle; there are two basins, they are separated, but if we change the parameters there is a bifurcation where they merge, and the chaotic attractor, strictly speaking, is not attracting anymore: a certain narrow channel is created, so that the system, after spending a while on the attractor, finally leaves it and eventually approaches the stable limit cycle. So the scenario is obviously similar to what we just had in the simple case: the chaotic attractor actually disappears, but the system behaves for a long time as if it were still chaotic, so that's a ghost of a chaotic attractor, a chaotic ghost. And, in a slightly different aspect, because of the similarity, sometimes exactly the same dynamics is also called a chaotic saddle, because there are actually directions in the phase space, certain trajectories in the phase space, through which the system can leave, so there is an obvious similarity with the standard saddle point. Okay, and there are estimates of the lifetime of the transient, and again it can be made arbitrarily long: if we approach this bifurcation value, the lifetime eventually goes to infinity. So that's a classical example of a chaotic ghost, I would say. And then, coming back to one of the examples I started with, this is the chaotic ghost we have here in the plant model, just as an example: the system settles down onto this apparently chaotic attractor, but it's actually not an attractor, so the system will eventually leave it, after spending a while, through this narrow channel, and that's what's happening; so that's another example, also to show that this is not a specific situation but quite general, in many systems, in a certain parameter range of course. Okay, so then something else: slow-fast systems, which have already been mentioned quite a few times over these last few days, so only very briefly. We have a small parameter, which means we have two different timescales, a fast timescale and a slow timescale; we can make a little tweak, like introducing a rescaled time for this term, and then the system effectively splits into two parts, so you will see that one part of the dynamics happens along the isocline (this is of course the isocline, the system moves along the isocline, so this is the slow dynamics), and then the fast part of the dynamics is when the slow variable can be approximated as constant while the fast variable is actually changing quite fast. So then a standard example that comes to mind, and we have seen it a couple of times, is the predator-prey cycle, actually the same Rosenzweig-MacArthur-type model, but now with the small coefficient; the limit cycle is now deformed: we have this fast part, where the system shoots up almost vertically to the slow isocline, then evolves very slowly here, and then again a fast drop-down, and then it evolves slowly here. So this is another type.
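A minimal sketch of the slow-fast mechanism with invented parameter values: a Rosenzweig-MacArthur-type predator-prey system in which the prey is the fast variable. This is only the generic setup, not necessarily the model on the slide; the limit cycle becomes a relaxation oscillation with long, slow stretches along the prey isocline separated by fast jumps.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05          # timescale separation: prey dynamics is 1/eps times faster
h, m = 0.4, 0.35    # half-saturation and predator mortality (illustrative)

def rhs(t, z):
    x, y = z        # x: prey (fast), y: predator (slow)
    fx = x * (1.0 - x) - x * y / (h + x)
    fy = y * (x / (h + x) - m)
    return [fx / eps, fy]

sol = solve_ivp(rhs, (0.0, 200.0), [0.8, 0.2], method="Radau",
                rtol=1e-8, atol=1e-10, dense_output=True)
t = np.linspace(0.0, 200.0, 4000)
x, y = sol.sol(t)
# The orbit alternates between fast jumps in x and long slow stretches:
print("prey range:", round(x.min(), 3), "to", round(x.max(), 3))
print("predator range:", round(y.min(), 3), "to", round(y.max(), 3))
```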
So that's well known, but it is one of the mechanisms that generates long transients, and one thing I want to mention here is that the standard way of thinking is somehow to expect this sort of periodical dynamics with these long transients, but of course there is no reason why it should necessarily be periodical, and I'm just going to show another example. This is a competition model, where the second species is much slower, and that's what's happening: we have invasion of species two into species one, say somewhere here. When the timescales are of the same order we have just the normal dynamics: in this case species two is the stronger competitor, it will push out species one, so the system converges to the species-two-only state. But if we have the small parameter, then what happens is completely different: species two, the invader, actually drops down first to very small values, and it looks like extinction, yeah, we are safe, so if we switch off the observations here we think we are safe; but then what happens is that it shoots up again and eventually approaches the species-two-only state, pushing out the resident species completely. Okay, so now perhaps the most important part: what is the relation, if any, between regime shifts caused by tipping points and long transients? There is a relation; it's not straightforward, it's a bit tricky. The overlap would be here: when the change in the parameter pushes the system over the cliff, but very slowly, it pushes it over the cliff, and because the change in the parameter is so slow, the system stays there for a while; yeah, that's the ghost attractor. It stays there, and then, past the bifurcation point, past the tipping point, we have this very slow dynamics; that's what's happening, that's what I call a ghost attractor in this context. If the change is not too slow, we don't have that: we will have a regime shift, but no long transient.
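The point about the speed of the parameter change can be sketched with the same saddle-node normal form (again a stand-in, not an ecological model): the parameter a(t) is ramped slowly through the fold, and the slower the ramp, the longer the state lingers near the former equilibrium after the tipping point has been passed.

```python
import numpy as np
from scipy.integrate import solve_ivp

def escape_lag(ramp_rate, a0=-0.5):
    """Lag between a(t) crossing the fold at a = 0 and the state escaping past x = 1."""
    event = lambda t, x: x[0] - 1.0
    event.terminal, event.direction = True, 1
    sol = solve_ivp(lambda t, x: [a0 + ramp_rate * t + x[0] ** 2], (0.0, 1e7),
                    [-np.sqrt(-a0)],          # start on the stable branch
                    events=event, rtol=1e-9, atol=1e-12)
    return sol.t_events[0][0] - (-a0 / ramp_rate)

for r in [1e-2, 1e-3, 1e-4]:
    print(f"ramp rate {r:.0e}: the state lingers ~{escape_lag(r):6.1f} time units "
          "past the tipping point before the regime shift shows up")
```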
But on the other hand, there are of course a lot of other possibilities for long transients: saddles and ghost attractors we have discussed; time delay is a less studied issue, but we had an example at the beginning, so long transients can be generated by delay, and they can be generated by noise or modified by noise. So a pre-conclusion, let's say, would be that long transients provide an alternative scenario of regime shift, which is basically different from the scenario of the tipping point. There are a certain number of... well, I'm about to finish actually; I will have to skip spatial systems, but ask me in the questions if people want to know more. So, as I mentioned, in higher dimensions, in a more complex system, we will have time delay, and it can generate long transients, with its own scaling in this case; noise can modify transients, can modify the lifetime of transients, but it can also introduce transients: in fact, if we have a bistable system, small but maybe not too small noise will be pushing the system between the different basins, and again it's deceptive, because people see, okay, it's just bistable dynamics, but again it's a matter of the observation timescale: if we're somewhere here, we may never see this kind of jumping to the other state. So that's what might happen, and I have to skip spatial systems; ask me any question, you will have 15 minutes for that, okay, and it's a beautiful figure. So let me just come to the conclusions. First of all, perhaps most important is that long transients in this sense, when, after a fast decay of the initial condition, the dynamics changes to something that looks like asymptotic dynamics but is not, do happen. Then, we managed to identify a few mechanisms like that, and we're now in a position to say what "long" means: by playing with parameter values, by approaching a certain critical point, we can make the duration as long as we want, ultimately infinite. And then, from a slightly more practical aspect, long transients actually provide an alternative scenario of regime shift; I think I convinced you of that. So there are a couple of publications: a Science paper published last year, unfortunately relatively short, as always with Science papers, but a much longer paper with more detail is under revision, so hopefully it will be out later this year. So, thanks very much. Okay. Yes, and would you kindly tell us, for maybe one minute, about the spatial case? When we introduce space, so, intuitively, what makes a spatial system different from this case, what are the principally new phenomena: complex pattern formation, travelling waves, and somewhere in the middle maybe synchronization, synchronization of chaos; then of course it depends on how much information we have about the spatial system, how we want to describe it. This is one old example from Alan's old paper, a single-species model, space continuous, time discrete, described by this equation, and if you describe it by the spatial average, what you really have is something like that: that's the time series, and that's a kind of chaotic-saddle dynamics, it shows this sort of intermittent behaviour, a two-point cycle here, then chaos here, then again switching to something like a multi-point cycle or something. So that's a very nice extension, but effectively, because we don't know any details about the spatial distribution, it's effectively kind of non-spatial, even though of course space is important to generate this dynamics, right?
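The procedure of collapsing a spatio-temporal simulation to a spatial average can be reproduced with a generic single-species coupled map lattice; this is not the model from the paper being discussed, the growth function, coupling strength and initial condition are all invented, and whether the averaged series shows the same intermittent switching depends on the parameters chosen.

```python
import numpy as np

rng = np.random.default_rng(1)
L, T = 200, 500            # number of sites and of time steps
r, D = 2.7, 0.05           # Ricker growth rate (locally chaotic) and coupling
x = 1.0 + 0.1 * rng.standard_normal(L)

mean_density = np.empty(T)
for n in range(T):
    grown = x * np.exp(r * (1.0 - x))                       # local Ricker map
    x = grown + D * (np.roll(grown, 1) - 2 * grown + np.roll(grown, -1))
    mean_density[n] = x.mean()                              # the spatial average

print("last ten spatially averaged densities:", np.round(mean_density[-10:], 3))
```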
Unfortunately, that paper didn't say anything about the spatial distribution, so I cannot say anything about that either; I guess it was some sort of spatiotemporal chaos, as we would call it in the later literature. But now, if we do have information about the spatial distribution, then of course it's a much richer system, because of what can happen, and this is a figure from one of my own old papers: for instance, we may have coexistence of two types of dynamics, and here what we have is coexistence of just periodical oscillations in time (so if you fix any position in space it will just be periodical dynamics) with spatiotemporal chaos in the middle, and what's happening is that it's not stable: the chaotic domain will grow, but it will grow very slowly. That's exactly the case where you can play with the parameters so that the speed of the spread, of the propagation, can be made arbitrarily small, which means that the coexistence of these two phases can take as long a time as you want. So the scaling will be something like that; now we have an explicit impact of space, the length of the system enters as a new parameter. This is a relatively simple case, but it is generic: the lifetime of transients tends to increase with the size of the system, so this is pretty generic in that sense, although more generally it can be some power law here; and here we have the speed of propagation of this propagating interface, and it's actually proportional to the largest eigenvalue, so we have a similar scaling to what we have in non-spatial systems. And arguably we can call this a long transient, but in space, with sufficient knowledge about the spatial distribution; it also involves the issue of a sufficient amount of data. This is of course a model, but if we had something like that in the real world, would we be able to pick it up? Only if we had a sufficient number of sensors, or observation points. Now imagine we have just one observation point here, which is not unrealistic, that's often what we have to work with, so the observer here would see, for a long time, periodical dynamics, before suddenly this interface comes here and the periodical dynamics suddenly changes to chaotic dynamics. And in fact, if you think about it in terms of this sort of non-spatial observable, we have something similar to the earlier case: periodical dynamics at a given location in space suddenly changing to chaos; there will be no switching back in this case, but in that way it's similar. So then, if we think about travelling waves more generally, it also depends on the initial condition; if we think about it in terms of invasion, the invasion starts somewhere here, it starts propagating, and then of course it creates the invasion front, so that would be the resident prey, and that would be the invading predator, and then what you might have is a range of curious phenomena, in fact. One of them is here; we call it dynamical stabilization. Stabilization means that this looks like a steady state, and it is a steady state of the non-spatial system, but in the non-spatial system it is unstable, so what we apparently have here is convergence to an unstable steady state, which is funny; and the length of this sort of plateau here can be made very long (in fact I'm not quite sure whether it can be made infinitely long, but quite long), and of course it depends on the length of the system. And then again the same story: if the observer is somewhere here, they would first see damped
oscillations, so a successful invasion of the invader: the system stabilizes at a steady state, but that only lasts for a long while before it eventually moves on to the oscillatory dynamics. And this sort of dynamics can be seen in different systems; this is just the two-dimensional case, and that's a very beautiful figure, so you have this unstable plateau here, now in two dimensions. There is still more to say about spatial systems, but that's the missing part; thank you, Cindy, for asking. Thanks, Cindy, for the question; we have two more minutes for questions. Yeah, actually I have a question: for example, if you consider all these examples here in terms of phase transitions, you can view these long transients as metastable states; the time of this metastable phase may be a pretty long one or it may be a short one, it doesn't matter, but it looks like a metastable state of the system, because thermodynamically it's unstable, maybe, but kinetically it's a stable phase of the system. The question is, can we apply phase-transition theory to understand this phenomenon, rather than dynamical systems theory? I think we probably can. In a way that's exactly what we're doing here, but with different terminology, because these scaling laws are very much like the scaling laws in phase transitions; the universality there is described by scaling laws, and the scaling laws we have here have more or less the same meaning, so even though it's maybe not a one-to-one correspondence, I think going from one approach to the other should be possible. I think you just don't need to introduce a new definition like long transients, you can talk about metastable states, it's very well known. I'm not sure; I think it's similar to this issue of the relation to tipping points: there is overlap, but I'm not sure it's a one-to-one correspondence, as I said. Phase transitions are quite a special type of dynamics, in that sense, quite a special type of dynamics of a physical system, and dynamical systems are obviously richer than physical systems, because with dynamical systems you need not have physical limitations; no, I'm not talking about conservation of energy or things like that, but we can have open systems, we can have... I think it's just richer, so it's not a one-to-one correspondence. I think the idea is sensible, but I don't think it's exactly the same thing. I just wanted to add to this discussion, in the sense that I think there is a deeper sort of relation between the early-warning-sign business and the sort of transient business. A few years ago we had actually a joint paper with Tilo, I just looked up the title, it's "Early warning signs for saddle-escape transitions in complex networks", and there we sort of argued that even for saddle escape you could use the philosophy that you use to predict, or at least try to find, warning signs for bifurcation-induced transitions, in the sense that you use the non-uniformity in phase space. So even near a saddle there is sort of a non-uniform movement, and you could use that non-uniform movement to predict how long you are staying near the saddle. So there's maybe this principle that I would add to that diagram you had there, that the non-uniformity in phase space would be something really important for metastability, transience, etc. So would you sort of agree with that or disagree? I would disagree with that.
Sorry, that was more of a comment, thank you. And I mean, we don't claim that this classification is exhaustive; it's supposed to be the basis for future work, for the discussion, and it's probably not quite the same as what you said, probably different, but one type of heterogeneity of the phase space, I think, is given when the isoclines come close to each other. The heterogeneity you meant is probably something else, but I certainly appreciate what you said and it's highly relevant, thank you. Okay, there are a number of additional questions, and my love of long transients is fighting with my love of coffee. In an attempt to keep us on schedule I think I'll cut off the big group discussion now, so that those who need to use the facilities or get a snack or a drink can do so, but I'm sure that you would be happy to continue this discussion.
|
We will discuss the recent progress in understanding the properties of transient dynamics in complex ecological systems. Predicting long-term trends as well as sudden changes and regime shifts in ecosystem dynamics is a major issue for ecology, as such changes often result in population collapse and extinctions. Analysis of population dynamics has traditionally been focused on their long-term, asymptotic behavior whilst largely disregarding the effect of transients. However, there is a growing understanding that in ecosystems the asymptotic behavior is rarely seen. A big new challenge for theoretical and empirical ecology is to understand the implications of long transients. It is believed that the identification of the corresponding mechanisms should substantially improve the quality of long-term forecasting and crisis anticipation. Although transient dynamics have received considerable attention in the physical literature, research into ecological transients is in its infancy and systematic studies are lacking. This work aims to partially bridge this gap and facilitate further progress in the quantitative analysis of long transients in ecology. By revisiting and examining a variety of mathematical models used in ecological applications, as well as some empirical facts, we reveal the main mechanisms leading to the emergence of long transients both in spatial and nonspatial systems.
|
10.5446/57612 (DOI)
|
So this is joint work with John Ellis and Wenxin Zhang, one of my PhD students, at the University of Birmingham, UK. And the outline is that I would like to consider the monitoring and control protocol as a data filter, and then I will try to deal with the question of whether or not this data filter can cope with data that have measurement errors; so, some examples where the filter works and some examples where the filter does not work. Okay, so spatial distributions are everywhere. Here I just have two examples taken from our recent papers, so that I have some pictures to show you: we have here the grey field slug on the left and the spatial distribution of the New Zealand flatworm on the right. And if I am asked to summarize our experience with these and other spatial distributions, perhaps we can come up with a really simple diagram where we start with data collection, or in many cases generation of our data from some reliable model; then we have data processing, and then we go on with the analysis of the spatial distribution as a result. We have some feedback, and we may wish to ask for a more specific data collection. I'm just going to go to full screen through your slides. Okay, thank you. And back to my simple example of the grey field slug. The data collection in this case is done by a trapping procedure, where traps are installed somewhere in the farm field in some regular arrangement, and then we have trap counts, so how many slugs we have in each trap, and then we can convert those trap counts into a simple heat map. And if we know nothing about the grey field slug, then perhaps we can say, okay, look, slugs form patches; this perhaps is the most prominent feature of the spatial distribution. We can then ask for more detailed data collection to answer questions like, okay, what is the average patch size, for example. Obviously, it's not always that simple. Here is another example, monitoring of the gypsy moth in the USA. The gypsy moth is an invasive species in the USA, a really severe forest defoliator, and therefore one wants to have a really careful monitoring and control routine. In this particular protocol about 80,000 traps have been installed across the entire USA territory, so obviously our data processing step must allow for data fragmentation, if we want to have a more careful look at the data in a particular geographic region, like the one shown here. And then again we do some data processing and we can then make some suggestions about control measures, maybe saying things like, okay, we are interested in the subdomains where the population density is high, and we can ignore the subdomains where the population density is not that high. In reality, things are not that simple; control measures are more sophisticated, and they are nicely explained in the report. But what is important to us is that when we start talking about control, we actually can add another step into our diagram. Saying things like, okay, this data is important, this data is not so important, means that we introduce some data filtering when we deal with our monitoring and control routine. So if I go back to the grey field slug, just to demonstrate how this idea of data filtering works, let us think about targeted use of pesticides. The grey field slug is one of the most important pests in the UK because the creature eats arable crops: wheat, potato. Control measures include application of pesticides, and the pesticides, slug pellets, are applied uniformly,
so the entire farm field is covered in slug pellets. The government is not happy with this strategy because of severe water pollution, so they want to restrict the use of pesticides. Let us come up with a simple idea for how to reduce the amount of pesticide. Let us say something like, okay, we are going to apply pesticide just in those subdomains where we detect the presence of slugs. From the data-filtering viewpoint, that means we convert our original dataset into a simple presence-absence map. This is my original spatial distribution; this is what I have as a result of my data filtering. This approach fails, because farmers are not going to apply pesticide over this Suprematist shape: they are going to cover the entire field in pesticide again. But what is important is that my filtered dataset is different from what I originally had. Now if I say, okay, let me come up with something smarter, I am going to apply pesticide just over patches with a high slug density, and I would like to compare my strategy with the so-called economic threshold. This is the most sophisticated approach: it involves identification of slug patches, then we analyze the population density within each patch, and then we again end up with a filtered dataset which looks different, if we compare it to the original spatial distribution, and the pesticide will now be applied just over these subdomains. Okay, so let us keep that in mind, and let us now look at data where we have measurement errors. Perhaps the simplest example possible: it's again about the grey field slug, again about data collection in Shropshire in December, and I can tell you that Shropshire farm fields in December are not the best place for data collection; it's all about rain and wind and a lot of mud. So no surprise that one trap count was just completely missing from the dataset, and as a result we have corrupted data; it's classified as a random measurement error, according to the Guide. The case that is more interesting to us is this: let us take our dataset and reconstruct the spatial distribution. Over here we use trap counts from 100 traps, and all data are accurate. Now we take the same dataset, but we reconstruct the spatial distribution using just 50 traps. As a result, we have something different, despite the data still being accurate, because we now have a sampling grid that is coarser than required to resolve all the features of our spatial distribution. And it's obvious that when I have to deal with cases like this, if I still want to stick to my diagram, I have to add another block here, which includes uncertainty analysis and, hopefully, error elimination, as I now deal with corrupted data. But if I go back to my monitoring and control routine, then I actually notice that while my original data are affected by measurement errors, my filtered data may or may not be affected. If, for example, I want to analyze this distribution and this one from a monitoring and control viewpoint, then I say, okay, if the slug density is low over the whole field, then actually it does not matter: the location of patches here and here does not matter, because the pesticide will not be applied at all. So my filtered dataset is then really very simple, I just have a blank spatial distribution: I'm not going to apply pesticide in these cases.
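The data-filter idea can be made concrete in a few lines; the trap counts below are made-up numbers, and the two filters shown, a presence-absence map and an economic-threshold map, are simplified stand-ins for the strategies described above.

```python
import numpy as np

counts = np.array([[0, 0, 1, 4, 6],
                   [0, 2, 5, 9, 3],
                   [1, 0, 0, 2, 0],
                   [7, 8, 2, 0, 0],
                   [5, 3, 0, 0, 1]])     # slugs per trap (made-up numbers)

presence = counts > 0                     # filter 1: presence-absence map
economic_threshold = 4                    # filter 2: treat only high-count traps
treat = counts >= economic_threshold

print("fraction of the field with slugs present:", presence.mean())
print("fraction of the field treated under the threshold filter:", treat.mean())
```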
This is kind of extreme, of course, but it gives me some hope that if I use a filter in my monitoring and control routine, then in some cases the filter can replace the uncertainty analysis and can be cheaper, faster, okay, maybe more convenient. So my diagram now looks like this: I have corrupted data, but I skip the uncertainty analysis step, because I hope that data filtering will help me with it. And then the crucial question, of course, is: what is the data range where I can apply my filter without uncertainty analysis? Because I'm talking about strongly and slightly corrupted datasets, but what is "strongly", what is "slightly", we have to define. So what we wanted to do next was to design a computational toolbox and to analyze data that come from models, because models are better in the sense that we can generate as much data as we need and want, and then we just simulate the entire monitoring and control routine. I'm not going to report all the results, I just want to show you how this approach works. And here is my baseline problem: I would like to choose nothing less than biological invasion, mainly because we do have well-developed models there, so we do trust our data, and when we see pictures like this we say, okay, those are not just beautiful pictures, they do show us the spatial spread of invasive species. So we can deal with the data, we trust the data, and then the next step is, okay, let us define a filter. And I would like to stick to a really simple definition of a filter for the monitoring and control routine: I would like to distinguish between continuous-front spatial distributions, the well-known scenario of biological invasion where the invaded area is separated from the non-invaded area, so we have a closed boundary, and distributions like this one, where we have a collection of separate patches. Formally, I have to define the number of objects in the image of my spatial distribution, and an object is defined in turn as any area of nonzero density with a closed boundary. So in this example, in this case I have just a single object, no matter what happens behind the front; in this case I have more than one object. Not a very realistic filter, okay; in reality things are of course much more sophisticated, but we believe it is useful, because control measures will be different if we deal with patchy spatial distributions. And interestingly, because we deal with a model, like I just said, we can generate as much data as we like, and we can actually generate spatial distributions that look very similar to each other, but some of them are patchy and some of them are continuous-front, and therefore we want to see what happens if we corrupt the dataset just slightly: it looks like they could easily be transformed into each other, so can we still distinguish between them? Okay, so how are we going to corrupt our datasets? Two possible ways: one is to pretend that our trap exposure time is shorter than required, in which case we just replace low values of the population density by zero in our dataset; and the other way, and we are more interested in this scenario, is to pretend that our sampling grid is coarser than required, so we lose information about our spatial distribution. And let us see whether or not we can still recognize patchy spatial patterns then. Okay, so the first thing is to replace low values of the population density with zero.
And what is happening here: we have a continuous-front spatial distribution, so formally a single object under my filter. I start with a low value of the threshold density, and then I just cut off all values along this scale that are below the threshold, and I keep increasing this threshold density, and finally my continuous front breaks into five separate patches. And this is the range of data where my filter works well. Good or bad? Actually, this is incredibly good, because my spatial pattern is very robust: this is a really large range. Okay, then I can proceed and learn how the filter depends on parameters; let me just skip that and show the next way to corrupt data. We say, okay, this is our spatial distribution reconstructed over a very fine grid, so we have a lot of information about the spatial pattern; the next one is a not-so-fine grid, reasonably coarse; and finally we have an extremely coarse grid, so something ugly-looking, but formally we now have a transition from a collection of patches to something that is classified under my filter framework as a continuous front. And if I want to figure out what the data range is in this case, it's shown here: we have the number of grid points on my sampling grid, and then, on a logarithmic scale, I just make it smaller and smaller and smaller, and finally my filter does not work. But again, the range is actually huge. So how can I interpret this result? I lose plenty of information about the spatial distribution, but my filter, if I'm happy with the definition of my filter, is still okay: it works. So finally, let me show the case where my filter does not work, no matter how accurate the data are. The case is model-specific, so I have to disclose the model; again, I'm not interested in the details. The model has been previously studied in a paper by Jankovic and Petrovskii, who believe it's a reliable model, relevant to biological invasion in the case of the gypsy moth population. What is important to us is that the topology of the spatial distribution, so we are interested in the function u at any given moment of time, depends on the parameter M in the problem. Let us keep that in mind. Another important observation is, of course, that we solve this problem numerically: we start from some initial condition, and the initial condition is given by a continuous front, which is of course very common for biological invasion, so my population is confined, in this initial condition, to a very local subdomain. So formally I have just one object for my analysis. And because of the initial condition, if my spatial distribution is going to evolve into some patchy pattern, I have to wait for some time; over the transition time, occasional switches between continuous front and patchy may happen. And here we have some examples. What we have here is a more formal description of this situation: we start from the continuous front, this is the number of objects, and then we see those occasional switches between patchy and continuous, so the number of objects is either one or greater than one, and then after the transition time, which is over here, we always have a continuous-front spatial distribution. Again, the initial condition, then some occasional switches, and then we have a patchy spatial distribution.
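Here is a sketch of how the "number of objects" filter and the two corruption procedures can be implemented; the density field is synthetic, and the cutoff values and coarsening factors are arbitrary illustrations rather than the ones used in the study.

```python
import numpy as np
from scipy.ndimage import label

# Synthetic density field: a handful of Gaussian humps on a unit square.
rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.linspace(0, 1, 120), np.linspace(0, 1, 120))
density = sum(np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 0.001)
              for cx, cy in rng.random((5, 2)))

def n_objects(field, cutoff):
    """The filter: number of connected regions where the density exceeds the cutoff."""
    _, count = label(field > cutoff)
    return count

print("objects on the fine grid, tiny cutoff:", n_objects(density, 1e-3))
for cutoff in [0.05, 0.2, 0.5]:           # corruption 1: short exposure time
    print("objects with cutoff", cutoff, ":", n_objects(density, cutoff))
for step in [2, 6, 20]:                   # corruption 2: coarser sampling grid
    print("objects on a grid coarsened by", step, ":",
          n_objects(density[::step, ::step], 1e-3))
```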
Now, let me record the time that is required for the invasive species to spread over the entire domain of interest; I select a rather large domain. Then I show this time as a function of the parameter M, and this is the red curve. I also want to show the transition time as a function of the same parameter M, and as you can see, in many cases this transition time is much shorter: I start with my initial condition, then the type of my spatial distribution gets established over the transition time, and I can analyze it and apply my filter. However, we have a rather narrow range of M, but we do have it, where the transition time is actually much longer than the time required for the invasive species to spread over the entire domain of interest. When that happens, we cannot apply control measures; it's too late. And actually, if my spatial distribution is controlled by a value of M within this range, then I simply cannot apply my filter, because I'm somewhere over here, I don't know: my invasive species is spreading, I try to apply my filter to figure out whether it is patchy or continuous, but I cannot say anything. No matter how much data I have, no matter how accurate my data are, my filter is just useless. So we have to understand the nature of this phenomenon before we can do something about our filter definition. Okay, so in conclusion, we believe that the idea of considering the monitoring and control protocol as a data filter is useful, or may be useful, especially if we can define the data range where the filter works, the applicability range. And our hope is that it's useful because it may help us with minimization of data collection: it's very often that practitioners use coarse sampling grids, they want to make them even coarser, and we have to understand the consequences of such decisions, so perhaps this approach may help us with that task. Okay, so thank you. So I'm thinking about your last example, where you may not know what the ultimate spatial pattern is going to be until after. And I wonder whether it's as bad as you think it is, because, for the purposes of control, if I'm going to go out and measure these things, then within a week or a month I'm going to have to apply my control procedures. I might be misunderstanding, but it feels like you're saying: I don't know what the ultimate spatial pattern is going to be, so I can't apply my filter, so I can't do control. But I guess I don't know why you care what the ultimate spatial pattern is going to be if you're going to be doing control in the near future. Like, if it looks patchy right now, I should apply control in whatever way I would apply control to a patchy system, even though it might suddenly collapse into a wave front in two years. Right. So actually, the message here is not about that particular filter; like I said, the filter itself is perhaps not quite realistic. If you think about biological invasion, people want to be very careful and apply much more complex control measures. The message is that perhaps we now have a new player in the game: the results are not just problem-specific, they are filter-specific, and we have to be very careful, because application of the monitoring and control protocol may require us to make some extra study of the problem before we can say, okay, that's great, we can proceed with this particular type of control routine.
So if you think in more realistic terms, maybe we cannot be so sure about the control measures, because the first thing that comes to mind is, okay, maybe in the case of patchy distributions we want to eradicate patches, but classified by what? Say, because they have a high population density within the patches, or maybe because they are close to that imaginary continuous front line that somehow separates clearly non-invaded areas from invaded ones. And these can be really different. And perhaps if you know that finally it is going to evolve into a continuous front, then yes, our control protocol is different. And also we have to think about the timescale, because you just said, okay, it's over two years, for example, but what we could see in the model (again, it's a model-specific result) is that those occasional switches are much faster, actually, and again we have to take that into account. So I'm not offering something reliable right now, okay, but we just have to take it into account. I want to ask a shamelessly self-interested question: are there time series for this spatial slug data? Yes, we do have time series for the slug data. How are they useful in the monitoring and control routine could be the next question, and the answer is, from our perspective, if slug patches are stable, then it's absolutely fantastic; if they are not stable, I mean, if they just transform themselves in time and appear in different places, then it's not so useful. But if we could make a short-term prediction of where they're going to be next, it would be a lot easier to control them. Again, what is a short-term prediction? We have to define the timescale first. Well, yeah, maybe we're just lazy, we have not analyzed it from this viewpoint, but we do have the time series. Okay. Questions? Okay. Well, thank you. Thank you. Of course. People reminded me I have an auto-reply in my email that says my laptop isn't working; that's actually not true anymore, they did fix it, so I can respond to emails. So if you are sending me papers where mathematics has had an impact on management, please keep doing so; I am getting a good response. That's also a reminder. Yes. And then we're going to finish right now and we'll come back: at 1.30 we'll very quickly break up into the discussion groups, and this is, I guess, the last time that the formal discussion groups will be meeting, to finish before the break.
|
In many ecological problems spatial data are collected to satisfy the requirement that the population spatial distributions can be reconstructed with high accuracy. The situation, however, may be different when reconstruction of spatial distributions is required in the context of monitoring and control (M&C) protocol. In my talk I will argue that the M&C protocol can be thought of as a data filter as its application transforms the original dataset and it often results in a spatial distribution with essentially different properties. That transformation may, in turn, alleviate negative impact of uncertainty on the accuracy of results when spatial distributions are reconstructed from data with measurement errors. While original data are affected by the measurement errors, the filtered data may or may not be affected depending on the filter definition. In some cases, there is no need to ask for more accurate data collection as measurement errors will be `eliminated' by application of the M&C protocol. Meanwhile, it will also be shown in the talk that if inherent uncertainty presents in the model, the M&C data filter may become useless, no matter how accurate the data are. This is a joint work with John Ellis and Wenxin Zhang.
|
10.5446/57613 (DOI)
|
Before I get started on the actual talk: I've had lots of people asking me how steep it was yesterday, so this is a picture showing how steep it was. While I'm at it, here's a picture of a very steep slope; you can see the kind of surface we were dealing with. This is looking up towards Ben, who's up above me; the summit we were trying to get to was up here somewhere, about 350 meters higher than where he is right now. Anyway, it's a fabulous hike, I highly recommend it if you're interested. Now you know how steep it is. You just tilt the camera, right? That's right. So this talk is not one I've ever given before; it arrived as an assignment from Andrew. He sent me an email and said, would you please give a talk on ecological models and data? I wrote him back and I said, are you sure you want me to do that? I don't know anything about statistics. He said, well, I don't really want a statistical talk. So I thought, okay, so what have I done with data? I started thinking about my models from the point of view of the data, and I realized I could rank them in terms of the amount of data I used, and so that's the approach I decided to take for this talk. I've chosen four projects which range from using a lot of data up at the top to not very much down at the bottom, and what I want to do is look at each of these projects from a fairly high-level perspective, just looking at the data I had, how I used it, and then what impact the project had. And I want to talk about impact because it seems to me the only reason we bother using data is because we want to have some sort of impact on the ground; otherwise the data really is kind of a pain. So with that framework, we can get started. The first project, the one that uses a lot of data, was looking at human-bear interaction management in Whistler. Whistler is a resort town like Banff; it's near Vancouver, in the mountains there. And the question was, which management options will have the greatest effect on reducing human-bear interactions in Whistler? Management options include things like deterrence, so you might have a policy where, if a bear is sighted, such-and-such behavior is used, maybe rubber bullets, maybe something else, to get the bear to move away. Another management option, and this is one of the ones we were looking at, would be to sequester all attractants in bear-proof containers. If you've been looking around the area here, you've seen large metal bins that garbage goes in and the bears can't get into; they've got a special handle that the bears can't use. Those are bear-proof, so even though there are attractants inside, the bears aren't interested because they can't get at them. Those are designed for public spaces, but there are a couple of companies in Canada building bins for residential use. So if you're a cottager and you don't want the bears getting access to your garbage, and you're worried about them getting into your garage, which they can do on occasion, then you might want to purchase one of these bins; you attach it to a concrete slab and your garbage can go in there. So this is another management option. Now if you were Whistler and you were looking at implementing some of these options, you'd want to know: how many bins do we need to get? Do we need the whole city to have bins, or can we just cover some fraction of the city?
If it's a fraction, what should the spatial distribution of those bins be? This is the kind of thing we were looking at. Do bins alone work, or do I need to combine them with deterrents, for example? The data we had was GPS data from nine collared bears over a few seasons; we had vegetation maps, and that's what we're looking at here: red is Whistler, that's the urban area, the light green is the downhill skiing runs, and then the rest of the greens are forest, with the white stuff being glaciers. So we had detailed vegetation maps, and we had urban zoning maps; a residential area, for example, would have one kind of garbage in some certain volume, whereas a downtown area with restaurants in it would have a different level of attractants. The GPS data from the bears gave us raw movement information about how the bears move, and when we combined that with the vegetation and urban maps we could get detailed information about the kind of patches the bears would choose to hang out in and move through. With all of that data we built an agent-based model, a big computational model where every movement decision made by the bears in the model was informed by the data. When we were done with that we were able to evaluate the different management strategies. You don't need to know what they are, but this is a list of the different management strategies we looked at, and the horizontal scale tells you how much impact each strategy had in reducing human-bear interactions; the best strategies are up at the top and the ones that are not so good are at the bottom. So what was the impact of this project? For the most part it was a proof-of-concept exercise. We did everything we could to maximize the impact: we used all the data that was available, that we could find; we worked closely with people in Whistler, people with decision-making ability; and we also worked with industry, one of the companies that builds these bear-proof bins. For those of you in Canada, it was partially funded by an Engage grant. So we had all these stakeholders involved and a very detailed model, but I would say that really it was a proof-of-concept exercise showing that in this context you can build a model to ask questions about management activities. For more impact it's going to take time and future work, I think, by other people. You didn't get as far as costs? No, and that would be an important step. So the second project I want to talk about is looking at the dispersal of transgenic pollen. The question here was, how far will bee-transported transgenic pollen disperse? And the particular context was a GMO apple. I was working with David Lane, who's a researcher at the Pacific Agri-Food Research Station in the Okanagan, and he had developed a non-browning apple, which means that when you cut it open it doesn't brown, so it becomes more like a strawberry or a melon than a pear or the other fruits that we're used to going brown. We can talk about the ethics of all that later. So he had developed this apple and was working with a company in the Okanagan that was interested in having this apple approved for planting in Canada. They approached me and wanted an answer to this question, and in order to do this I had to figure out how far a honey bee would disperse: I had to model bee movement, and then from there I was able to model pollen movement. The data I had was not as much as in the first project, but it wasn't bad actually.
From one particular field study that David Lane did over two years, I had percent transgenic seed as a function of distance: in a particular orchard arrangement he had a row of trees containing the transgene and then a whole bunch of other trees nearby, and over two years he gathered seeds at various distances from the transgenic row and obtained this data. The data is the points; the curve through here is what he got by sticking it in Excel and asking what the fit is, and of course Excel gives you exponential decay plus a constant. And of course the constant is not good if you want something like this approved, because it says that no matter how far you go you still have some likelihood of finding that transgene. So we had this data, and then we also found data by Bill Morris from 1993, where he looked at the movement of honey bees in an array of canola plants; the data is the vertical bars, and he gathered it at several times, so we have this dispersal kernel for the honey bees. And then we had other papers where we could get things like flight speed for the bees. With that data I was able to build a diffusion-based model. I haven't shown you the equations because they turn out to be quite long, but I have them later if you want to look at them. This is Bill Morris's data again in the bars. He looked at modeling dispersal as well, and he did a diffusion model: if you look at the dashed lines, that's what you get if you assume the bees only diffuse; if you add a little advection to the bee movement you get the dash-dot line, which is this one. My model was diffusion and advection but in two separate subpopulations: instead of one population that is diffusing and advecting, I divided the bee population into two groups, one group that I called harvesting, so they're actually right in the flowers gathering nectar and pollen, and a group that's hopping from flower to flower, the scouts. If you take these two populations that are doing two different things and allow switching between them, then we had my model, and I got this curve. And we had enough data that we could do an Akaike information criterion comparison to make sure that the extra parameters (because I'd split the population, I now had to have switching terms, which are parameters that the other models didn't require) were worth it: my extra degrees of freedom got me a better fit, and we were able to use the Akaike information criterion to decide that it was actually a good trade-off. So we were able to validate the bee movement model. Once I had bee movement I could then get pollen movement. In order to go from bee movement to pollen movement, I just assumed that the pollen is attached to the bees; all I had to do was add source terms for the pollen, so as flowers open up I get new pollen, and then I added a decay term, which was deposition, so the pollen that's attached to those bees gets deposited onto other flowers, and that's the pollen that gives me seeds, because my data had seeds, right, it wasn't pollen. So adding those two extra bits, so I have my movement model, all I do is add a source term and then a decay term to get my stationary pollen, and then we were able to get the percent transgenic pollen as a function of distance. So we've got our transgenic sources here, and then in the experiment they had several rows of apple trees right next to the transgenic source, and then there's a gap, and then more trees further away.
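A heavily simplified one-dimensional sketch of the kind of two-compartment model described above: a scouting density that diffuses and advects while exchanging with a stationary harvesting density. All parameter values are invented for illustration; the actual model, its geometry and its fitted parameters are in the paper.

```python
import numpy as np

L, nx, T = 100.0, 201, 200.0        # domain length, grid points, total time
dx = L / (nx - 1)
D, v = 1.0, 0.05                    # scout diffusion and drift (made up)
k_sh, k_hs = 0.2, 0.1               # switching rates scout -> harvester and back
dt = 0.2 * dx ** 2 / D              # keeps the explicit scheme stable

x = np.linspace(0.0, L, nx)
scouts = np.exp(-((x - 10.0) / 2.0) ** 2)   # bees start near the source row
harvesters = np.zeros(nx)

for _ in range(int(T / dt)):
    lap = (np.roll(scouts, 1) - 2 * scouts + np.roll(scouts, -1)) / dx ** 2
    grad = (np.roll(scouts, -1) - np.roll(scouts, 1)) / (2 * dx)
    d_s = D * lap - v * grad - k_sh * scouts + k_hs * harvesters
    d_h = k_sh * scouts - k_hs * harvesters
    scouts += dt * d_s
    harvesters += dt * d_h
    scouts[0] = scouts[-1] = 0.0            # crude absorbing boundaries

total = scouts + harvesters
print("centre of mass of the bee distribution:", (x * total).sum() / total.sum())
```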
And this fit here, this is the output of our model. The only parameter left is that deposition rate from the bees; everything else we got from the other model, and the fit's really, really good. So we were pretty excited when we saw that: we've got a model that explains the data. Now that we've got a mechanistic model we can predict what happens in other orchard layouts. So if instead of just one row of transgenic trees you have a whole transgenic orchard, and next door you've got either a trap plant, a single tree by itself, or a conventional orchard of some size, some distance away, the question is how much transgenic presence we're going to find in these other plantings. If all I've got is a big desert, my transgenic orchard and one receptive conventional tree, that tree has to be almost 2,000 feet away in order to have less than 0.9% transgenic presence. So that's the maximum that's allowed: suppose I wanted all my apples to be called organic, then I would need to be below this 0.9%, or even, if I wanted to just call them conventional, I'd need to be below this other level; above that I'd have to label them as transgenic. The 0.1% outcrossing level is the other threshold shown. If on the other hand, instead of a trap plant, I've got a conventional orchard next door, then if I want the transgene presence in that orchard to be below 0.9% I only have to be 60 feet away, because now in that green conventional orchard I've got a whole lot of conventional pollen competing with the transgenic pollen, and so not as much transgenic pollen is going to show up. If my conventional orchard is twice the size, then I only have to be 30 feet away. So this was the kind of prediction we could make with our model once we had it finished. And in fact the paper was included in the application to have the Arctic apple approved; the Arctic apple was approved. I don't know how important my paper was, but I know it was in the application. So that one did have on-the-ground impact. And I hope you're not mad at me if you don't like GMOs. Okay, don't tell my mother I did that. For this next model, so now we're moving down the data scale, I was looking at the lynx and snowshoe hare predator-prey system, and I had a colleague at the University of British Columbia who said that all the models were wrong. I was a little surprised, and apparently, according to her, they were wrong because they didn't get the hare minima right. There were lots of good models out there that could capture the amplitude and the period of the hare-lynx cycle, but not the minimum levels that are seen in the hare population. And the minimum is really important because the hare is a keystone species: if there are half as many hares as the model predicts, that really matters to everything that's trying to survive off the hares. So we were interested in trying to figure out what drives the hare minima to such low levels, and our hypothesis was that it might have something to do with the fact that there's actually more than one specialist predator on the hare: the lynx is the one everybody knows about, but the coyote and the great horned owl are also important. The data that we had: we had the study from Kluane, and there's a whole book, a wonderful book, if you want to work on the boreal forest ecosystem. That was a 10-year study, which meant we had one data point per year.
From that study we could get parameter values; we also had functional responses, which they had plotted themselves for each of those predators. And then the system's been well studied by many people, so we had lots of other papers that we could look at to try and get a sense of what the range was on the parameter values and what the range is on the cycle properties. Now, we didn't have a good time series: we had one 10-year cycle from this Kluane study, but beyond that we just have snippets from other studies, and so we had a good handle on what the period is, what the predator-prey lag is (the lag between the peaks of the prey and the predator, for each predator), and what the maxima and minima are for each species included in the model, but we didn't have a good time series that we could look to. Okay, so our model: I've actually got equations here this time. We used the Leslie-Gower-May formulation. The first equation is your logistic growth and your type II Holling predation, where we have up to three predators depending on how many we include, and then for each specialist predator we have an equation, which is logistic growth of the predator where the predator carrying capacity depends on how much prey there is around. We were interested in understanding whether we could match all of these cycle properties using one predator, two predators, or three: what was the minimum number we needed in order to match all of these cycle properties? When we did a single predator, for example, we used a weighted combination of either the lynx and coyote, or the lynx and great horned owl, or all three; we were trying to find a sensible way to figure out what our parameter ranges were while sticking with these three predators. Okay, so because we don't have a time series, we've got all these cycle properties that we're trying to fit: the hare density maximum and minimum; the predator (either a predator complex or these individual predators) maximum and minimum; and then the lag between the peaks, and the period. The basic model is a single-predator model, where in this first column all our parameters were lynx parameters, in this column they were a weighted combination of lynx and coyote, here a weighted combination of lynx and great horned owl, here a weighted combination of all three. For the two-predator models, one of them was always lynx and the other one was either coyote or great horned owl, and then of course in the third model we had all three. And all we've got is a check mark: did it fit or didn't it? It turns out the period is easy to fit, the lag is easy to fit, but the max and min are a different story altogether. We almost got it if we used a weighted combination of all three predators: we were almost able to fit all the properties, but we couldn't get the hare minima, the hare minima which is the important one. And so it wasn't until we included all three predators that we were able to do that. So what was the impact of this model? Mostly it was a theoretical study. We were hoping, at the very start, that if we could get the right model maybe we could figure out just exactly how much generalist predation there is going on up in the north; we weren't able to do that, we didn't have enough data, but we were able to use the model to try and understand how the different predators were affecting the cycle. So I want you to ignore the first column for now.
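Since the equations are only described in words in this transcript, here is the generic Leslie-Gower (May-type) form consistent with that description, with H the hare density and P_i the predator densities; the notation is generic and the published model may differ in detail.

```latex
\begin{aligned}
\frac{dH}{dt} &= r H\left(1-\frac{H}{K}\right)
                 - \sum_{i=1}^{n} \frac{a_i H P_i}{b_i + H},\\
\frac{dP_i}{dt} &= s_i P_i\left(1-\frac{P_i}{q_i H}\right),
                 \qquad i = 1,\dots,n,\; n \le 3 .
\end{aligned}
```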
In the second one, what we're looking at in the figures on the right-hand side is the proportional effect of each predator: what I'm plotting is the proportion of hares killed that are due to each predator. So if we look at the right here, it's saying that when hare density is high, most of the dead hares are removed by lynx, and then the coyote is down here, and great horned owls aren't taking very many from the system. These are two different parameter sets; within this column here we found several parameter sets, actually, that matched all of the cycle properties, so I'm showing you two that gave very different-looking results. If we look at these, the lynx contribution is declining as hare density is declining, the coyote contribution stays relatively flat until hare density gets quite low and then it drops, and the great horned owl, on the other hand, is going up the entire time: as hare density gets lower, great horned owl predation becomes really important. For the other parameter set we have the same pattern even though the shapes look different: coyote is level and then dropping, lynx is dropping all the time, great horned owl is increasing, as hare density decreases. So what we understood from that is that the avian predators are really, really important for driving the prey down to low levels. This was actually hypothesized by ecologists in other papers, so ours was a quantitative confirmation of it. Okay, so mostly a theoretical paper there. How am I doing for time? You've just got a couple of minutes. That's all I've got? Okay, I think I've got time for this one then. So in this last project I'm looking also at the hares, but instead I'm focusing on the great horned owl; this is for a colleague of mine who thinks these owls are wonderful. So, looking at the great horned owl and hare system: if you look more closely at that Kluane data, they actually gathered it in two seasons instead of just winter, which is often the season used for predation field studies. If you look at all seasons, the great horned owl is clearly a specialist predator, that's a type II curve; but if you separate the data by winter and summer, in the winter it's still a specialist, but in the summer it looks a lot more like a generalist. So we've got this big change in behavior. The research question in this case was: if the predator functional response is generalist in one season and specialist in the other, what behaviors do we find in the predator-prey system? The data we had again was from this Kluane study, so we had some parameter values for growth rates and death rates, we had the functional responses I just showed you, and then a few other papers from which we could glean some notion of what the parameter values were. But I would say this is the least amount of data of all the projects. Here are our model equations. We actually had a model where we switched from one season to the other, but the averaged model worked really well. So this is a weighted sum for each equation: this is my summer prey equation multiplied by the fraction of the year that's summer, and this is my winter prey equation multiplied by the fraction of the year that's winter; same for the predator. So our results, then, were looking at the behavior of the system.
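A minimal sketch of the seasonal-averaging construction: the right-hand side is a weighted sum of a winter system, in which the owl acts as a specialist (Holling type II response), and a summer system, in which it behaves more like a generalist (type III). The functional forms and parameter values are invented for illustration and are not the fitted model.

```python
import numpy as np
from scipy.integrate import solve_ivp

f_summer = 0.5                   # fraction of the year that is summer
r, K = 1.0, 2.0                  # hare growth rate and carrying capacity
a, b = 1.0, 0.5                  # predation rate and half-saturation constant
c, d = 0.5, 0.25                 # conversion efficiency and owl loss rate

def season_rhs(H, P, type_III):
    pred = a * H ** 2 / (b ** 2 + H ** 2) if type_III else a * H / (b + H)
    return r * H * (1.0 - H / K) - pred * P, P * (c * pred - d)

def averaged(t, z):
    H, P = z
    dHw, dPw = season_rhs(H, P, type_III=False)    # winter: specialist response
    dHs, dPs = season_rhs(H, P, type_III=True)     # summer: generalist-like response
    return [f_summer * dHs + (1.0 - f_summer) * dHw,
            f_summer * dPs + (1.0 - f_summer) * dPw]

sol = solve_ivp(averaged, (0.0, 300.0), [1.0, 0.3], rtol=1e-8, atol=1e-10)
print("hare density range over the run:", round(sol.y[0].min(), 3),
      "to", round(sol.y[0].max(), 3))
```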
We found the more traditional behavior that you expect from these predator-prey models, where we have a steady state and then, through a bifurcation, limit cycles, but then over here it goes back to having a steady state again, whereas the classical models don't. I forgot to point out that we allowed the predator to have other food, because the owl is eating lots of other things, so we assume it's got other food as well as the hares. In the cases where that other food is enough that it doesn't require hares full time, we get these new bifurcation diagrams. In this one we found extinction: we could get prey extinction with predator survival, and here we have bistability with nested limit cycles. There are all sorts of interesting behaviors that don't appear in the classical canon for the standard models. So what was the impact of this? It was a fun study theoretically. We showed that averaging was successful, and we've got this call for researchers to please gather data that is seasonal, because it makes a difference. In the best of all possible worlds I would hope that we can see more of this kind of data being gathered; it's too early to know if that's going to happen, but that's where that project is at. Okay, so thinking in general terms, looking back on these projects, I was trying to think about why there was success, and as I said at the beginning, in terms of on-the-ground impact it's quite different across those models. If I think about this as a topic space, the things that my data can tell me about and the things that my model can tell me about don't necessarily line up; they can be different. So the overlap of the model and data, and the amount of data, are two different axes that I might want to think about, and I'm going to have high success if I've got good overlap and enough data. Maybe if I've got enough data the overlap doesn't need to be amazing, but it should be fairly high. As the amount of data and the overlap go down, the success of the model is going to decrease. That's only part of the picture, of course, but it's what jumped out at me after I looked at all of these models. I need to thank all the people who worked with me; there were lots of different collaborators, and I couldn't find pictures of all of them, so that's just a picture of my students hanging out with me. I think that's my last slide, so thank you very much. Thank you, we really appreciated the talk, and it was good to hear about the data. A couple of questions. The first is about the first story: you said you were able to estimate essentially everything from data, so can you tell us what data you had and which parameters came from it? And the second one is about the table where you had the distances: it looks like there's a very sharp decrease at 0.9 but not much of a decrease at 0.1, and I was wondering if you have an intuition for why that happened. You mean the 0.1 and the 0.9 cases? Yeah. Okay, I'll do that one first because I think it's probably the easiest. Yes, the drop is smaller; you have to go so far away at 0.1, and it drops a lot faster with 0.9. I'm not sure I have a good answer for that, except that of course once you get a long, long way out it takes forever for that tail to decrease; the exponential rate is the same, but a rate applied to a very small number is a very small decrease.
So I would assume it has something to do with that. For your other question: it was my students who did most of the nitty-gritty work on figuring out movement, but with just the raw collar data you can compute turning angles, you can compute step sizes, and you can compute the rates at which those things happen. So all of that was done. Then you overlay that information on these vegetation maps and try to figure out which patches, which type of patch, they're visiting more often. For example, right now they're saying it's berry season, and there's a big difference between what bears do pre-berry and during berry. Pre-berry they'd be hanging out in these green areas eating grass, and during berry they'd be hanging out in the valley bottoms, because that's where a lot of the berries are in the later season. So we put those two pieces of information together: we wanted to figure out which types of vegetation patch the bears were occupying or visiting more often, and which types they would just pass through. Were there any areas they were avoiding, for example? You wouldn't find much up here. Did that answer your question? I was also asking about the perturbations, the garbage versus no garbage, something like that; was there a parameter that relates to those perturbations, and how do those work? Okay, so those had to be assumptions, right? We had a lot of information about the bears that might visit a site but would get no food reward, so that was part of the model as well: whether or not they got a food reward from visiting a site. There'd be food rewards in vegetation sites that had berries, for example, and there'd be a food reward in an urban site where they managed to access the garbage. And then we had a scale that was based roughly on the conservation officers' scale for how a bear behaves, what it does and what their response needs to be, so we used that. But that part of the model we didn't have a lot of time for. Okay. So you've raised a very interesting issue with some of your comments, for instance about the hares and what you were trying to fit. The question of how you should measure, in some sense, the fit of a model to an ecological phenomenon is not at all clear-cut mathematically. How do you think we should start approaching that? Can you give some general ideas? It's as much a comment as a question. Yeah, as you said, once you've said, okay, here's AIC, we can do that, but that's not always the right thing. No. I don't know when AIC is wrong; I haven't used it enough to back that up. No, but you might have had something which was a better fit in a statistical sense but didn't capture the piece of it that you really wanted. Right, I think it's an interesting and tough question. And we didn't have enough data to try AIC; it would have been an interesting conundrum if we had ended up there, because it might not pick out the best model for the question that's most important. And I wouldn't say that the model we ended up with was an accurate model of the lynx, hare, coyote and great horned owl system, but it did lead to some insights.
And so the data at least guided us to something which seemed to make sense and seemed to confirm what other researchers were finding. I don't know about general comments; I think it's something to be aware of, basically, that the way to evaluate the model against the data will depend on the kind of data. What software was used for the model? We started in NetLogo and ended up with MATLAB. MATLAB is clunky, but it's better at the calculations, better than NetLogo. NetLogo is good for the agent-based moving things around, but if you want to calculate things it can be difficult. There are lots of things you can do, but simple things, like if you wanted to do a weighted selection, where you look at all the cells the bear could move into, give each one a value, create a cumulative probability distribution and select one, turned out to be really hard in NetLogo. Excellent. I have a question about how you write the logistic equation; could you go to slide 12, the second group of equations? I would do it differently: you put the prey in as the predator's carrying capacity rather than having the predation as a linear conversion term. Do you have a reason for doing it like that, or is it just the form you're more used to? Yes, so you've got, okay, this is the generalist predation and then it shows up here. Yeah, and then you could also put the self-competition term into the equation in addition. Why do you do it like this? I'm wondering about the Leslie-Gower-May model here. So, does somebody have a good answer to that? I thought you were raising your hand to give us the answer. No. Okay, so the alternative is more the Rosenzweig-MacArthur framework that people are very familiar with. Yeah, but there this question doesn't come up, because of the way the nonlinear density dependence of the predation enters. Yeah, I'm going to answer your question, I'm getting there, I promise. So that one is called a biomass-accounting model, where you remove prey from one equation and they turn into predators in the other equation. The Leslie-Gower-May model is what's called a laissez-faire model, and there are other models in this category where, instead of removing prey here and plugging them back in there, I am saying there's a predator carrying capacity that is affected by the prey that are around. I chose that because, pretty early in my research career, I came across a paper by Hanski and Korpimäki where they were looking at the vole system, and this is what they used; they were also interested in questions similar to ours, so I just wanted to use that. I was lucky, because cycles are a lot easier to get in this model with multiple predators than in Rosenzweig-MacArthur; with Rosenzweig-MacArthur it's really hard to get cycles with more than one predator. The other issue is that with Rosenzweig-MacArthur without self-limitation, the predators are specialists, so they couldn't coexist unless you are in a finely tuned coexistence case. Yeah. Although, I was wondering, are they real specialists, these predators? I have a hard time envisioning that they're just eating snowshoe hares. No, they are eating other things, but the lynx can't survive without the snowshoe hare.
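To make the distinction at the end of that exchange concrete, here is a schematic, hedged comparison of the two predator equations being discussed; the symbols are generic and not the parameterization used in the talk. A biomass-accounting (Rosenzweig-MacArthur-style) predator equation converts consumed prey into predators,

$$\frac{dP}{dt} = \frac{e\,aNP}{1+ahN} - dP,$$

whereas a laissez-faire (Leslie-Gower-May-style) predator equation lets prey density set only the predator's carrying capacity,

$$\frac{dP}{dt} = sP\left(1-\frac{qP}{N}\right).$$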
The other two probably could, but up in the boreal forest there isn't a whole lot else to eat in the wintertime; lots of stuff is under the snow, and so they do rely on the hare pretty heavily. So I'm not going to argue that this model is better than the other one, or that one of them is wrong and one of them is right. But there are behaviors that we find out in the wild, for example cycles of all of these predators, that the Rosenzweig-MacArthur model can't produce. So there are some failings of the Rosenzweig-MacArthur model, and there are some things about this model that make sense. I don't know; I just like to have something that has a mechanism in it already. Yeah, so Turchin's explanation for this is that it means territory sizes are getting bigger as prey decline; that's how he reads it. That seems very strange. Yeah, I just haven't heard a convincing argument for it. Okay, I don't know how long you'll let me keep talking about this. Sorry, I had a little question: for your seasonal specialist you said that if you average the functional response you get something that looks like a specialist, but then you said you can use the averaged model. I know, yeah, that's interesting. Okay, so just looking at the data, it looks like that's what you get: you have a specialist. But if we use this averaged model, and actually this picture is a nice one, this is my summer season getting longer. The wiggles are because I've got my summer model and my winter model, summer then winter, so the population wiggles as it follows the average trend. If we look at the top here, this is wiggling towards a steady state, and it looks just like a Rosenzweig-MacArthur nullcline system; those are the two nullclines. Now, as my summer season gets longer, what's happening is that the summer generalist equation, this one, is becoming more important, and so the prey nullcline starts changing shape: it's getting flatter at the top. As the summer gets even longer, it develops that peak in it, which is much closer to the generalist model. So if we just used the averaged functional response, which looks like a specialist, we wouldn't get this summer generalist behavior; but with the averaged model we do. I saw from your equation that you have the average in the two terms. We are, yeah, we're putting it together. But the timing depends on the order; I mean, if you have two terms, it's a combination, right? Yeah, it's a weighted sum, so you could write one functional response, the averaged one. I think his point is that he would use the actual time intervals, and that's not what you're doing. We ran both. Yeah, that was the question. Okay, yes, we ran both. We ran a continuous system with switching, where we ran the summer model in the summer and the winter model in the winter; that's how I get my wiggly curves in that other figure. So where is the switch in the simulations? Okay, so these trajectories up here are from the ODEs: I run my summer ODE from zero to TS and then I run my winter ODE from TS to one, and then I repeat, summer ODE, then winter ODE. That's how I get this wiggly behavior. Now, if you were to draw a curve through the middle of that, you would get a smooth curve: the solution to this averaged model.
Which would be... So, we have a couple of questions that I wanted to get to, as the bell is tolling, so maybe we can pause this one. Linda and Matthew, we'll get to your questions and then we'll wrap it up. Oh yes, I'm going to ask one question. I've seen that in plant models, where the insect is feeding on the plant and of course the food source is the plant, so in that case the plant is the carrying capacity. Okay, thank you. I have another question about the Leslie-Gower-May model, sorry to keep coming back to it. No, no, that's fine. It seems strange to me that the density dependence of the predators is always conditioned by the prey density, but they're all independent: they each have a carrying capacity set by the prey alone, and they're not contributing to each other's density dependence. Yeah, that's this q_i, and it varies per predator. I mean, I've certainly seen models where, if you've got multiple predators, instead of a whole bunch of independent terms you put them together so that the predators are interfering with each other; we could have done that. But I think your question was why there isn't a sum here. Yeah, in the density dependence. I think if I had it in one place I'd probably want to have it in the other too. No, I'm certainly not going to pretend that this would be the best way to model that. What I'm trying to illustrate is what I did with this data, and I managed to learn something using my model, but it's certainly an open question. I think I'll have to pause that question for afterwards; we've already gone over. Thank you. Thank you.
|
In order for models to make relevant predictions about real ecological systems, it is helpful to have data to guide the selection of, for example, parameter values, interaction functions, and dispersal kernels. For many ecological systems, however, data is sparse. Nonetheless, these data can still lead to the development of models that give rise to theoretical predictions with real relevance. Several examples will be presented and discussed in this talk.
|
10.5446/57596 (DOI)
|
to pay my gratitude and respect to Elders past and present, and to all the lands we work and live on today. A few words about myself: I am a computer scientist with a specialization in spatial information. During my scientific career I developed information models and techniques to improve environmental data discovery and interoperability. After my PhD I transitioned from science to the operation of research data infrastructure, so that I can develop practical solutions that have cross-applicability and also translate these solutions into practice. For the past 13 years I have had the opportunity to work at research data infrastructures in different environmental disciplines, including TERENO, CSIRO Mineral Resources, PANGAEA and TERN. At TERN I managed the data services of the infrastructure and also developed data procedures and policies to meet the TERN strategies and stakeholder needs. Some background about FAIR: I think everybody knows the FAIR guiding principles, a set of high-level principles to maximize the use and reuse of digital resources, an example of a digital resource being a research data set. Instead of elaborating each of the principles, what I'm going to do in the next slide is give you some important concepts behind these principles, which I extracted from the well-known paper by Mark Wilkinson. If you work in the research infrastructure space, you can see that some of these principles are not new, for example PIDs, machine-readable vocabularies, provenance information, or assigning a license to a data set. So these principles are not new, but the FAIR guiding principles are important because now we have different data stakeholders, such as publishers, funders, data service providers and certification bodies, who come together and collectively endorse these principles as an important guideline for those who want to improve data reusability. The principles primarily focus on data, but they can also be applied to other digital objects, for example publications, software or vocabularies. One thing we should remember about FAIR is that it puts emphasis on machine-based data discovery and accessibility as well as human access. And there are no strict rules on how FAIR should be implemented: the principles can be adopted as a whole, in part or incrementally, depending on the data provider's resources and environment. FAIRsFAIR is a three-year H2020 project which ended this year in February. The goal of this project is to build practical solutions, technical as well as non-technical, to support the application of the principles across the research data life cycle. Within this project there is a work package, WP4, which focuses on certification, and within this work package I led the task on FAIR data assessment. The goal of this task is to support trustworthy data repositories in improving the FAIRness of their data sets through a programmatic approach. To do this, we developed our solution as a combination of three main components. The first is the data object assessment metrics. The second is the automated tool that allows users to evaluate the metrics. And the third is the consultation process with the pilot repositories. All of these combined help in improving the FAIRness of the data sets of the repositories that participated in the studies. In the next slides I'm going to talk about each of these components in detail. So let's start with the metrics: we developed 17 core metrics.
These are what we refer to as domain-agnostic assessment criteria, centered on generally applicable metadata and data characteristics. The latest version is 0.5. We developed these metrics over several iterations through several mechanisms: we first came up with a draft based on the work done by the RDA FAIR Data Maturity Model, and then we improved the metrics based on a focus group study, a public consultation and feedback from the data repositories. So we went through several iterations to improve and further develop the metrics. As you know, the FAIR principles are high-level guidelines, so if we want to evaluate data sets against these guidelines objectively, we need to elaborate them. For that we follow a hierarchical model: first we clarify each principle with one or more metrics, and then, because a metric can be tested in many different ways, each metric can have one or more practical tests. As you can see in this example, the F1 principle says that metadata and data are assigned globally unique and persistent identifiers. For this principle, one of the metrics we developed is that data is assigned a persistent identifier, and the practical test indicates how we can test this metric: the specified identifier resolves to a landing page of the data object. This hierarchical model has several benefits. For example, you can incorporate new practical tests or new metrics based on data provider requirements, and you can also easily aggregate the scoring of the practical tests at the metric level or at the principle level. If you are interested in a quick summary of all the principles, metrics and related tests, you can refer to the journal article we published last year; if you would like more detailed information about each of the metrics and the practical tests developed, you can have a look at the metric specification. The metric specification provides not only the description of the metric and the tests, but also some background information on why we developed such a metric, the constraints and limitations of the metric, and related resources. Now I'm going to talk about the second component of our approach, the automated tool known as F-UJI. F-UJI supports data assessment based on a REST API. We published F-UJI as open source; it is available on GitHub for everybody to use and further develop. In addition, we also developed a front end for F-UJI, which allows users to easily test their data sets; the front end talks to the F-UJI REST API to do the assessment. F-UJI makes use of several resources to do the assessment. First, it uses the metadata of the data set, which includes the metadata embedded in the landing page; it also retrieves metadata from external services based on content negotiation. It also uses the actual data files associated with the data set; we analyze the data files using Apache Tika to get some of the information needed for the assessment. It also uses the repository context: for this we use re3data and other metadata provision protocols, for example OAI-PMH or CSW, to find out which metadata standards are supported by the repository. And it also uses some auxiliary information from other FAIR-enabling services, which I'm going to talk about in detail in the coming slides.
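Going back to the principle-metric-test hierarchy for a moment, here is a minimal, hypothetical sketch of how it can be represented and how test results roll up into metric- and principle-level scores. The identifiers and the equal-weight aggregation are invented for illustration and are not the official FAIRsFAIR metric IDs or scoring rules.

```python
# Hypothetical principle -> metric -> practical test hierarchy with simple roll-up scoring.
catalog = {
    "F1": {                                      # FAIR principle
        "metric-data-pid": {                     # metric (invented identifier)
            "tests": {
                "pid-is-persistent": False,      # outcome of one practical test
                "pid-resolves-to-landing-page": True,
            }
        }
    }
}

def metric_score(metric):
    """Fraction of practical tests passed for a single metric."""
    results = list(metric["tests"].values())
    return sum(results) / len(results)

def principle_score(principle):
    """Unweighted mean of the metric scores under one principle."""
    return sum(metric_score(m) for m in principle.values()) / len(principle)

for principle_id, principle in catalog.items():
    print(principle_id, round(principle_score(principle), 2))   # e.g. "F1 0.5"
```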
So this is the high-level flow of how F-UJI works. This is very simplified; F-UJI actually does many things. To give you some idea: in order to use F-UJI, what you need is an identifier, which can be the URL or the PID of the object. Optionally, you can also provide the metadata access point of the object if it is available, for example an OAI-PMH endpoint. F-UJI uses this information to extract the metadata from the landing page through content negotiation, typed links and several other ways, and it also uses the information from the service endpoint provided, for example to get the list of supported metadata standards. If it is a persistent identifier, F-UJI does more: it tries to get information from the PID provider, for example DataCite, and from DataCite you can also get metadata about the object. In the end, F-UJI collates all the metadata information, parses it and then starts the evaluation. At the repository level, which is on the right side, we want to know the repository information, for example what kind of metadata standards it supports. For this we use services such as re3data, because if you use a DOI you can get the client ID from DataCite, and from this client ID you can hit the re3data API to get more repository information. Now about the auxiliary information: we use several external services, and you can categorize them into two groups. The first group we call lookup services; lookup services are mostly used to validate the descriptions included in the metadata. For example, if the metadata has a field with a license value, we cross-check the SPDX license list to check whether the license is open or not. The other kind of service is used to get the contextual information about the repository, which I mentioned before, for example the metadata standards supported by a repository, via re3data and DataCite. In addition to that, when we started development there were not many machine-readable services available to support fully automated assessment, for example for file formats. So we also manually created lists, for example of file formats suitable for long-term storage, from several resources that include ISO recommendations on digital formats for long-term storage and preferred file format lists, to support the assessment. And F-UJI in action: this is an example of how F-UJI works. If you go to f-uji.net, you can add the data set you want to assess; it can be a PID or a URL. When you start the assessment, it gives you a summary of the FAIR aspects of the data set, and you can also get detailed information about each practical test and how these tests were conducted. Now I'm going to talk about the third component, which is the consultation process. The goal of our task has always been to help data repositories improve through FAIR data assessment. What we did is, while we developed F-UJI and the metrics, we worked with five pilot repositories from different disciplines and went through several iterations of consultation. We started with the repositories providing information about themselves, for example what kind of data sets they have and which data sets they want to improve. Then, based on this information, we used F-UJI to evaluate those data sets, and we provided recommendations back to the pilot repositories.
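Returning to the metadata-harvesting step in that flow, here is a small, hedged sketch of PID-based content negotiation using the requests library. Asking a DOI resolver for DataCite JSON or schema.org JSON-LD is a documented pattern, but the media types a given PID provider or repository actually serves vary, so treat the accept headers and response handling below as assumptions to verify rather than a description of F-UJI's internals.

```python
import requests

def negotiate_metadata(pid_url: str):
    """Try to retrieve machine-readable metadata for a PID via content negotiation."""
    # Media types commonly offered by DOI registration agencies and repositories.
    candidate_types = (
        "application/vnd.datacite.datacite+json",   # DataCite metadata
        "application/ld+json",                      # schema.org JSON-LD
        "application/vnd.citationstyles.csl+json",  # CSL JSON
    )
    for accept in candidate_types:
        resp = requests.get(pid_url, headers={"Accept": accept},
                            allow_redirects=True, timeout=30)
        if resp.ok and "json" in resp.headers.get("Content-Type", ""):
            return accept, resp.json()
    return None, None

# Example with a DOI that appears in this document.
fmt, metadata = negotiate_metadata("https://doi.org/10.5446/57596")
print(fmt, list(metadata or {})[:5])
```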
And the pilot repositories, based on those recommendations, improved their FAIR services, and then we ran a second iteration to test again; we repeated the assessment. At the same time, working with the pilot repositories also helped us improve our side, in particular when implementing the practical tests associated with the metrics. So it was really an iterative consultancy process with the pilot repositories. These are the repositories involved during the first stage; as you can see, they deal with different data domains. And these are the results of the assessment before and after they improved their FAIR services. Basically there are four small boxes, each representing the FAIR score by principle, F, A, I and R, and the further to the right the bars are, the better. As you can see, in comparison to the first assessment, after the repositories improved their FAIR services the scores of these data sets increased. If you want more information about the assessment and the results, you can refer to the paper, whose full citation is included at the end of the slides. Some information about the updates: as you know, F-UJI is open source. When we started there were only two contributors, myself and Robert, but now we have 12 contributors. In fact, someone outside the project even developed a client for F-UJI, which was totally unexpected, a big surprise for us. We have also done many assessments of data sets, nearly 10,000. In addition to the five pilot repositories, Robert also tested other repositories during the following cycle of the project. We have published two journal articles about the tool, and we have given several invited talks. Now I'm going to talk about our lessons learned in developing the metrics and the tool. The first is about the challenges in translating the principles into metrics. As you know, automated FAIR assessment can save effort, but not all components of the FAIR principles can be translated or evaluated automatically. For example, some aspects like rich metadata and accurate or relevant attributes still require some human mediation. Also, some principles in FAIR need further elaboration, so when we implement these principles we always need to take current data practices into account. For example, F1 makes the oversimplified assumption of registering both the data and the metadata object with a persistent identifier, and this is not in line with current PID practice: if you look at the DataCite landing page best practices, the identifier should resolve to a landing page, and that landing page should include the metadata. I2 is about FAIR vocabularies, and when we started the work, what a FAIR vocabulary is was still a work in progress. A2 states that the metadata should be preserved even when the data is no longer available, but one can only test this when the data set has been deleted or replaced. Also, preservation of metadata should really be tested at the repository level, because a repository does not implement a different preservation strategy for each individual data set; normally it has a preservation strategy applicable to most of its data sets, or per collection. Therefore we think A2 should be addressed not at the data object level but at the repository level.
Our approach, instead of adhering to a literal interpretation of the principles, was to take several pragmatic decisions when developing the metrics and the tool. First, we decided to focus on generally applicable data and metadata characteristics until the domain communities agree on which metrics are applicable to their domain requirements. Second, we built the metrics on existing work, and the practical tests are designed based on existing data and web practices. Third, as I mentioned before, we used the hierarchical model, and with this hierarchical model one can easily incorporate new metrics and also expand the existing practical tests. Finally, domain-specific metric development in the context of F-UJI will be taken further by Robert Huber as part of the upcoming Horizon Europe project FAIR-IMPACT. Another important thing when it comes to FAIR data assessment is considering the level, or granularity, of the data object, because it influences the assessment results. For example, consider three repositories. Repository A has experiments; each experiment has data groups, and each data group has data sets. In repository B you have collections, and one collection may have several data series. In the case of repository C, the way they package the data is that one collection may have several data sets, and each data set may have one or many files. All of these are considered objects; they are just packaged differently. This means that if you evaluate the FAIRness of an object at a higher, coarser level, the result will not be the same as for an object at the finer level. For example, here is a data set from one of these repositories: the data set has a PID, a DOI, and it contains several data series, each of which also has its own DOI. When you evaluate the data set on the left, the result will not be the same as the result for the data series, even though both are identified with a PID, because the data series has more information, for example the actual data files. That's why I think it is important, when you do programmatic assessment, to know at which level of the object you are assessing. The second point is about restricted data. One of the big questions around automated FAIR assessment is how we can perform an assessment of restricted data programmatically in a systematic way. Currently, the way we handle it in our tool is that we communicate to the user that the data is not accessible, and then we stop the assessment there. I think further research is needed on how we can negotiate secure access and perform automated assessment over restricted data. Performance matters, too; performance here mainly refers to the time taken to assess the data sets, and we took some pragmatic decisions. For example, this data set has more than 1,000 files. Instead of evaluating all the files, we choose random files, and the number of random files can be pre-configured in F-UJI, because the data provider packages all these files in a similar way; so instead of evaluating all the files, we evaluate a random sample. In addition to that, we also cache all the information retrieved from the external services, so that we don't hit those services every time we run an assessment.
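As a small, hypothetical illustration of those two performance tactics, the sketch below samples a bounded number of files instead of checking all of them and memoises one external lookup (the SPDX license list). The function names and the sample size are invented for the example and are not F-UJI's actual configuration.

```python
import random
from functools import lru_cache

import requests

MAX_FILES_TO_CHECK = 5          # assumed, pre-configurable sample size

def sample_files(file_urls, k=MAX_FILES_TO_CHECK):
    """Pick a random subset of a large, homogeneously packaged file list."""
    return random.sample(list(file_urls), min(k, len(file_urls)))

@lru_cache(maxsize=1)
def spdx_license_ids():
    """Fetch the SPDX license list once and reuse it across assessments."""
    resp = requests.get("https://spdx.org/licenses/licenses.json", timeout=30)
    resp.raise_for_status()
    return frozenset(lic["licenseId"] for lic in resp.json()["licenses"])

def license_is_recognised(license_id: str) -> bool:
    """Cheap lookup against the cached SPDX identifiers."""
    return license_id in spdx_license_ids()
```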
At the beginning of my talk I mentioned that FAIR is not new for data repositories or research infrastructures. Therefore the assessment should take into account how the repository operates: what are the common data practices the repository follows, and what are the common technologies repositories usually use for data curation and publication. That's why I think, for this kind of work, it is important to keep the data service providers, including data repositories and research infrastructures, in the loop. Objects live in repositories, and in our experience the assessment should go beyond the object itself, even though FAIR is all about objects. So we also need requirements to assess the data repository. The requirements at the repository level are important because you can use the FAIR metrics to assess whether the repository guarantees the provision of the object, while the requirements at the repository level, for example CoreTrustSeal, are essential to ensure the long-term preservation of the objects. I want to conclude with my title: FAIR, is it assessment or improvement? We should assess the data sets to improve their FAIRness, not the other way around. And improvement is an ongoing effort: it needs a lot of resources, it should take into account the repository context, and it should build on existing data and web standards and practices. A lot of the work, when we started, focused on building metrics, scoring, badging and recommendations, but I think now it is time to focus on the outcome. The big picture, the reason we are doing this work, is that we want to improve the FAIRness of the data, which can help improve the reusability of the data. With that in mind, it is good to have the outputs, because the outcome won't happen without the outputs, but what should drive the work is the outcome, not the output. Thank you.
|
Funders, publishers, and data service providers have strongly endorsed applying FAIR principles to maximize the reuse of research data since the principles were published in 2016. Much of existing work on FAIR assessment focuses on "what" needs to be measured, which led to the development of assessment metrics. However, the questions of "how" to measure the FAIRness of the research data and use the assessment results to improve data reuse haven't been fully demonstrated in practice yet. This presentation will cover some insights on these aspects derived from the development of a practical solution (F-UJI) to measure the progress of FAIR aspects of data programmatically.
|
10.5446/57516 (DOI)
|
I'm from a small plant science institute south of Berlin; you may have heard of it, the Leibniz Institute of Vegetable and Ornamental Crops. Today I want to talk about a plant topic: the transport efficiency of heat and mass through the boundary layer of elliptically shaped leaves. To give you a short overview of the topic, in case not everybody is familiar with this problem: we have here a very abstract scheme of a plant with two leaves, an underlying soil and an overlying atmosphere, and different energy and mass fluxes. We have the short-wave radiation fluxes from the sun and the sky, and long-wave radiation fluxes arising from the sky itself, from the leaves and from the soil. Those radiation fluxes are the drivers of mass and heat flow from the leaves, and it is a fundamental problem in plant science to understand the mass and heat flow from leaves, because leaf temperature, transpiration and so on are determined by the leaf energy balance, shown here briefly: the balance between the absorbed short-wave and long-wave radiation fluxes and the outgoing fluxes of latent heat, sensible heat and emitted long-wave radiation. This is the basic equation to start from to describe transpiration, leaf temperature, dew and so on. In my talk we focus on the H term, the sensible heat flow, that is the heat flow itself. A bit more on why we focus on this problem: heat and mass flow, here of carbon dioxide and water vapor, are fundamental to proper plant function and survival in natural environments and also in greenhouses. Especially if we move towards quantification, the equations describing heat and mass flow are a central part of computation modules for leaf photosynthesis, transpiration, dew, frost, or even thermographic vegetation analysis. And in those models and model systems I found that the effect of leaf shape and leaf inclination is somehow neglected or under-represented in describing the heat and mass flow. So I had a closer look at heat flow under the conditions of free convection. What is meant by free convection, since not everybody might be familiar with the term? We usually make the distinction between forced convection, which is driven by an air stream generated externally to the leaf scale, say by wind or by movement of the plant, and, on the other side, natural or free convection, where density gradients close to the leaf surface generate a fluid flow close to the leaf. It is known that in both cases, forced convection and natural or free convection, there is a velocity gradient close to the surface; this velocity gradient induces a shear, and the shear drives the transport of heat and mass, of molecules and scalars and so on. We want to focus now on the latter process of natural or free convection, driven by temperature gradients between the leaf surface and the bulk air volume.
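For reference, the leaf energy balance mentioned above can be sketched as follows; the notation is the generic biometeorological one rather than the speaker's slide, so treat the exact symbols as assumptions:

$$\alpha_S R_S + \alpha_L R_L \;=\; \lambda E + H + \varepsilon_L \sigma T_L^4,$$

with absorbed short-wave and long-wave radiation on the left and the latent heat flux $\lambda E$, the sensible heat flux $H$ and the emitted long-wave radiation $\varepsilon_L \sigma T_L^4$ on the right; the talk concentrates on the sensible heat term $H$.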
Very frequently this process is quantified by simple correlations, so-called Nusselt numbers, which are non-dimensional numbers, or Sherwood numbers for mass flow. A simplified version of such a correlation would be the expression for the mean heat flow per area: we have a temperature gradient, we have an angle of our surface between the surface normal and the vertical, and we have some characteristic length. These are the main parameters: the temperature gradient, the characteristic length and the angle of the surface. We want to focus on elliptical leaves because this leaf shape hasn't really been treated; usually only very simple shapes are considered, such as circular or rectangular shapes. So here we look at elliptical shapes, and the surface may be inclined from the vertical. You see that the characteristic length is a kind of mean: on a vertical surface we would have such streamlines of air flow, and the characteristic length is a kind of mean streamline length of this air flow. Of course, if our leaf is rotated somehow, those streamlines need not be aligned with the longest axis of the ellipse; they can have any direction over the surface. Somewhat different is the situation if we have horizontal leaves: then, for heated leaves, the air flow comes from below and leaves the surface in a plume near the center of the leaf. Some open questions for this problem would be: can we describe this mean streamline length for different scenarios, different aspect ratios of ellipses and different angles, including those horizontal scenarios? How accurate is such a simple correlation compared to wind tunnel experiments or, alternatively as done here, computational fluid dynamics simulations? And what about the different leaf sides: do we maybe need different relations for the upper leaf side and for the lower leaf side? This may have an influence on dew occurrence on the upper or lower leaf side. So this talk is mostly about these open questions. I had the following objectives: to find analytical expressions for the characteristic length, to implement a virtual wind tunnel using an open-source computational fluid dynamics tool, and to use this virtual wind tunnel to generate data sets to calibrate new natural convection correlations for elliptically shaped leaves. Coming to the first point, those relations were found. The first step was to find the angle of the steepest ascent, the angle with the smaller axis of the ellipse; the steepest ascent is denoted by the vector S, and depending on the rotation angles of the ellipse around the y-axis, this steepest ascent angle can be derived. Having this angle, the mean streamline length over the ellipse can be estimated; the equation is given here. This is not too hard and relatively simple, as shown here: I used the equation for the ellipse and the equation for the line, and setting them equal you get equations for the cross points, the intersections of the line and the ellipse. The distance between both cross points can easily be formulated, and then you consider the mean distance between the origin of the ellipse and the farthest point, here denoted by the intercept with the y-axis. You integrate this distance, or average it out, and you get an analytic expression for the characteristic length of the ellipse given the angle of the steepest ascent, which of course defines the slope of the line. So that is done.
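As an illustration of that characteristic length, here is a small numerical sketch that averages the chord lengths of an ellipse along a given flow direction; it is not the paper's closed-form expression, just a hypothetical way to reproduce the idea of a mean streamline length for a chosen direction of steepest ascent.

```python
import numpy as np

def mean_chord_length(a, b, angle_deg, n=10000):
    """Mean chord length of the ellipse x^2/a^2 + y^2/b^2 = 1 over parallel
    lines whose direction makes angle_deg with the x-axis (illustrative only)."""
    phi = np.radians(angle_deg)
    d = np.array([np.cos(phi), np.sin(phi)])        # chord (flow) direction
    n_perp = np.array([-np.sin(phi), np.cos(phi)])  # offset direction
    # Half-width of the ellipse projected onto the offset direction.
    w = np.sqrt((a * n_perp[0]) ** 2 + (b * n_perp[1]) ** 2)
    offsets = np.linspace(-w, w, n, endpoint=False) + w / n
    lengths = []
    for c in offsets:
        p0 = c * n_perp                             # a point on the current line
        # Substitute p0 + t*d into the ellipse equation -> quadratic in t.
        A = (d[0] / a) ** 2 + (d[1] / b) ** 2
        B = 2 * (p0[0] * d[0] / a**2 + p0[1] * d[1] / b**2)
        C = (p0[0] / a) ** 2 + (p0[1] / b) ** 2 - 1
        disc = B * B - 4 * A * C
        if disc > 0:                                # line actually crosses the ellipse
            lengths.append(np.sqrt(disc) / A)       # |t2 - t1| for a unit direction
    return float(np.mean(lengths))

# Example: 80 mm x 40 mm ellipse, flow along the long axis vs. at 45 degrees.
print(mean_chord_length(0.04, 0.02, 0.0), mean_chord_length(0.04, 0.02, 45.0))
```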
A problem was left for the horizontal leaves. This is actually more difficult, but also easier, as you wish: from stochastic geometry it is known that the mean random chord length can easily be obtained from the surface area of a two-dimensional surface and its perimeter, so this would be a good starting point. Besides that, it is known from real wind tunnel studies that the ratio between the surface area and the perimeter is a good length scale to describe natural convection from differently shaped surfaces. So I decided to stay with this expression for the limiting case of horizontal inclination. The second objective: I wanted to implement a virtual wind tunnel for obtaining data sets on differently inclined and dimensioned ellipses. As the CFD (computational fluid dynamics) software we used OpenFOAM, which is, I'm sure, quite well known to many of you. There was a semi-automated workflow. First I created the 3D leaf models, which had a finite thickness; this was done in SALOME. Ellipses with different aspect ratios were generated with different sizes and rotations, and the different surfaces, the upper surface, the edge and the lower surface, had to be marked as surface meshes in SALOME to simplify the heat flux estimation later on with OpenFOAM tools. Feeding those surface meshes of our 3D models into the OpenFOAM meshing tools blockMesh and snappyHexMesh, it was possible to generate an appropriate 3D mesh of the wind tunnel, including our elliptical leaf model, shown here in white. Having generated this 3D mesh, the simulations were run. I had problems with the steady-state solver of OpenFOAM, so I used a transient solver, called in this version of OpenFOAM buoyantBoussinesqPimpleFoam, and, depending on the forcing of the temperature difference between the leaf and the bulk air of the wind tunnel, I had to simulate different transient times until I got almost steady-state heat flow conditions around the leaf. Computing time was okay, one to five hours per case, and overall about 350 hours of computation were done on two PCs under Windows; even with OpenFOAM you can use Windows very well. Then all those simulations had to be post-processed for the heat flow computations for both leaf sides, for streamline visualization and for the fitting of our simplified model describing natural convection from these surfaces; I used OpenFOAM tools, ParaView for streamline visualization, and MATLAB. There are actually not many data available to validate such CFD simulations, but one data set was obtained by Hassani and Hollands on a circular disk of about 8 cm diameter, which was inclined vertically or horizontally. I simulated the bulk heat flow from both sides, because the measurements were from both sides. For those temperature gradients we can see that the simulations and the real wind tunnel data matched quite well, so we got some trust in our workflow and setup and did all the mentioned simulations for different aspect ratios, sizes and rotations of ellipses. We used those data sets to estimate the heat flux for the upper and lower side of the leaves, and used this relatively simple Nusselt number model to describe the simulated data. All those equations are motivated by previous work in the literature, but I decided to re-estimate six parameters from the simulated data. This was the outcome.
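For readers less familiar with the dimensionless groups involved, the correlation being calibrated has the generic form sketched below; the coefficient $C$ and exponent $n$ are placeholders, not the six calibrated parameters of the study, and the treatment of inclination is schematic.

$$\mathrm{Nu} = \frac{hL}{k}, \qquad \mathrm{Ra}_L = \frac{g\,\beta\,\Delta T\,L^3}{\nu\,\alpha}, \qquad \mathrm{Nu} \approx C\,\big(\mathrm{Ra}_L\cos\theta\big)^{\,n},$$

with $h$ the convective heat transfer coefficient, $L$ the characteristic length discussed above, $\Delta T$ the leaf-to-air temperature difference and $\theta$ the inclination angle; separate fits can be made for the upper and the lower leaf side.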
We see, in graphical terms, that the heat fluxes predicted by the simple model and the simulated heat fluxes matched quite well. Overall we simulated 356 leaves, and our parameterized, or calibrated, Nusselt number model had an error of only 4%; the previous model, which I obtained from the literature, had an error of about 30%. So in the end we obtained a very good simplified description of this complex process. To show you some of the behavior, how the simulated leaves behaved and how the CFD simulations and the simplified model predictions matched each other: you see here simulations of the heat flow from the lower leaf side and from the upper leaf side. In figure A we have a circular shape, and in figures B, C and D we have the elliptical shape. This is how you can imagine the rotation around the x-axis, which is shown here as the x-axis of the coordinate system: we start with a horizontal orientation, and 90 degrees is the vertical orientation of our circle or ellipse. Heat flow increases with rotation around the x-axis, we have a higher heat flow from the lower side than from the upper side, and both behaviors match very well, so the CFD simulations and the simplified model agree here. Graphs C and D are a bit special. In graph D I changed the direction of heat flow: in A and B we have heated leaves, and in D we have a cooled leaf. It turned out, as the literature also says, that you can use the same correlations but just switch the sides of your simulated object: switch the equations used for the lower and the upper side. So graph D is just a mirror image of graph B; those data were not used for calibration, so this is a verification of a known result. Graph C is a little bit different: it was pre-rotated about the y-axis and then rotated about the x-axis, and we obtain a totally different pattern of heat flow with rotation, an almost invariant response. This can be quite well seen by visualization of the streamlines. Here we see a visualization of the streamlines for different rotations around the x-axis. When the leaf is almost horizontal, but rotated about the y-axis, the streamlines are almost in the direction of the smaller axis of the ellipse. If the ellipse is rotated more towards the vertical direction, at 50 degrees the streamlines are more aligned with the longer axis; the characteristic length is increasing, and this offsets the promoting effect of the increasing angle, so in sum the response is levelled out. We have no increase of the heat flow with rotation; the response stays more or less flat. That can be simulated, and it can be explained graphically by the different streamline patterns. Coming to a summary: in this work we derived equations for the characteristic length of inclined and rotated ellipses of different aspect ratios, and we re-parameterized known engineering equations for natural convection from those ellipses using CFD simulations. By this re-parameterization we improved the prediction by 27%. Those new relations are not a large improvement, but it is an improvement in a topic which has been quite heavily researched over the last 50 years, I would say. The proposed equations may enhance forecast analyses, especially under low wind conditions; at night, say, they may have special importance.
They are useful for dew and frost occurrence on leaves, and for the prediction of internal CO2, leaf temperature and stomatal conductance, which are part of the calculation routines for leaf photosynthesis and transpiration. We found that OpenFOAM and SALOME are quite powerful tools to solve almost any fluid and heat flow problem, even complex or conjugate problems. Here I could only show one tiny part of this analysis; if you are interested in further details you may consult this publication. At this point, this need not be the end of the analysis. We did some simulations with ellipses which are folded along the mid-rib, which you can see in nature quite often, and I am looking forward to using those data sets to parameterize and extend the Nusselt number model to folded leaves, which you find quite frequently. Of course, it would be even more valuable to have a true mixed convection model, because leaves operate under wind, and usually we have a mixture of natural and forced convection transport processes in natural ecosystems. This is possible with OpenFOAM, especially if you can consider laminar flow; I forgot to say that all simulations here were laminar, because we consider turbulence too rare to occur at these scales. I am a bit of an outlier in a plant science institute doing CFD simulations, and this problem is a little bit beyond our possibilities in terms of setup and available computational resources. Of course, the most challenging analysis would combine radiation, heat and mass transport through the pores; it all happens together, and in the optimal case it should be simulated together. Thank you for your attention.
|
The efficiency of heat and mass exchange between leaves and their environment under low wind speed is dominated by free convection. This is commonly quantified in terms of the Nusselt number (Nu) and the Rayleigh number (Ra). The currently available Nu = f(Ra) relations for inclined plates were mostly derived for infinitely wide plates or from one-sided heat transfer studies. A comprehensive simulated data set of laminar free convection may be used to derive new Nu relations at any inclination and for both plate sides. The relevant equations for free convection in 3D are solved numerically using the computational fluid dynamics (CFD) software OpenFOAM. The simulated Nusselt numbers agree very well with previous measurements for vertical and horizontal circular plates having a diameter of 84 mm. Various finite-thickness (0.5 mm) elliptical plates (i.e. leaves) having aspect ratios between 1 and 3, plate lengths ranging from 30 to 160 mm and a range of inclinations are simulated with plate-to-air temperature differences set to 1 to 12 K. Simulated heat fluxes from each leaf side are used to parameterize a comprehensive set of Nu relations.
|
10.5446/57599 (DOI)
|
So, the topic today is the importance of relative spatial information in event-based surveillance systems. I am a PhD student at CIRAD, working in the TETIS lab, and my supervisors are Mathieu Roche, Maguelonne Teisseire and Elena Arsevska. Why is relative spatial information important in event-based surveillance systems, and how can we extract such relative spatial information from information sources, social media and other sources of health events, which are the main data sources of event-based surveillance systems? I will briefly explain the background, then the methodology we adopted to extract this relative spatial information, and then I will share the results and the future directions of this work. I will start with event-based surveillance systems: these systems look into sources, which could be social media, reports and so on, and identify potential health risks, which are known as events. In event-based surveillance these sources are mainly social media, digital news sources and other health reports. Here is an example of an African swine fever event, where the text is extracted from a news source: the Russian authorities announced new cases of African swine fever in pigs in the Amur region near its border with China on Wednesday. So this is an event: the region in which it occurred is the Amur region, the notification date is Wednesday (relative to the context of the article), the disease is African swine fever, and the host is the pigs that contracted the disease. It is a typical example of an outbreak event, and the trigger is text such as "announced" or "confirmed" in the news sources. If we identify the main pillars of an event, there are mainly three. The first is spatial, that is, where the event occurs; the temporal pillar is when it occurs; and the thematic pillar covers which disease, which host, the number of cases and other supporting information related to the disease. The type of information we can extract includes the disease, the host, the location, the date of the event, the number of cases, and maybe symptoms, for example fever. In this work we mainly focus on the spatial issues and their importance in an event-based surveillance system. I highlight some of the issues with spatial information. The first is inaccuracy in the identification of spatial information: spatial information in the text may not be identified by the event-based surveillance system, for example because the text contains spelling mistakes or because of inaccuracies in the system itself. The second is inaccuracy in the identification of the region of outbreaks. For example, if there is an outbreak in Montpellier, the system may not identify whether it is the Montpellier in France or the one in Canada. Both of these result in misleading spatial information.
Here I will explain two examples. The first: France has detected a highly pathogenic strain of bird flu in a pet shop near Paris, after an identical outbreak in one of Corsica's main cities. Here you see that the outbreak occurred at a spatial location which is near Paris. The second example states that 69 tick-borne encephalitis (TBE) cases were reported in the east of Cami in the current month, so it refers to a location which is the east of Cami. What a typical event-based surveillance system does is identify these locations not as near Paris but as Paris, so if we look at the outbreak or event location, it would be the polygon of Paris itself. In the second case, the 69 TBE cases reported in the east of Cami would be resolved to the whole of Cami, not the east of Cami. So the locations near Paris and east of Cami are incorrectly resolved. What would our desired result be? The outbreak locations should be near Paris and east of Cami: near Paris would be something like the surrounding regions close to Paris, and east of Cami would be a polygon in the eastern part of Cami instead of the whole of Cami. We know that these spatial expressions are not pointing directly to Paris but are in relation to a spatial entity, and here the relations are near and east. In a broader context we categorize spatial information into two parts. The first is absolute spatial information, which directly refers to a geographical location and can be identified using state-of-the-art named entity recognition systems: Paris, Italy, Germany, Asia, Europe. The second is relative spatial information, which is expressed in relation to such geographical information and can take forms like West Paris, South Italy, East Goa, North Wuhan, Northwest London, and plenty of others. In my work we categorize relative spatial information into three further types. The first is cardinal relative spatial information, in which the first token is a cardinal direction (north, east, south or west) followed by a spatial entity, so here we have four relations. The second is ordinal, in which an ordinal direction (northeast, southeast, southwest or northwest) is followed by a spatial entity, again four relations. The third is a keyword followed by a spatial entity; there could be many keywords, but in our work we handle only the central, near and surrounding keywords followed by a spatial entity. There are plenty of other options, such as the surroundings or the centre of Paris, the border of France, near to Lyon, close to Montpellier, or a number of miles away from a particular city, border or country, and it is difficult to identify these types of relative spatial information.
First we need to extract it from the text, and then we identify its coordinates, its geographical footprint. We developed the first component, which we call GX — you could also call it a geoparser. A geoparser is a system that automatically recognizes place names in text and disambiguates them against a gazetteer. For this we use the state-of-the-art NLP toolkit spaCy together with the GeoNames gazetteer to identify the patterns in which relative spatial information occurs; these patterns are then passed to an entity ruler, which extracts them and identifies the relative spatial information in the text. It is developed in a standard way, so that it can be integrated into any geocoder. The second component is a custom algorithm that resolves the identified relative spatial information into geographical information in the form of standard GeoJSON files; such a GeoJSON file can be passed to any geographical information system, for example for visualization on OpenStreetMap. Here is the processing pipeline of our work: we have text documents, the sources of the event-based surveillance system; we perform pre-processing; then with spaCy we identify the patterns and provide them to the entity ruler to recognize the relative spatial information; and finally we pass the result to the geocoder, or geotagger, component to produce the corresponding GeoJSON. This work has been accepted and will be presented next month at the AGILE conference, whose theme is artificial intelligence in the service of geospatial technologies; the work is reproducible and the results have been verified by reproducibility reviewers. Here are a few examples with different relative spatial locations — southeast Paris, west London, east Florence and nearby Lyon — in a sentence: the tool identifies these relative spatial expressions in the text, and then the second component, the geotagger, resolves their geographical coordinates. We can also try the demo. The first component, the geoparser GX, extracts the relative spatial information from the text. I put in some sample text from my slides, and it extracts "near Paris" and "east of Cami". In the second text I put many relative spatial expressions, so that you can test it at your end as well: it works on the fly and identifies, for example, North America, South America, south of Germany, northeast Belgium, surrounding of Montpellier, nearby Lyon, west Bolzano, and so on.
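As an illustration of the extraction step just described — a minimal sketch, not the actual GX implementation; the pattern lists, the RSI label and the example sentence are assumptions for the example, and the small English model must be installed — a spaCy EntityRuler placed after the statistical NER can re-label spans such as "near Paris" or "west of London":

```python
import spacy

# Sketch: the statistical NER tags place names as GPE; an EntityRuler running
# after it matches "direction/keyword (+ of/to) + place" and re-labels the span.
nlp = spacy.load("en_core_web_sm")
ruler = nlp.add_pipe("entity_ruler", after="ner", config={"overwrite_ents": True})

directions = ["north", "south", "east", "west",
              "northeast", "northwest", "southeast", "southwest"]
keywords = ["near", "nearby", "surrounding", "central"]

ruler.add_patterns([
    # cardinal/ordinal + place: "west of London", "southeast Paris"
    {"label": "RSI", "pattern": [{"LOWER": {"IN": directions}},
                                 {"LOWER": "of", "OP": "?"},
                                 {"ENT_TYPE": "GPE", "OP": "+"}]},
    # keyword + place: "near Paris", "surrounding of Montpellier"
    {"label": "RSI", "pattern": [{"LOWER": {"IN": keywords}},
                                 {"LOWER": {"IN": ["of", "to"]}, "OP": "?"},
                                 {"ENT_TYPE": "GPE", "OP": "+"}]},
])

doc = nlp("An outbreak was confirmed near Paris, and new cases were reported west of London.")
print([(ent.text, ent.label_) for ent in doc.ents if ent.label_ == "RSI"])
# e.g. [('near Paris', 'RSI'), ('west of London', 'RSI')] — depends on the NER model
```

Whether a given place is tagged as GPE depends on the statistical model, which is one reason the real system also relies on a gazetteer.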
That is the first component, which extracts the relative spatial information from the text. In the second demo we have the geotagger. I shortlisted some examples: for Paris we select Paris and then "southeast", and we get the GeoJSON, which you can visualize directly or import into any other geographical information system tool — this is the southeast of Paris. Similarly, for GB and London, "west of London" gives the regions to the west that are closest to it. And thirdly you can check, for example, "nearby Lyon" for FR: the polygons show the locations neighbouring Lyon. That covers the geotagger; the first demo was the geoparser. Valentina will share the demo links with you after the talk. These components were validated against disease datasets. I validated the geoparser with five diseases, listed in the first column: AMR, COVID-19, avian influenza, Lyme disease and tick-borne encephalitis. For AMR I used 25 news articles: my tool extracted four relative spatial expressions out of the five actually present, giving a recall of 0.8 and the corresponding F-score. For COVID-19 we validated with 100 articles and extracted about 100 relative spatial expressions, of which 92 were correct, giving a precision of 0.87, a recall of 0.94 and an F-score of 0.90; similarly for avian influenza and the others. Overall, the precision is 0.9, the recall 0.88 and the F-score 0.88 for relative spatial information. For the geotagging, validating the shapes is harder, so we did a qualitative analysis: looking at each generated shape and judging whether it really resembles the intended region. This was difficult for us as well, because we are not geography experts who know exactly which administrative areas belong to each city. We obtained an average score of 4.35, validated on six cities: Milan, Paris, Montpellier, Nantes, Veyron and Vainoble. These are the results for geoparsing and geotagging. To conclude: we developed two prototypes for geoparsing and geotagging of relative spatial information — GX for the extraction of relative spatial information from text, and a geotagger to resolve the extracted expressions into geographical information as standard GeoJSON files. In future work we plan to address noise detection, since some irrelevant relative spatial expressions are currently detected and we want to reduce that; the identification of more complex relative spatial information and the integration into existing systems; and the evaluation of the shapes by experts, building a corpus that will also be useful for future improvements. That's all from my side; if you have any questions, I'm happy to answer them.
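For the geotagging side, a rough way to picture what "east of Paris" or "near Paris" could resolve to — purely a hedged sketch, not the project's actual algorithm; it assumes the gazetteer returns a bounding box, and the coordinates below are only approximately Paris:

```python
from shapely.geometry import box, mapping

# Assumed gazetteer result: a bounding box (min lon, min lat, max lon, max lat).
paris_bbox = box(2.22, 48.81, 2.47, 48.90)

def east_of(bbox, width_deg=0.25):
    # Keep a strip just beyond the eastern edge of the place's bounding box.
    minx, miny, maxx, maxy = bbox.bounds
    return box(maxx, miny, maxx + width_deg, maxy)

def near(bbox, buffer_deg=0.1):
    # Buffer the whole bounding box to cover the surroundings of the place.
    return bbox.buffer(buffer_deg)

geojson = {
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature", "properties": {"rsi": "east of Paris"},
         "geometry": mapping(east_of(paris_bbox))},
        {"type": "Feature", "properties": {"rsi": "near Paris"},
         "geometry": mapping(near(paris_bbox))},
    ],
}
```

The resulting FeatureCollection can then be loaded into any GIS tool for visual inspection, which mirrors the qualitative shape evaluation described above.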
In the chat we have Elena, who thanks you for your original work and asks: do you plan to implement your work in MOOD, for example in PADI-web, and when do you think that can be achieved? — Yes, we are planning to do that. First we want to carry out the evaluation — have more shapes and build a corpus for it — and once that is done we will integrate it into the existing geocoder of PADI-web. — There is another comment from Elena: a challenge in location extraction from news is to keep only the locations where an outbreak happened or a case was declared, and not all location mentions in the news. Do you have any comment on that? — Yes. Take the simple scenario where the outbreak occurred near Paris but we identify just Paris: we are then pointing to the wrong location, so it is really a challenge to make sure we refer to the correct location. — And what is the plan to reduce this inaccuracy? — To reduce the inaccuracy we have to validate this type of relative spatial information; as I said, it occurs mostly in news sources related to diseases, and ultimately the goal is to integrate it into PADI-web. — What would be the smallest spatial measure you could aim for, or what makes sense? — The smallest scale is difficult on both sides: extracting the expression from the text, and then identifying the exact polygon or region where the outbreak occurred. It is challenging and quite pioneering work in this domain. — But in terms of spatial scale, what would be the smallest scale that makes sense to have or to know? — The smallest spatial scale could be, for example, 50 kilometres from some region. That could be a point, or in some cases a polygon, so the result can range from a point to a polygon or multiple polygons. — And from a practical point of view, for a practitioner who has to monitor this, what would be the ideal scale: is it important to know "50 kilometres from a point", or is a region more important? — In the news it sometimes appears as a radius around a place, sometimes as a region, and sometimes as a distance from a point; it depends on the text, but both are important. Related to my PhD, I work on different data sources — social media and digital news sources — and I take into account four infectious disease case studies, such as COVID-19 and avian influenza, plus two others. In the context of MOOD I participate in event-based surveillance in a One Health context, mainly contributing to the data science part: applying text mining techniques to highlight spatial information issues, as well as temporal issues and domain-related, thematic aspects for each case study. The spatial information is important because we must know precisely where an outbreak event occurs.
So in my context the aim is to reduce the inaccuracy related to spatial information, and in particular to highlight the issues around relative spatial information in event-based surveillance. I read different articles about relative spatial information; they explain what it is and how it relates to a spatial location, but there was no proper methodology for extracting this type of relative spatial information from text. So the challenge for me was how to extract that information. I experimented with algorithms, identified the patterns, and that was the breakthrough for this task. We now have an initial solution, and we will keep progressing on it to be more efficient and to enhance the work in this direction. Going forward I will mainly focus on the integration and generalization of events — whether in plant, animal or human surveillance — so, in a One Health context, how to generalize an event and all its aspects; that is the first part. The second big challenge is to take into account the multilingual aspect of the sources: if we have sources in multiple languages, how do we extract information from those languages? These are the two major challenges of my PhD, for which I want to achieve at least some good results.
|
Syed Mehtab Alam is doing his PhD on the topic “Generic methods for epidemiological monitoring based on the integration of heterogeneous textual data”, funded by the H2020 MOOD project. His PhD work focuses on the development of generic methods to extract new and relevant events from heterogeneous textual data in a One Health context. Previously, he was a Research Associate at the University of Bozen-Bolzano, Italy, where he mainly contributed to the European funded project “COCkPiT” from 2018-2020. Since 2010 he has worked in several software organizations on software development projects across different domains, i.e. healthcare, European taxi systems, mobile TV systems, etc. The presentation is about the importance of relative spatial information (RSI) in EBS. Mehtab describes the approach to extract RSI from different unofficial media sources and how to accurately geographically map this information for different events in EBS.
|
10.5446/57543 (DOI)
|
We're coming to the next talk of the DSpace Praxistreffen 2022: the integrated management of project, funding and research data — the experience of the BORIS Portal at the University of Bern. With me I have Sumi and Riccardo. Thanks for your presentation; the stage is yours. — Thank you. Hi everyone, welcome to our presentation and thank you for giving us the opportunity to talk about our project, BORIS Portal. My name, as already mentioned, is Sumi Sontem; I work in the open science team at the University Library of Bern, and today I'm going to present together with Riccardo Fazio from 4Science, who is a project manager there and our main point of contact for this project. I will start with some background on what exactly BORIS Portal is about, and in the second part Riccardo will talk about some implementations that have been customized for our repository. Here you can see the services we provide for members of the University of Bern. I won't go into detail on all of them, but apart from the open access support and research data management support you can see three icons — BORIS Publications, BORIS Theses and BORIS Portal — which look very similar. BORIS stands for Bern Open Repository and Information System, and the next slide clarifies the relationship. BORIS Portal, in this graph, is the institutional repository that uses DSpace and at the moment only contains research project, funding and research data information. We also have BORIS Publications and BORIS Theses, which use EPrints. In the near future it is our goal to integrate BORIS Publications and BORIS Theses into BORIS Portal, so that we have one main platform for all research information. For BORIS Portal we work with DSpace-CRIS version 5, and we went live in September last year, so not that long ago. There are two applications for BORIS Portal in our case. First, the publication of project and funding information was a requirement from the Vice-Rectorate Research of the University of Bern; they are one of our project partners, since they are interested in a collection of this information to have an overview of the current projects at the university. Another project partner is the Clinical Trials Unit of the Inselspital, the University Hospital of Bern, which was mostly interested in the research data repository part. I won't go into detail on the project and funding information — the submission, all the metadata and so on — since this is probably something you already know a lot about, but I would like to briefly describe the research data items in BORIS Portal. It is the institutional research data repository of the University of Bern, where you can publish research data and/or metadata, as well as supplementary documents such as codebooks. You can make a research data item citable — we assign a DOI — and we offer Creative Commons licenses, and it meets the requirements of the most important funders according to the FAIR data principles. Yesterday there was a similar talk about managing access to research data, and this will be somewhat similar. What is special in our case is that we do not allow users to upload sensitive data; however, it is possible to upload metadata together with its associated documents and a data transfer agreement.
Here you can see a bit more about managing data access. We have four access types — open, embargoed, restricted and closed — and you can set the access type on every data file you upload to BORIS Portal. Before I hand over to Riccardo, who will talk in more detail about the upload steps, I would like to mention a few things we want to tackle in the future. It is planned to integrate BORIS Portal into the credit-opening process, which is an internal process at the university; this would make it easier for researchers, who would not have to submit the same information once to the repository and again for opening a credit. We would also like to build interfaces to the databases of research funders such as the Swiss National Science Foundation, and to improve the overview of the research activities of faculties, institutes, centers and individuals. As mentioned before, we are also working on the migration of BORIS Publications from EPrints to DSpace, and finally on the migration to DSpace 7. Now, Riccardo, please take over. — Thank you. We have done a lot of work in collaboration with the University of Bern staff, and I would like to present four feature requests that we implemented in the repository. The first one, as Sumi said, concerns the upload step. For the research data submission we obviously have to upload the files of the dataset — the common files uploaded in every submission — plus the data transfer agreement and other supplementary files. So we decided to extend the upload step with three forms that allow the researcher to upload each type of bitstream, and each type of bitstream is saved in a specific bundle. As you can see from the table, the availability of each bundle's upload form is based on the value of the access-rights metadata defined during the submission. Next slide, please. We also added the management of Creative Commons licenses for each bitstream: the Creative Commons license is saved as metadata on each bitstream, instead of having only one Creative Commons license for the entire item. In the submission you still have the metadata to apply a Creative Commons license to the whole item, and this is used as the default value for the bitstreams the submitter uploads; you can then go into the file list and change the license per file from there. We also had the requirement to let people easily upload documents about funding and projects. As Sumi mentioned, BORIS Portal runs on DSpace-CRIS 5, where items and the other entities are still different types of objects, unlike in DSpace 7. So we implemented the creation of funding and projects as a submission, let's say, to make the submitter's life easier. At the end of the submission of a funding or a project, a new entity — a new object — is created, and from the page of that object you can start the submission of a document.
This is again a submission of a special item of type document, and the relation between the item and the object is created automatically using the authority framework of DSpace-CRIS. During the submission the submitter can also pick persons from the researcher profiles in order to give them permission to read and download those documents. At the end of the submission these special items become visible on the object page through a dedicated component. Next slide, please. We also had to synchronize information from the human resources database, which is called PARIS — I hope I pronounced that correctly — into BORIS Portal. From PARIS we import the researchers' information and the faculties and departments; in DSpace-CRIS they become researcher profiles and organization units. This is mainly done with an ETL transformation built in Pentaho: the ETL takes the data from PARIS and transforms it into the Excel format used by the standard import framework of DSpace-CRIS. In this way we import and update the information from the human resources database every day. The interesting point here is that we save the unique ID from the human resources database, called the GUID, as the external ID of the researcher profile — in DSpace-CRIS, the source ID — and we have also configured the Shibboleth login to deliver the GUID as the netID of the person. So the profiles exist in advance, and as soon as a user logs in via Shibboleth, a post-login action — a class that is triggered once the login succeeds — tries to connect the researcher profile and the person using the netID and the source ID of the researcher profile. In this way every user is automatically connected to their researcher profile, without needing to search for and connect profiles manually or bothering the repository administrators too much. Next slide, please. The last requirement we had: as Sumi mentioned, the BORIS Publications portal still exists and is based on EPrints, so we had to connect the research data and the projects to the related publications in BORIS Publications. This was done with a lookup on EPrints during the submission of the research data or the project. The lookup again uses the authority framework of DSpace-CRIS and queries the search endpoint of EPrints, which returns a JSON response in citation format. This means the data we receive is a citation of the publication together with the EPrints ID; the ID is saved as the authority of the metadata and the citation as its value. We then use the EPrints ID in the rendering to create a link to the actual publication. But, as Sumi mentioned, we are also preparing for the import, in order to be able to easily recreate the connection between research data or projects and their publications. I think that's all from me — I hope I was not too long — and if you have any questions, please go ahead.
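A hedged sketch of the synchronization idea just described — illustrative only: the field names and the CSV layout are assumptions, the real pipeline uses Pentaho and the Excel-based import framework of DSpace-CRIS, and the matching is done by a Java post-login action, not this code:

```python
import csv

# Pretend export from the HR system: one row per researcher, keyed by a GUID.
hr_records = [
    {"guid": "a1b2-c3d4", "surname": "Muster", "firstname": "Erika", "orgunit": "Institute X"},
]

# Step 1 (ETL idea): map the HR fields onto the columns of a bulk-import sheet,
# keeping the HR GUID as the profile's external identifier ("source id").
def to_import_rows(records):
    for r in records:
        yield {"sourceId": r["guid"],
               "name": f'{r["surname"]}, {r["firstname"]}',
               "department": r["orgunit"]}

with open("researcher_profiles.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["sourceId", "name", "department"])
    writer.writeheader()
    writer.writerows(to_import_rows(hr_records))

# Step 2 (post-login idea): Shibboleth delivers the same GUID as the user's
# netID, so account and profile can be paired by a simple equality check.
def find_profile(net_id, profiles):
    return next((p for p in profiles if p["sourceId"] == net_id), None)

print(find_profile("a1b2-c3d4", list(to_import_rows(hr_records))))
```

The key design point is that the same identifier flows through both paths (HR export and login attribute), so no manual linking step is needed.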
Thank you. We already have a question: will the Creative Commons licenses for each bitstream be made available for all DSpace-CRIS 5 users? — Not for now. This requirement came from the University of Bern for the first time, and we don't have many requests of this kind yet, but of course it can be contributed to the community code. — There seems to be interest in the chat, at least, to get this feature into DSpace-CRIS 5. Any other questions? You can open your microphone, if you are okay with being recorded, and ask directly, or you can type your questions into the Zoom chat. — What is your approximate schedule for the DSpace 7 migration? — We are still discussing when exactly that will be. At the moment the plan is to first do the migration of BORIS Publications from EPrints to DSpace, and then do the migration to DSpace 7, but everything is still under discussion, work in progress. So it could be — yes. — Sorry, just to add: we know that the migration will be done in the near future, and we — the University of Bern and 4Science — are still working on BORIS Portal while looking forward to DSpace-CRIS 7. Everything we implemented, for example the submission of the CRIS entities, was designed with the DSpace-CRIS 7 submission in mind, which will, let's say, remove the difference between entities and items. So we have not scheduled DSpace-CRIS 7 yet, but we are working with that vision, let's say. — Great. I don't see any other questions, so thanks a lot for this great talk. — Thanks to you. — And I hope we meet each other at the DSpace-CRIS 7 workshop later this afternoon. — Thank you. Thank you.
|
The BORIS project, based on DSpace-CRIS, is about:
- a customized workflow for projects, interoperability with internal project management, and links to project investigators and project results
- custom management of documents for projects, funding and datasets
Integrations:
- integration of the UniBE HR application (PARIS) to synchronize researchers and organizations
- integration of datasets with the publication portal
|
10.5446/57544 (DOI)
|
The DSpace Praxistreffen 2022 — that's finally the Praxistreffen where we can say it is released: DSpace 7.0 is out of the door, and DSpace-CRIS 7 is available for all of you to use. And now we have the presentation I think many people have been waiting for: going live with DSpace-CRIS 7 at Fraunhofer-Publica. We have Giuseppe and Dirk here — the stage is yours. — Thank you, Pascal, for the nice introduction and the invitation to present. As you said, we are in the final stages of the go-live, so today's presentation will be a short run over the last years. We have been waiting, we have been working hard on it — and maybe it adds a little more to the pressure, because in ten days the go-live will be there, and we are hoping that it will go well. My name is Dirk Eisengräber-Pabst; I work for the Fraunhofer Competence Center for Research Services and Open Science, based in Stuttgart at Fraunhofer IRB. Today I will talk about the project around DSpace-CRIS and our new Publica based on that system, and I will try to go a little fast so that we have more time at the end for questions; I have some screenshots, but they only give an impression — maybe, if we have time, I can switch to the server and show you the live system. So let's dive into it. The marketing part matters a little for today, because Fraunhofer is Europe's largest RTO, and having some figures and facts in mind may explain why we organized our publication and service infrastructure the way we did. Fraunhofer was founded in 1949 and to this day keeps to its mission of combining research and education with the RTO part, which means collaborating with industry and other economic partners. To understand both sides: on one side we are well connected to the universities in Germany — normally more than 90% of our directors hold a chair at a university or a university of applied sciences — and this is the basis of our publication infrastructure, because we have similar tasks and requirements as a normal university. On the other side we are well connected with industry and our partners there; I think you all know inventions like MP3 — I don't want to dive deep into this — but it means that IP management and patents, and linking patents to publications, projects and so on, are quite important for us. We had such links in the old databases, and it was a very important focus of our specifications to carry this special Fraunhofer need over to the new system. This slide shows Fraunhofer in a nutshell. It is distributed across the whole of Germany, with about 76 institutes and research units — which means 76 submission workflows, because each institute is individual and has, for example, its own library staff. We have more than 30,000 colleagues today, which means potentially more than 30,000 researcher profiles, to put it in DSpace-CRIS language. And we have a revenue of nearly three billion euros, 70% of which comes from projects with industry partners and publicly funded projects — that is our main focus. With this picture in mind, I want to switch over to the Publica relaunch project, which started some years ago. What was the idea?
We had an old, conventional repository system with about a quarter of a million publications in it; some patents were in there too, and the data had already suffered one prior migration about 20 years ago. The idea was, of course, to get a new system: to move from a normal database and literature repository to a new platform that could serve as a data hub for linking and interchanging data — a hub for all kinds of research output, the trinity of literature, research data and research software, plus patents and so on — and to get a real toolkit for science and open science. The idea of the data hub is not really a revolution; I think we are all working in the same sector. It should be a data hub connecting external databases with our internal Fraunhofer databases. Connecting this to the talks we heard earlier this morning about Kibana and reporting: our reporting is done over a data lake — we have an internal data lake at Fraunhofer, and Publica, the DSpace system, is only a source of this data lake, so we did not have to build something like that ourselves; we are a source. The idea is to be the open part of Fraunhofer in the universe of different databases, projects and initiatives. As I said, in ten days we will have our go-live. The phase before could be called phase one, a phase of being an early adopter — I will talk about that a little later. Our main focus was the migration, because we had a lot of old data in very different quality states which we had to move to a new data model — we are talking about hierarchy, linking, connections, all of this. So we reduced ourselves to some minimal requirements in this phase to get the go-live done. After the go-live we will work on data quality, add more research output objects, and pick up a number of other projects that had to wait, because the go-live, as you can imagine, had been planned for quite some time. A short word on DSpace-CRIS: why did we decide to take DSpace-CRIS as our system? When we started the project we had a lot of specifications, of course, and special Fraunhofer needs, but there are four main reasons why we decided to take the chance with DSpace-CRIS — and in the end DSpace-CRIS 7 — as the code base for the new system. First, we wanted something open source; I don't have to argue that in this group, I think we are all on the same side there. Second, we had a special need for a very flexible data model. Third, we had experience with DSpace 6: we implemented our research data repository, Fordatis, on the basis of DSpace 6 in collaboration with The Library Code — thank you, Pascal. And fourth, we had a good collaboration with 4Science on a DSpace-CRIS 5.8-based internal FIS system. These are the main reasons why we said: yes, let's go the DSpace-CRIS way. Now, why at this point in time? We had two possibilities — this was three years ago, in 2019 — and we could call them "early adopter" or "on the snail trail". We waited, we worked on it, it was a hard time, and we saw that possibility one was to migrate to DSpace-CRIS 5.8 and migrate later to DSpace-CRIS 7, and possibility two was to jump into the risk of being an early adopter and go with DSpace-CRIS 7 directly. With that decision we were able to push the project forward.
We wanted to be an active part, and that was the main reason why we decided to take the risk and the chance. There are benefits to being an early adopter, and we did it by sponsoring specific features and cooperating with 4Science — a really good cooperation, I have to say. Giuseppe, who will take over the talk, will say a bit more about some features we did together, or that they did for us: the special workflow part, the DOI part and the deposit license part — but Giuseppe, that's your stage later. I'm trying to gain some time, because then we have time later to discuss and maybe I can show you a little of the system. This slide is only a small impression of some pages; you won't recognize anything, it's too small. The main point of the slide is that the layout matrix is one of the features we sponsored; it helped us a lot in implementing our Fraunhofer screen design and in meeting the special requirements we had from our central division. The central idea is that we have four research entities: the research output, the projects, the researchers and our institutes. That is the only thing I wanted to show here; maybe, if we have time, I can show you more in the live system later. Infrastructure — that is our current working focus. As we are in the final stages before the go-live, we are still testing what the best configuration is for launching DSpace-CRIS 7 in production, because we have a lot of test systems. We are lucky to have access to an internal Fraunhofer server cloud, so we can create, rebuild and change infrastructure very fast — we use Ansible for that. I think we will probably start with the following configuration in ten days: two Angular servers, load-balanced, a well-equipped REST server, SolrCloud with ZooKeeper, and the digital objects stored in S3. That will be our configuration, I think, but we can talk about that later if you are interested. Lessons learned — that is something I thought about a lot when I built this presentation, and I think there is one key lesson I would like to share with you. As I said before, we have a lot of data in really heterogeneous quality, and we had a change of data model. As you all know, the way an author is presented and linked to a person item in DSpace-CRIS is well organized: a link means there is a well-known person item behind it. In our old system — and now we are talking about the habits of our users — all authors were linked as well, but clicking the link only triggered something like a facet search. So we had to think about how to present this old data, and the new data coming in through the submission forms or from other sources, in the same layout using the matrix layout. As you can see on the right side, we have a lot of authors that may not be linked because they have no person item in the CRIS system, but they may still be authors of many other publications. So as a solution we decided to add a magnifier button on the right side. Maybe it is not a solution forever — only until the data quality of all our data makes it possible to use linking purely the way we know it in DSpace-CRIS 7, i.e. linking to a specific item.
In the meantime, users have the possibility to look up the publications of an author in a facet-search way, as they are used to. Thank you — that was the fast run, and I pass over to Giuseppe; the remaining time we can use for discussion, so please ask questions, and maybe I can show you the system. Thanks, Giuseppe. — Thank you, Dirk. Hi, I'm Giuseppe Digilio from 4Science, and now I would like to focus more on the collaboration between 4Science and Fraunhofer in this project and on what this collaboration brought to the DSpace-CRIS 7 code. We started collaborating with Fraunhofer in 2019, with ongoing support that results in a continuous alignment of the Fraunhofer Publica code with the DSpace-CRIS 7 code, and we are now also supporting Fraunhofer for the go-live phase. As we said, this collaboration brought many benefits to DSpace-CRIS 7: with the help of this project we were able to improve the matrix layout, which is the engine used by DSpace-CRIS 7 to provide a configurable item detail page. We also ported some functionality implemented for this project into DSpace-CRIS 7 itself — for example the correction request, which allows a submitter to request a change to an archived item if something in it is wrong, or the possibility to create entities during the workflow phase. Next slide, please. This collaboration was a real challenge for 4Science: we needed to take care of the large amount of data this project has, and that resulted in better performance in the DSpace-CRIS 7 code. For example, we worked a lot on improving the performance of the import phase with the DBMS feature and of the Solr indexing. Let's look in more detail at some of the main customizations we built for this project; the configurable workflow and the reserve-DOI functionality are two of them. Next slide, please. On this slide we have an overview of how the workflow of Fraunhofer Publica is configured. As Dirk said in his presentation, a lot of libraries are involved in the workflow, so the default workflow system in DSpace-CRIS 7 could not meet the needs of the project, and we had to build customizations by means of the configurable workflow provided by default in DSpace 7 and DSpace-CRIS 7. As a quick overview, the workflow is divided into steps. Once the submitter creates and deposits an item, the submission goes to the library of the institute the user belongs to; the library team can approve or reject the submission. After approval there is a further step where the central library can also approve — which means the item is finally published — or reject, in which case the item goes back to the institute library or, for submitters who do not belong to an institute library, directly back to the submitter. That leaves the reserve-DOI functionality, another feature we customized. In Fraunhofer Publica it is based on the DOI functionality provided by DSpace-CRIS 7, but it allows a DOI to be requested during the submission phase: the submitter can ask the library team for a DOI, the library team can pre-generate one and reserve it for the submission, and after that the submitter can see the pre-generated DOI and already use it where needed. I think that's all — now we would like to wrap up with questions from you.
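To make the reviewing chain easier to picture, here is a hedged sketch of the step sequence described above as a tiny state machine — purely illustrative: the step names are assumptions, and the real implementation uses the DSpace configurable workflow (XML configuration plus Java actions), not this code:

```python
# Hypothetical encoding of the reviewing chain: submitter -> institute library
# -> central library -> archive, with rejections sending the item back.
TRANSITIONS = {
    ("workspace", "deposit"): "institute_review",
    ("institute_review", "approve"): "central_review",
    ("institute_review", "reject"): "workspace",        # back to the submitter
    ("central_review", "approve"): "archived",          # item is published
    ("central_review", "reject"): "institute_review",   # or "workspace" if the
                                                        # submitter has no institute library
}

def next_state(state: str, decision: str) -> str:
    try:
        return TRANSITIONS[(state, decision)]
    except KeyError:
        raise ValueError(f"No transition for {decision!r} in state {state!r}")

# Example: an item deposited and approved by both library teams ends up archived.
state = "workspace"
for decision in ("deposit", "approve", "approve"):
    state = next_state(state, decision)
print(state)  # -> "archived"
```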
Before we go to the questions, do you want to show a short live demo, or shall we start with the questions? — We can do both. That's our system, live. Let's see — the gender-appropriate wording isn't there yet; sorry to say that, but we will change it. Let me just try to find something: if you look for a researcher like me, you should find an item, and you should see the name and the ORCID iD in the first step. You can add a picture — the picture is of course optional and has to be added by the researchers themselves. There is a connection to the institute where we work — as I said, I'm based at the IRB — and there's a picture there as well, and contact details. Each institute at Fraunhofer is organized into groups — you might work more in the IT part or in the genetics part, for example — and it's all linked; we can really use the CRIS scope, which gives us a way to show how all the different topics are connected. If you want, give me your questions, or tell me if there is something you want to see. — I have a first question for you: we saw the slide with the hardware requirements — how well do you test and scope the necessary hardware? — We are still working on that. I think Florian is here in the panel too, because it's under his supervision. Florian, if you are really interested, do you want to say something about it? — We made an assumption and said we want quite powerful hardware for our go-live setup. We are still planning to do stress testing. We had a discussion with 4Science and they said one Solr server should be sufficient for the deployment, but we want to be sure that a lot of queries can be handled, so we decided to use SolrCloud. We do not use sharding, only replication, so we hope to have roughly three times the capacity of a normal Solr installation. The stress testing is not done yet — we are planning it before the go-live — but for now we assume that our current, quite powerful configuration will lead to a successful go-live. — Thank you. In case I missed it: will the functionality of pre-generating DOIs be added to DSpace-CRIS? If you could answer that — The Library Code has already made a pull request with pre-generation of DOIs for DSpace 7; I know it misses some unit tests, which is why it's not merged yet, and that's something we have to do. But is your pre-generation of DOIs for this project different from the one we submitted, and will it become available for everybody? — I think that if the community is interested in this feature, we can for sure put it into the DSpace-CRIS 7 main code. — Great. So I think something will come out of this. As I mentioned, we have already provided something as open source code, as a publicly available pull request against DSpace 7; it misses some unit tests, and 4Science has a solution as well, so obviously I expect this feature to come soon. There's another question that I would like to push into the lunch break: would you be available for some more live demos during the lunch break — I don't know, five or ten minutes maybe? — Yeah, sure. — Great, so let's take that over there. Then I would say thanks a lot for this great talk, let's continue a little bit in the lunch break, and thanks again to Giuseppe and Dirk for this presentation.
|
The migration from the current Fraunhofer-Publica repository to a research information management system enables the system to extend its scope. It provides more entities, the linking and navigation among them, and the connectivity/interoperability to internal systems. We would like to focus on the following customizations:
- configurable workflow: how it has been possible to use the DSpace configurable workflow in order to meet the needs of the Fraunhofer-Publica project
- reserve-a-DOI functionality: how an existing default DSpace functionality (DOI reservation) can be customized and used in a different way
- further Fraunhofer features, e.g. correction request
|
10.5446/57555 (DOI)
|
Welcome back to the DSpace Anwendertreffen 2021. The next presentation is "Welcome to the DSpace 7 Testathon", and that's something I was really happy to see proposed for the Anwendertreffen, because the Testathon is something I find really important for the community. It's starting on Monday. I warmly welcome Bram Luyten from Atmire — I hope I pronounced that correctly. Well, take it away. — Thank you very much, Pascal, and good morning everyone. I'm Bram Luyten from Atmire; we are a DSpace service provider and a major contributor to the DSpace 7 release. A major DSpace release presents many repository managers with a true chicken-or-egg problem: you want all the new features as fast as possible, but at the same time you also want to wait for the first few point releases so that early production problems are resolved. What if I told you that there's a way to significantly reduce the chance of major issues in 7.0? And what if I told you that, as of next Monday, you can personally have an impact on resolving the chicken-or-egg dilemma for DSpace 7? So, I would like to welcome you to the DSpace 7 Testathon. The Testathon is an intensive community testing period, currently scheduled for April 19 to May 7. These are the weeks in which you, me, all of us put the release candidate version, beta 5, under some real stress in order to identify the final bugs to resolve. In addition, I aim to show you everything you need to successfully participate in this Testathon. A significant amount of testing will be carried out on a central testing server that's already accessible today, demo7.dspace.org. This means that all you need to participate is a web browser: you don't need to be a developer, you don't even need experience as a DSpace administrator. Community members work together on a central Google spreadsheet, the DSpace 7 Test Plan, to make it transparent throughout the weeks of the Testathon what has already been tested and by whom. To show you how the testing works, I'll give you a concrete example of a feature we would like to test. Similar to DSpace 6, but entirely reimplemented, DSpace 7 has a number of interface language catalogs and a button in the top right corner of the screen to switch between these languages. Let's look at one particular test from the test plan for this feature. If you open the spreadsheet — you don't have to do that right now, but the link is bit.ly/dspace7-testplan — and go to the second tab, Test Cases, you will see that each line in the sheet corresponds to a specific test case. For all of the tests, the first columns define the test case. The unique ID enables developers and community members to refer to a test case. Each test belongs to a feature category; looking at how many tests we have per category of features helps us assess how well the test plan covers the full feature set. The execution of each test implicitly starts with an imperative: as a specific persona — it could be an anonymous user, an administrator, or somebody with submitter rights — you click the URL in the URL column. The URL is the link to the starting point of the test; for many tests this is the home page, but it can also be the MyDSpace page or the page of a particular item. Then the description column makes clear what you are supposed to do or click on after you've followed that first URL.
And the goal — the whole core of the test plan — is that you compare what you see next with what's described in the next column, the expected results column. For this particular test it's pretty obvious that the interface should be rendered in the target language after trying to change the language; however, both the descriptions and the expected results are far from trivial for many of the other tests. So these six columns together are the test definitions. Right now, if you go to the plan, you will notice that you are not able to edit these six columns, but if you are interested in contributing more test definitions, you can file a Google Sheet access request and we are happy to give more people permission to write additional tests. Let's now look at the next columns, which you can use to record your test results. When you fill out the results for a test, the first three columns are mandatory: you pick the most appropriate test status — for example OK or non-compliant, and we'll look at the vocabulary of those statuses on the next slide — then you record the date and your name. Optionally, you can provide more explanation in the comments column; in this case, to clarify why I set the test to non-compliant, I stated that the central text on the homepage, the homepage news, didn't get translated when I switched languages. The next column allows us to assess the coverage of the features from the test plan in the DSpace 7 documentation: eventually, as much of the behavior expressed in the test plan as possible should also be mentioned somewhere in the documentation. If you want, you can dive into the documentation and see whether you can find it, but as I said, only the first three columns are mandatory — it's up to you whether you do something with the documentation column or not. And lastly, as tests are marked as non-compliant, developers will start issue threads on GitHub to discuss the problems and possible solutions, so the last column is used to track whether a GitHub issue has been created for the problem or not. As I promised, let's now have a look at the test status vocabulary. Here are some of the most frequently used test statuses — most of them, but there are a few more, and the full vocabulary is explained on the summary tab of the test plan spreadsheet. Statuses like "to review" or "test unclear" are pretty self-explanatory from what's on the slide, but tests marked as "feature postponed" are quite interesting for learning how well DSpace 7 already covers all of the functionality from — let's not forget — both XMLUI and JSPUI. We brought two UIs together and tried to cover all of the features, but we're not at 100%. So you will see some tests that are written, for example, against RSS feeds, and you will notice that those tests are marked as feature postponed, to inform you as a tester that it's normal that you cannot find those features right now. Participating in the Testathon will therefore make it clear to you what DSpace 7 can and cannot do at this point. This alone gives you or your institution a personal objective, some value, for your participation: you don't just do it out of helpfulness to the community, it's really also a valuable learning experience for yourself. Some more statuses: non-compliant is the most straightforward status, indicating that the observed behavior doesn't match what's written in the expected results column.
So if what you see doesn't match the expected results, you can mark the test as non-compliant; without anything added in the comments column, that's what people will assume happened. Then there are two statuses, UI problem and user experience, that let you add a bit more nuance when the test results were basically compliant: the test shows the expected results, but you noticed some related things that should be addressed. A few examples of what can be considered a UI problem: text placed outside the boundaries of its designated box, fonts that are way too big or way too small, padding issues — basically everything that makes a page look messy and unprofessional. That is a UI problem, and it is very different from the user experience status, which can be used if the feature, in combination with the provided on-page tooltips, is not intuitive for the user — either you as the tester, or you putting yourself in the mind of one of your users. The key philosophy is that, ideally, whenever users are presented with a button or a choice, they should only feel confident enough to click it or make the choice if they know what to expect; that is exactly why intuitive interfaces and good tooltips are important. From this explanation you can see that UI problems are fairly objective to establish — if we make abstraction of Internet Explorer versus Firefox versus Safari, which is a whole area of complexity and challenges if you really look for them — while user experience issues can be very subjective. If you see that DSpace has a pretty specific or even weird way of approaching something, it might still make total sense to you if you have already used DSpace in the past, but it might still be considered a user experience issue by someone who is new to DSpace. That is why, if you are new to DSpace, please consider taking part in the Testathon: contrary to what you may think, your testing and observations could be very useful. If you want to see the few more test statuses that I did not discuss in detail on the previous slide, you can find all of them on the summary tab of the test plan, where you can also see, both in absolute numbers and as percentages, how many tests are in which status; further down the summary tab you can also see statistics of tests per user persona and per feature category, to follow the progress of the Testathon. The test plan only covers behavior tested against the central test server, demo7.dspace.org. As a participant, we also very much invite you to devote some attention to the DSpace 7 installation and upgrade instructions during the Testathon; to report problems with those, you can comment on the applicable wiki pages, or raise the issues on the dspace-tech mailing list or the Slack channels. This is especially useful if your institution runs on Microsoft Windows infrastructure or on Oracle databases, because those are less common setups among the DSpace 7 developers. You will notice that even though the test plan has tests on authorization and authentication features, it does not really have specific security tests or penetration testing objectives.
You are more than welcome, at any point and not only during the Testathon, to test things like script injection, SQL injection or other security probing, but it is very important that potential vulnerabilities are treated confidentially. This way — and this is how it has been handled historically — the committers can fast-track solutions for security issues and publicly disclose them after they have been fixed. Right now there are already institutions running DSpace 7 on publicly accessible test servers, so immediate public disclosure of such observations could put those installations at risk. — Just a few minutes left. — Yeah, great, because I have two more slides. Throughout the Testathon there are different ways to touch base with other testers and DSpace contributors. If you don't have additional questions, you can simply record your test feedback in the central test plan. If you see a test that somebody already executed, you can re-execute it and just override the previous observation — that's normal. If you prefer email, the dspace-tech mailing list is the most suitable place for questions, observations or discussion, and on the DSpace Slack there is a channel dedicated to the Testathon. Lastly, for people with a deeper technical interest, please note that the actual issues emerging from the Testathon will be discussed as issues on GitHub — no longer on the DSpace JIRA. The two main GitHub repositories are dspace-angular, which deals with all the front-end related problems, while REST API and back-end problems are discussed in the issues of the main DSpace repository. If you don't know where to create an issue, don't worry: as a tester it's not your obligation, someone else can create the issue for you, or you can reach out in the dspace-tech or Slack channels to learn more. The only goal of the Testathon is to have as many tests from the test plan as possible executed, as well as the installation and upgrade instructions tested. So before opening it up for questions, I will leave you with one last link: this page is the central page about the DSpace 7 Testathon, where you can find all the links and information I gave you today, neatly organized together. With that, I thank you for your attention and your participation in the DSpace 7 Testathon — let's solve the chicken-or-egg dilemma for DSpace 7 together. Thank you very much. — That was great. This Testathon is tremendously important for the whole community, for all of DSpace. You answered the first questions in the test plan, I think, months ago — what will you do next week? — Great question. Right now I'm not really worrying about next week; the main thing I'm worrying about is that we still have a range of features that are not yet very well covered in the test plan — one example is DSpace entities. So before thinking about next week, either today, tomorrow or over the weekend, I still hope to get some extra test descriptions in, to increase the coverage. Next week I will just be there together with everybody, trying to reproduce tests, filling in the test plan, maybe following up on some GitHub issues — and just fingers crossed that we don't discover very big showstoppers. — We got one question that was just asking for a link to your slides; I think that's more of an answer than a question, so consider it answered. Are there any other questions?
So, everybody is waiting for Monday to get started with it. Will more tests be coming up during the Testathon itself, so will you add the missing features to the test plan while the test is running? It's possible, yes. So, I mean, definitely anybody is welcome to contribute more tests as we go along. On the demo server specifically, Tim Donohue just recently said that OAI is also available, but I'm not 100% sure about, for example, the SWORD or SWORDv2 interfaces. So it is definitely great to get some testing on all of the interfaces, not just the one for human eyes. So yes, I hope more tests can be added. Another question: in the test plan you have the results, and in case there is a result noting that something didn't go well, and somebody else repeating the test comes to another result, what do we do? Should the result saying it is not working be corrected to say it is fixed? Do we add more columns for different answers? How do we proceed? There is no hard agreement on whether or not deployments of new fixes will be done throughout the Testathon. In general, the DSpace 7 beta 5 should be the thing that is deployed, and we should be testing on the same stable version throughout these whole three weeks. If something comes up like a major feature being broken due to some configuration issue, for example nobody can submit new items, then of course the DSpace committers will jump in and make some changes to the server anyway, so that there is a working system throughout the Testathon. But basically the main assumption is that the more recent your observation, the more correct it is. So you can always override an older test status with a newer one, especially because the Google spreadsheet has a history functionality that lets you look back. But I would recommend being very careful before you remove anything that somebody else wrote in the comments column, because in the comments column somebody can say: these were the edge conditions that were at play when I did this. So then just make what you add in the comments column clearly visible, or use the Google commenting feature to start a comment thread on a specific cell. If you are not sure, just ask on either the mailing list or in the chat. I mean, we have histories, we have backups of the test plan, we will probably make a backup every day, so you should not be afraid to enter stuff and change things. And it is generally a good idea to join the Slack; there is a lot of information about DSpace coming through, and a lot of people are over there, so you can ask about almost anything regarding DSpace. Okay. Thank you very much for your presentation. Thank you very much for your time. Hope to see you soon again. Bye bye. Thank you. I will stay online and be around for questions, and then I will sign off. Have a nice day, everybody.
|
At the DSpace Anwendertreffen 2021, Bram Luyten makes the case for taking part in the DSpace 7 Testathon.
|
10.5446/57557 (DOI)
|
This meeting is being recorded. We have two interesting talks in the first session today, both looking on analytics for dSpace-chris. The first one is reporting about analytics and reporting at different levels for a chris-based on dSpace. Looking at the use case of the Peruvian National Platform, it's presented by Andrea Bolini from 4Science. Welcome, Andrea. The stage is yours. Thank you, Pascal. So let me say I'm very happy to open the second day of this precious event that annually bring together the dSpace user of German-speaking country. I'm happy to share with you today our understanding of analytics and reporting needs, pieces of years of work on the chris implementation project. Today we will use one of our running projects, the implementation of the Peruvian National Platform with dSpace-chris as base for our narrative. So let me start with some context about the project. The Perucris platform has been funded by the World Bank with the aims to provide to the Peruvian citizen a modern information ecosystem in science, technology, and technological information to provide product value, accessibility, and development. The main goal is to develop an open interoperable and integrated national network that will be able to collect and organize all the information related to the Peruvian research activities, giving visibility and transforming them in knowledge so that decision makers and also interested parties can take valuable action to support the research. The expected benefits are many, mostly related to have timely statistics, reports on national research development activities, better monitoring and evaluation for public funding at national open access policy. Disseminate research results, analyze trends and impact, sharing and discovering innovative technologies, ideas, new markets, competitors, and partners. Better decision making on different levels, national, local, institutional, private sector, general population, as you see, all of these benefits are about analytics, understanding, extract value from your information from your data. So the two pie pillars of the project are sustainability and interoperability. Of course, there are many aspects that need to be accounted for that, but the key answer in our approach is provided by adoption of enterprise-grade open source project and use of agile methodology. Adopt open source technology requires careful scouting and evaluation of available solutions. Not the technicality, the governance and license model must be considered. The selected solution must be monitored and we need to be aware of the community mood. An active role is required for an effective use of the solution. For science is a lead contribution of this space and we are happy when we can contribute to other open source projects as well. The agile methodology allows us to govern to change instead of waste time fighting against that. And a so large project is obvious that change will arise. We provide high-quality software using agile methodology that can be improved step by step. Agile is an abuser word now day and for science we take it seriously in basing time in training and having several members of our staff certified as product owners, scammers, scammers. The architecture of Peruvian national platform is quite complex as you can imagine due to the scale of the project. It's an interoperable project. It's not a monolithic platform. It's built on top around this space that is at core of the project. 
But information are automatically collected from several sources. We extract, we receive information from institutional system that share this data with the central installation using standard such as high PMAH, service, using to open air Kailine for Chris manager. And so but also from some commercial and not commercial database. You see Skarpus, Hallishia, Crossref, PAMED and many, many other. Of course there are also some government sources of information like Renacit, Sunat, Sunedu and other Peruvian national database. On top of that, this is just Peruvian research data. But the research is something that work at the international scale and is inside a broad community where open data exists and need to be used to enrich your information to extract value. So what we have done for the analytic part that is the main focus of today presentation. We have introduced mainly three components. Open search that is a community driven open source search and analytics suite derived from elastic search but licensed with the Apache license. This because elastic is not longer an open source initiative. It's not adopting an open source license. And we were very careful about that. A technology partner experience in open source know that an in depth analysis understanding of underlying communities required to make sustainable design and avoid to expose the project to risk. And for this reason, we have from the start decided to stay away from locking traps in Kibana via the X pack and decided to use the open distro app license at start and later on move to open search when elastic had decided to change the license of the core analytics platform becoming not longer open source. The second component of our analytics solution is dream you another open source project that provide a data lake engine that create a semantic layer and support interactive queries. However, distributed data source to provide a visualization of these information. We have adopted a patch superset a modern data exploration and visualization platform. Here you see the flow of data in our open in our analytics component. The main formation come from this space creased they are synchronized to they are sent to open search with a QE mechanism that allow near real time synchronization if needed or overnight synchronization. In this step, data are preprocessed so that we are able to provide different view over same data in open search. We will see more about that later. Open search become one of the source of information for dream you for the data lake together with many other unknown source that exist into why. It's important to know that I'm talking about unknown source because the project is not to build an analytics over a set of data that we know in advance but we want to give to Peruvian government to freedom to join the data from the information system with any other source that could become available in future and open looking data and local database or appreciate that they will provide with additional information. This data lake created in dream you will be visualized and explored using superset. In a so large project and in general in any system there are different user and to project scale can be quite different. The national scale of Peruvian project will not apply to any other project. For this reason we are following approach of progressive enhancement of the solution from the functional perspective. 
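To make the ingest step above more concrete, here is a minimal, hypothetical sketch of writing one record into OpenSearch over its document REST API. The host, index name, and document shape are invented for illustration; the actual PeruCRIS synchronization is the queue-based mechanism inside DSpace-CRIS described in the talk, not this script.

```javascript
// Hypothetical sketch only. OPENSEARCH_URL, the "publications" index and the
// document fields are assumptions; they are not the PeruCRIS schema.
const OPENSEARCH_URL = 'https://localhost:9200';

async function indexPublication(doc) {
  // PUT /<index>/_doc/<id> creates or replaces a single document.
  const response = await fetch(
    `${OPENSEARCH_URL}/publications/_doc/${encodeURIComponent(doc.id)}`,
    {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(doc),
    },
  );
  if (!response.ok) {
    throw new Error(`Indexing failed with status ${response.status}`);
  }
  return response.json();
}

// One document per author contribution, so the same paper can later be
// analysed as a whole or per contributing institute, as described in the talk.
indexPublication({
  id: 'pub-123#author-1',
  title: 'Example article',
  year: 2021,
  peerReviewed: true,
  author: 'Jane Doe',
  institute: 'Institute B',
}).catch(console.error);
```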
Moving from a built-in support of analytics in this space, to analytics done powered by open search to a data lake engine that is powered by dream you and superset. What we mean for built-in levels. This space support have flexible search engine that allow that analytics and exploration result can be exported in configurable formats including CSV, Excel file, PDF and aggregation can be used to narrow the analysis and provide basic visualization. So here in the screen you see how on the Peruvian project we introduce graphical visualization for search result or for the whole database and we can provide different type of graphs such as pi, bar, line and so on. And this graph can be built on top of any aggregation dimension that you want to use on your data. So we are talking about publication type, involvement, institution, authors, years, dimension of your project and so on. This visualization can also be included at different level of the platform. So all the then in the search they can be included when you visualize the data about a specific project or a specific person. Data can be extracted from this space crease. You can of course use the rest of the PI but you can also very easily extract your data in an Excel format or CSV format or any other custom format that you are going to configure in the platform. So it's not only about local visualization. And all the search and export in this space crease are contextual based on the security of the user that access the system. So you can create reports about public data but also reserve the data. Which is the second level? The second level is the analytics at the on that has been engineered for in the Peruvian project but we are able now to offer as a generalized solution to any space crease installation just for a share of the initial cost of design. The analytics at Dorm provides self service capability. In this space crease you are able to configure many aspects. You can configure additional facets, additional filter, graph and so on. But if you change your mind you need to change the configuration. The analytics at Dorm provides you self service capability. You can change your reporting system without changing configuration without changing the code. Moreover, during the ingest of the data, the data are distracted and this is very important to provide easy analysis from different perspectives. For instance, it's quite common to focus on when you talk about publication you can look to the publication itself or you can look to the single contribution of an author in a publication. If you try to count how many publication an institution has, you there are some scenario where it's also important to know how many contribution to a specific publication, a specific department has provided or another department has provided. And it will be quite different if an author is the single author to publication or is an author in a large group where there are external author or other co-author in the same department and so on. It's also important to know that the analytics at Dorm can be accessed from external application as a normal SQL dataset so that you can create these tools also from Power BI or Tableau or even Excel. What these two provide? Provide you the option to create several dashboards. You see dashboards that are predefined for organization to analyze your organization unit, your person, your project publication. You see that for publication there are several view over this publication just to resume the previous argument. 
So you have analysis of publication as a whole, of single contribution view of the publication, a different timing or perspective of publication where you don't see publications associated with department by the mean of the affiliation stated in the publication metadata by the mean of the current affiliation of the researcher. You see a total number of different widgets can be arranged in any dashboard. This is an example of the project dashboard where you see the number of projects that are running in your institution, the distribution hover here of the project, the different involvement of different institutes in your project and also the economic value, the distribution of economic value of the project for different institutes in different scale of the project so that you know that the institute B have a larger number of very costly projects of very high family projects. For publication, another default dashboard that is provided allow you to see again the total number to analyze how many peer reviewed against not peer reviewed publication, your distribution type distribution, contribution by single institution institute to the scientific output, the keywords, the research area that are focused on your publication, distribution by author. You see different performance. You can use metrics like the scope of citation or website citation or other metrics that you compute at the institutional level for your publication to compare the performance of different institutes. Here you see an example based on a fake metric that we have generated over the average of the metric value for the publication and the median distribution of this metric for the publication. You can make some sophisticated statistics analysis of your data. Again, you can see how on a height map, how your publication are distributed over the scale of your evaluation. Again about evaluation is important that you can base your evaluation on existing bibliometrics but also on other metrics that the rule are defined at your institution so that you give different weight to different publication type or number of contribution or impact, social impact of the research output that you are able to track somewhere in the system. The dashboards are all interactive so you can click on any element and narrow your analysis to this specific element and go down up to the detail of raw data that contribute to these analysis such as the list of publication. The service capability is quite simple. You can edit your dashboard. You can rearrange the element just using drag and drop, resizing the element. You can create a new panel using the panel that you have already configured or create completely new visualization using a set of redefined widget that exists in the platform. Last layer of analytics. You see that the analytics are done provide you service capability and a lot of the data value but is still limited to what you know in advance. You start from the space crease data. You can enrich this data with a couple of external sources that you have identified in advance. Dreaming is a lake engine. This means that on the user interface your administrator are able to register on demand new data source and configure on the user interface the way that this data source need to be joined with your existing data to create a virtual dataset that can be created. So you can join your data with any external Sparkle and point, Excel file, database or S3 data and so on. 
And Apache superset is quite similar to what OpenDashboard, to former Kibana, can provide you in terms of widget tool, dashboard capability and so on. But they work on your virtual dataset. So it's not limited to only to this space crease data. And in this example you see that for the Peruvian project we have joined to crease data with demographical data that come from other national database to visualize normalize the data over to Peruvian country map of research impact activities. This is just another example of a dashboard in superset to show the capacity of the widget for patent and intellectual property in the Peruvian project. So thank you. I hope to get time for some quick questions. Thanks anyway. Yes, we have time and we already have a question for you. Do you cater to different partner organizations in the sense that you provide an adaptable data model, send you multi-tenary in this space crease or have provider map the data to your data model? Do you cater to different partners organizations in the sense that you provide an adaptable data model, send you multi-tenary in this space crease or have provider map the data to your data model? The question is also in the chat. Okay. Yeah, not sure to catch hold to the point. At the in the Peruvian project is some sort of multi tenancy because all institution have data isolated from the other when these data are contributed to the national database. But these data are later on aggregated at the national scale because of course there are collaboration among the institution and Consitech will provide editorial check of this data and normalization, the duplication of this data cross institution. At the analytics part, data are based on the Mellion self data model. But of course this can be extended is extended to meet the local need of institution so that extra information can still be processed by the analytics model and of course also more into data lake. Are there any more questions? So, I think we give you another minute for questions. And in the meantime, I might remember you on the great event and we had together at open repositories 2018, we were dancing as bears on the big stage in the ideas challenge. And if you don't want to see that you should come up with a question very soon. I'm afraid I scared everybody and we are thanks a lot for this great talk. And I'm sure we see each other today a little bit more later. Sure. Bye bye. Bye. Bye
|
4Science was awarded a contract from the Consejo Nacional de Ciencia, Tecnología e Innovación Tecnológica (Concytec) for the development of the National Platform #PeruCRIS, based on DSpace-CRIS and funded by the World Bank. In the context of the project, a very sophisticated solution was developed for the analytics and reporting functions. This solution provides DSpace-CRIS with a powerful set of tools for data analysis, reporting and visualization, based on a combination of state-of-art and open-source technologies OpenSearch, SuperSet and Dremio.
|
10.5446/50118 (DOI)
|
So, can you see my screen? You should see some sort of terminal. Yes, we do. Okay, so I guess we will start. No, I think you should wait for a minute or two longer. Okay. It's just four and one minute. No, people are late usually. Yeah, but I don't know about Victor, if he wants to join us or if he knows that he should join. Ah, Polyester read our mind. He mentioned that any pre-training chatter I will edit out, so feel free to socialize before. So, people, if... Say hi if you want. Let's hear your voices so we know we are not the only ones with a voice. So, hi everybody. Hello everybody. Yeah, Victor is here. Yeah, sorry, I'm busy in hearing, so I'm going back and forth and everything. The deployment training and there were so many cameras of people and I thought there was so cool to see. But I don't know if he was from Zoom or was it Meet, not Meet, GC, what was that? Maybe just Zoom and everybody had their video cameras on. Yeah, all the trainings are using Zoom, so Zoom everywhere here. In the end, I'm actually focused on the progress that we have done so far with the training and if possible and great if you can allocate some time at the end of the training. So, people who haven't participated in yesterday's training. Wait, wait, can you use the host? There's a lot of background noise. Hold on a second, or mute everybody, I don't know. Well, I think maybe Victor was in a crowded room. No, it was me. Sorry. No worries, but it's nice to hear that background noise because like this is just so, so, so quiet. And it's tough to deliver something without any human feedback. So yeah, I don't mind at all. And of course, it's good to hear those people, which are right now in Sorrento and probably having a really, really good time as a clone community who got together. But we... Here you go. Okay, cool. We always... We're all here, man. We miss you all. Well, unfortunately, we couldn't make it. Here's your video host. Okay. So... Zoom manager. Yeah, if I would know how to use Zoom, it would be great. I think I should learn more about it in any case. So yeah, we've gone through a good bit of a tutorial yesterday. We will go through the rest of it. And I think it will be a good idea to do some hands-on, like really, really... All those who are participating and who are willing to give us a try to... Let's do this together. Let's bootstrap a project, a Volta project if you haven't already done so. Let's bootstrap a new Volta add-on. And I don't know, if you want, we can think of new type of add-on that we can develop or we can even further develop this add-on. For example, we can think at the end on what cool features we can add and so on. I think that would be one good idea on how to solidify our Volta skills. And also, I think a good idea would be to have, let's say, a question and answering session at the end. Just not focused on anything specific, but if there's anything you want to know about Volta or about JavaScript developing, especially if you're a clone developer by trade, let's say, and maybe not so experienced with Volta, we can also do that. Okay, so... Yeah, and I was about to say something about, and ask me anything, session maybe. Yeah, so... Yeah, that's that one. So I think we have the same idea. Okay. But if you follow along and you do have a question, then do try to step in because perhaps you will forget if you ask in the end, it's maybe going to be answered otherwise. So if you do want to step in, then yeah, do it. I think Tiberio will appreciate it. 
In any case, I'm quite happy with the training yesterday. Just, and this is my impression because I've noticed from last year's training that they were published online in YouTube. And of course, when you're online, you no longer have... I mean, when it's recorded, you no longer have the possibility to participate and to actively ask questions and so on. Of course, it would be... If you watch that video at that point and you have some question, maybe somebody else had the same problem as you would have had or something like that, I might pick up those possible problems that somebody would encounter following this training or not. I don't know. So in case you see something that troubles you or you see something that you don't quite understand, you don't quite get, feel free to step in and... So Tiberio, if I can chime in one thing, a lesson I learned the hard way yesterday on the deployment training. If you start up a new photo project with a German generator or anything else, watch out that you're not using an older Node.js environment in which anything was installed. So I battled for an hour yesterday when it turned out I used an old NGM one, which had a Yeoman 3.0 something installed from half a year ago from an experiment. I was getting all these kind of strange of errors and when I just wiped it and did an NGM install from the latest LTS version of Node.js, all my problems went away and I was up and running in five minutes. So for other people, if you start up the process, always check on your system that you're using, that you didn't do any... Because we used to do global installs one or two years ago for tools. Don't make sure you're clean when you start up the environment. Yes indeed. I saw the discussion on the deployment training channel and it's a good point. And at some point yesterday, I've mentioned that you should always use NVM, so the Node version manager. And with that one, of course, it becomes easy to clear everything and to start from scratch. And usually, yeah, install new versions of Node and so on. But it's a good point about that one. And yeah, I think David also had another, let's say, mention that I should explain in our table schema, this one. But I said it can be an object, but it can be defined directly as an object, but we use a function to generate the schema just because we will need... When we generate the schema, we will need, for example, the Intel object and for example, the form data, maybe if we have a dynamic schema and something else. For example, we can use a schema as a base and then we will add to that schema. And we're going to do that in the course of this training today. But this Intel object, it is injected into all block components. And we can trace that injection process, how it happens. But basically, this one is a pure function. It's not a React component. It wouldn't have access. So inside it, I wouldn't be able to say const Intel is use Intel or whatever. I wouldn't be able to use a hook because this is not a real component. So instead, this is just a function and the Intel has to be injected or has to be provided by the block. And we did that basically because here in my left side, in the data table edit component, I have passed the props as arguments. And these props, this object gets deconstructed here. And from it, we only care about the Intel. So basically, I could say rest and the rest would contain the rest of my props. Or I could say just props and then whatever I have Intel, I need to say props.intl and so on. 
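As a small illustration of the point about the schema being a plain function rather than a component, here is a hedged sketch of a schema factory that receives the injected intl object from its caller. The message ids and the file_path field are placeholders modelled on the walkthrough, not the tutorial's exact source.

```jsx
import { defineMessages } from 'react-intl';

const messages = defineMessages({
  dataTable: { id: 'dataTable', defaultMessage: 'Data table' },
  file: { id: 'file', defaultMessage: 'Data file' },
});

// Plain function, not a React component, so hooks like useIntl are not
// available here; intl has to be passed in by the block's Edit component.
export const TableSchema = ({ intl }) => ({
  title: intl.formatMessage(messages.dataTable),
  fieldsets: [{ id: 'default', title: 'Default', fields: ['file_path'] }],
  properties: {
    file_path: {
      title: intl.formatMessage(messages.file),
      widget: 'object_browser',
      mode: 'link',
    },
  },
  required: [],
});

// In the Edit component, after deconstructing { intl, ...rest } from props:
//   const schema = TableSchema({ intl });
```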
But like this is just more cleaner and easier and easy to see and so on. So if you want to see how that Intel object is injected, basically, we're looking at the data table at the block edit component. So if we go into a photo code, source components, manage blocks, maybe block edit. At some point, we can see probably here, this one. Okay, this is the block edit component. So blocks config. Yes, move the window a little bit to the center and bump up the font size a little bit, zoom in a little bit. Okay, yes. Okay, what about this one? I'll just close it. And like so. Okay, so blocks config, it is one branch of the VOLTO configuration registry. So we can see here that it is defined as config blocks config. This is the fallback because it's actually possible to pass a custom blocks config into the edit component, which is a really, really nice feature. And yeah, I'll explain a little bit later how you can use that. So the block component is blocks config by block type, the edit field in that object. So basically in our code, it would be this one, edit, data table edit. And that one is just the component that we were looking on, looking before. Okay, so this one, data table edit, it will be, so it becomes like, let's see, block, block, block, block this, right. And this one gets all the props from this parent component. So we can further trace down. So by doing so block and by passing this props, and we're looking here at a class-based component, where we are passing all of the current component props to this block component. And among those components, among those props will be inject intel, which injects the intel object. And that one allows strings to appear as translated inside the render output. One little tip and trick, let's say, because it is quite strange, let's say in JSX, when you have this, when you're rendering or you're using a component, first of all, you have to use the, let's say, JSX syntax, this sort of XML syntax, right. But you can, and you are absolutely forced to declare that component name in uppercase, because that's how React recognizes components. So for example, in case you pass down a component as a prop into another component, so let's say that in this component, I would receive a component called, I don't know, let's say, header component, right, it would be a prop. If I want to use that component somehow, I need to do something like const header component is header component, and then use that component, like so, header component, and then, yeah, select it. I can just pass whatever is here. And here comes the strange part. What do you do, for example, if you want to have a dynamic component, something that's rendered as h1, let's say, or headline one, or headline two, or whatever you want, you can actually do something like this. So you can declare a constant with this, that's just a string, that's just the tag name, and that's just like this, and this would actually work. So you could have something like, I don't know, some text here, and that would work. Yeah, React is strange, but strange and nice, I guess, I think. Okay, so getting back to where we left last time, sorry for switching screens, I'm just, yeah. Let's, let's, let's, let's, Tb, move the window to the left because, like, like so? Yes. Okay. You're kidding me. No, if you have the chat open, then the video will overlap with the browser. Okay. So this is safer. Okay. Is this, is this fine if I leave it like this? Okay. I will assume yes. 
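The two JSX details mentioned here, a component passed down as a prop and a tag name kept in a plain string, can be condensed into a small sketch; the names are made up for illustration.

```jsx
import React from 'react';

// Sketch only; headerComponent and level are invented props.
const Example = ({ headerComponent, level = 1, children }) => {
  // A component received as a prop must be rebound to a capitalised name,
  // because JSX treats lowercase tags as plain HTML elements.
  const HeaderComponent = headerComponent;

  // A string works as a dynamic element type, e.g. 'h1', 'h2', ...
  const Heading = `h${level}`;

  return (
    <div>
      {HeaderComponent ? <HeaderComponent /> : null}
      <Heading>{children}</Heading>
    </div>
  );
};

export default Example;
```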
So let's, let's quickly go through what we have covered yesterday. So we are at chapter five. I didn't go, I didn't, I didn't really go into the overview of Volto, because I think that's pretty familiar. We did the bootstrap. We bootstrapped the add-on. Briefly talked about Mrs. Dev. Then briefly talked about the necessity of adding add-ons as a workspace in case we develop the add-on and it has first party dependencies. And then, and then, and then loading the add-on. And that's needed because Volto needs to know when it should load the, when it should execute the loading function of an add-on. The add-on possibilities. And we've mentioned that they, they have absolute power over Volto, just like in Plone and just like the Volto project. We covered a little bit the add-on, the, the way the configuration loads. And that is Volto declares first the configuration. Then each add-on that, that is being loaded by Volto has a chance to modify that configuration. The add-ons have a resolution order. So an add-on can declare that it loads another add-on and there will be a dependency graph created with this that will be resolved. Then all of these add-on profile loaders will, will execute. At the end, the project will be the last one that, that has a chance to come up after the add-ons and fix that configuration. We've developed a basic block and that's like really, really simple process. And, and, and then just build on top of that to, to add new functionality. First, first it was the edit field, the possibility to link one, one content that, that was already uploaded in Volto in Plone. So we, we were able to create a link to that content. And then we used a little bit of less to, to style the block. We looked at how to make a fallback. So we, in case, in case the block is just created, it's empty, how, how we can make it more user friendly. And just to remind you that is this feature, if I don't have a file, then I, I have this default kind of view, but just, instead of just having some text like no value or, or something like that, right, we are basically providing a minimal interface to edit that block. Then we looked at how to fetch data for the block. And we've created an action and reducer pair. We've gone a little bit through detail on how Volto API machinery fetches content and, and how you can create new actions to fetch content, or you can use Volto's existing actions to get content. I should mention here, because it's really interested, interesting here, we have getrock content, we use it because I'm not 100% sure right now why I had to do this. But it is needed. But Volto has another action already inside it, but that is able to fetch. All right, I remember. Volto's default get content action tries to fetch data, but it loads it as JSON. And we don't want that because we have text data, it's a CSV file. Okay, so, but in case you develop new endpoints, new, new features for your add-ons, you can use that get, or get content, or yeah, in case you want to fetch content for one item from Plone that you know that the path for, you can use get content. Get content, so that's the point I'm trying to make. Get point, get get point, get content is the main request for Volto. So try not to override it. Try not to override it, because you're, it will be like this. So if we go to source actions content, and we go here into get content, it has this subrequest argument. 
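For reference, a hedged sketch of what such an action and reducer pair for raw, non-JSON file content can look like. The action shape follows Volto's API middleware convention of a request key with op and path; the constant name, download path, and state shape are assumptions, not the tutorial's exact code.

```js
export const GET_RAW_CONTENT = 'GET_RAW_CONTENT';

// Action handled by Volto's API middleware; the middleware dispatches
// GET_RAW_CONTENT_PENDING / _SUCCESS / _FAIL around the request.
export function getRawContent(path) {
  return {
    type: GET_RAW_CONTENT,
    request: {
      op: 'get',
      path: `${path}/@@download/file`, // assumed download endpoint
    },
  };
}

const initialState = { loading: false, loaded: false, data: null, error: null };

export function rawdata(state = initialState, action = {}) {
  switch (action.type) {
    case `${GET_RAW_CONTENT}_PENDING`:
      return { ...state, loading: true, loaded: false };
    case `${GET_RAW_CONTENT}_SUCCESS`:
      return { ...state, loading: false, loaded: true, data: action.result };
    case `${GET_RAW_CONTENT}_FAIL`:
      return { ...state, loading: false, loaded: false, error: action.error };
    default:
      return state;
  }
}
```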
And in case if you don't use it and you trigger get content, whatever content, whatever data is fetched from that get, from that get content action will override the whole content of the page, which probably you don't want. So to be able to fetch additional content, additional data, you have to pass the subrequest here. Okay, then, but it's all here in the tutorial. We've registered the reducer. We looked at the redux of the developer extension. Then we've plugged that data into a block and display it as a table. We've gone in and restructured the code to split the data loading code into HLC, a higher order component. And we've already explained that these ones are pretty much like Python decorators. We can consider them that, but they are on their own. They are kind of like a React component. That just wraps another component. Okay, so let's see what else we did. And we've started to look at how we can use a client-side schemas to quickly build the settings sidebar for blocks. Because that's a lot more easier to develop, to maintain along the future and so on. And of course, there's the widget resolution mechanism that is used by Volto. And this one has to accommodate blown REST API dexterity schemas and also client-side schemas. But there's a ton of ways that you can make sure that your own widget, the one that you want for a particular field is used. And it can be really easy because you see these two have the highest importance by field ID. So if you have a widget associated with a particular field ID, it's going to be the first one that's found. And then because this is like a Boolean operation, it's just going to stop here. And the second one is by widget name. So if you put widget just like here, if you put a particular widget name in the schema, it's going to have a really high importance and it's going to be just picked up. So yeah, making sure that your particular widget is the one that will be used in the schema rendering, it's easy. What else did we do? And yeah, we've defined this schema. I've explained why it has to be a function. And because it's a function, we're going to abuse it later on so that we have a dynamic schema. We're just going to change it from our block. So what else do we have? And yeah, we've formatted that table. And because of our table and because of our schema and the way we've structured the information, it's really easy. And we made the comparison with the table component right now that sits in Volta. And that one should be refactored. But that one is developed in the old style, which is a lot more explicit. And this one is a lot simpler and shorter. Okay, then we did yet another hook, which for the purposes of this tutorial could have been avoided, but I really, really think that that is the way to go. That is how when developing add-ons, that's how you should also structure your code. Try to put the rendering part of the component in a component and a lot of reusable logic separated into something else. Another component and HLC, a hook or anything, but just separate it. And there is a recent block that we've contributed to Volta, the search block, which can serve as a sort of more modern style of programming for Volta. And that one also uses the same pattern of splitting the logic into into hooks and then just leaving the presentation parts to the JSX component. Okay, then we just split it up, everything. Here we are using inline form. And I think they should be mentioned. The inline form is just more or less a schema renderer. 
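A sketch of the subrequest idea: when additional content is fetched with Volto's stock getContent, passing a subrequest key keeps the result in its own slot instead of replacing the page content. The argument order and the subrequests state path are written from memory and should be double-checked against the Volto source shown above.

```jsx
import { useEffect } from 'react';
import { useDispatch, useSelector } from 'react-redux';
import { getContent } from '@plone/volto/actions';

// 'my-subrequest' is just a key of our choosing; the content reducer files
// the result under state.content.subrequests['my-subrequest'].
const useExtraContent = (path) => {
  const dispatch = useDispatch();
  const extra = useSelector(
    (state) => state.content.subrequests?.['my-subrequest']?.data,
  );

  useEffect(() => {
    if (path) {
      dispatch(getContent(path, null, 'my-subrequest'));
    }
  }, [dispatch, path]);

  return extra;
};

export default useExtraContent;
```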
And I think, I don't know, Victor, maybe you can confirm or not. At some point I saw that there is a schema renderer in Volta or in your code at the KIP concept. Were you using it in the same manner or? Yes, but we deprecated it. So we are using always the blocks data form in fact. So the inline form is the basic schema renderer, but for blocks we should actually use blocks data form. For this tutorial, I didn't want to go into directly using blocks data form. Just let's build step by step. We will use it later on. But the idea is inline form is the schema run, the most basic form. And we also have another flavor of a form that is integrated with blocks. And that one basically it can allow add-ons to further offer variations to a block. So for example, if I'm going to look into Volta into the configuration into blocks. And I will look for variation. Like so the listing block, because I'm looking at the configuration for a listing block, the listing block has this variations list array where it lists all the possible variations. And these act as templates. For the block. And so in my code to use the block data form, what I should do is basically instead of inline form, I should use block data form. And that should be it. But that should be the only change required. And that one will basically allow in the editing, it will allow to have the variations drop down. We would still have to integrate the view part with the variations because a variation, like if I go to VoltaQ concept, just to quickly show. Okay, a variation also means a particular type of view. So we would also have to integrate the view part of that variation, which is yet easy to do. And maybe we will go through that part later on. Okay, back to this and we are finally up to date. Let's say we catch up with what we did yesterday. And today we're going to do something which is really, really satisfying, which is to add customization possibilities to our data block. And because of the VoltaQ, sorry, because of the object list widget, and I will explain what exactly we are going to do. And then we will explain what exactly means object list widget. Because of that widget, we are able to very easily add customization. And for that one, you can see it here. I need to check what code I'm running. But this code shows you what we are trying to achieve. So basically I can add multiple column definitions. I can move them like so. And in each column, I can, for example, define which column to show. So in case I, from that CSV file, I don't want to show all the columns. I can define which column to show. And if I can pick, yeah, I will pick this column. Then we can have options for that particular column. And we can also choose a column template or a column variation. So we can, and you can see that there is this extra field that's being added into our form, depending on which column template we have chosen. So, yeah, and this is what we will achieve and we should, we will take a look at how to do that. This widget, I think I've mentioned, but this widget is the equivalent of the data grid, data grid table, I think, or data grid field in, yeah, data grid field in Plone. But this one is integrated in Volto. Okey dokey. So let me just quickly check exactly what I'm running. Okey. Okay. Sorry. Okay, so, yeah, my, my version of the code was two way too much ahead. So now I'm back to the current status of the training. 
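A hedged sketch of registering variations on the block, which is what makes switching from InlineForm to BlockDataForm worthwhile: the variation picker then appears in the sidebar. The block id, variation ids, placeholder components, and the 'view' key holding the render component are all assumptions for illustration; on the form side the only change is, roughly, swapping InlineForm for BlockDataForm from '@plone/volto/components' with the same schema and onChangeField.

```jsx
import React from 'react';

// Placeholder render components, just for the sketch.
const DefaultTableView = () => <div>default table</div>;
const CompactTableView = () => <div>compact table</div>;

const applyConfig = (config) => {
  config.blocks.blocksConfig.dataTable = {
    ...config.blocks.blocksConfig.dataTable,
    variations: [
      // The key that holds the render component differs between blocks
      // ('template' vs 'view'); 'view' is assumed here.
      { id: 'default', title: 'Default', isDefault: true, view: DefaultTableView },
      { id: 'compact', title: 'Compact', view: CompactTableView },
    ],
  };
  return config;
};

export default applyConfig;
```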
So, I've shown before what what the column would look like when rendered so basically we want to, we want for each of these CSV, we each of these columns that come from our CSV file, we want to be able to have those formatting options. And to do that, we will use the object list widget. And what how to understand that because we, we also have the object browser widget in Volto, how to understand that is, it's a JavaScript object widget right. So, when, when we have a scheme, a schema, for example, this, this schema when we have it what we use it to render a JavaScript object that Jason let's say, right. And our object list widget will, we will have a list that has JavaScript objects with whatever information. And for each of each of those items in that array, we will have a schema that will that we will render and we will use inside the widget to enter that to edit that information. Okay, so, in the tutorial. We have this new schema. I'm going to copy and put in here. JavaScript scope is, I think, actually, I don't think I'm knowledgeable enough about discussing JavaScript variable scope or JavaScript naming scope but surface to say that we can put this column schema, either, either, either before the table schema or after it doesn't matter. We will reference it from the table schema because we will add a new field called columns. And that field has to be defined. We can define the define it like so. And I'm just gonna get that definition from the tutorial. This is a new widget. We have title description, which is the usual things we we say that the object is object list. And we need to pass a new, a new prop that is the schema. A prop, but he will be used by widget. And actually, I didn't realize this at first, but Victor told me at some point, you can actually put anything as a prop here. Like, I don't know, let's say, class name, right. So it will be passed to the, to as a prop to the field, and then as a prop to the widget. So if there are widgets that, that received certain props, and those props, you have, if you have access to them in a schema like that. You can pass them as a prop to both widget, and you can influence what you put it does and for example here, the object browser widget, we have this mode link, which is a prop that go that is important for the object browser. Widget and that one for example, just to quickly show what it does. Sorry, I keep forgetting. Yeah. Basically, it, you see now I have again, the button, because if I don't have mode link past to be passed to the widget, then that widget is multiple file picker, it's not a single file picker. So back in my schema, I will just change it back and reload. And yeah, now you see it's a single file picker. When I have a value, I have the X to clear. Okay. So, now that I have added the columns, right, I have added here in the properties here in the field set. Don't forget, don't forget to add it to the field set, because if you're, I tend to forget. I get confused why, why did I add a field but it doesn't appear. Okay, so let's look a little bit at the columns. It's pretty basic. Pretty simple, something weird. Let's say we have the title, it's a text. We, we just we have the text template again text widget. We have the, the text align, which is a choice. And this one basically will create a single select widget with these choices. And this one, this one is I think value label so I think it should be kind of like this. 
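Condensed into code, the column sub-schema and the columns field look roughly like the sketch below. Whether the object_list widget expects the schema factory itself or the already-called object can differ between Volto versions, so treat that detail as an assumption and follow the tutorial source.

```js
// Schema rendered for each entry inside the object_list widget.
export const ColumnSchema = () => ({
  title: 'Column',
  fieldsets: [
    {
      id: 'default',
      title: 'Default',
      fields: ['column', 'title', 'textTemplate', 'textAlign'],
    },
  ],
  properties: {
    column: { title: 'Data column', choices: [] }, // filled in later from the file
    title: { title: 'Header' },
    textTemplate: {
      title: 'Text template',
      description: 'Add suffix/prefix to text. Use {} for value placeholder.',
    },
    textAlign: {
      title: 'Align',
      choices: [
        ['left', 'Left'],
        ['center', 'Center'],
        ['right', 'Right'],
      ],
    },
  },
  required: [],
});

// In the block schema (and do not forget to list it in the fieldset):
//   columns: {
//     title: 'Columns',
//     description: 'Leave empty to show all columns',
//     widget: 'object_list',
//     schema: ColumnSchema(),
//   },
```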
And then we have the column, which again is a choice, but we don't have, we don't have it populated, we don't know what, what, what actual columns what actual choices we have here. So that information only leaves in the block, we will have to look at the block to for that information, or, and basically we will have to get the schema, and then change it, and then pass it to the inline form to the, to the form renderer. So that process we call schema enhancing. It also exists for in other add ons as schema extending this, this object list widget also has an extensibility mechanism for, for the inner objects. And it is called schema extender. So, yeah, there's multiple names for it, but they all do the same thing which is take a JSON that represents the schema and change it. If you've done, if you have done this with the dexterity with Z3C forms or something in the back end, you know how difficult it is to have a dynamic schema here in the front end. It's really easy to just tweak that schema because it's a JavaScript object. And, okay. So we've added and yeah, now. Let's look at how, how the page looks like. Yeah. So we have, we have the scheme of the columns widget, but the data column is empty, right. So we have to populate that one. And with, with information that comes from the data. And we have that information in the edit. We should have, if not, we have to, we have to refactor but Okay, so I'll take this little bit of code and I will explain what it does. I'll put that here. So we are in the data table edit. We have to I'll just close and simplify a little bit. So that the table edit is in, in our vote. The vote data table. Second, just to show it again. We are in the Volta data table tutorial source data table data, data table edit file. That's the file we are editing. So we have, we need to have the file data. We don't actually have that file data. Right now in the edit block. If we look at our data table component, we can see that we're wrapping the component with this hook with block data source. But this one only adds the, the initial file picker widget. It doesn't, it doesn't connect our, our block with the data. For that connection we need to use the other hook which is the with file data hook. Right. So this one, it's just going to look at the, at the data is going to basically inspect the props that our component receives. So it should be pretty safe to also import it with file data. And now I can put it like this. Okay. So, like here. And I need to, I need to. So now file data will be injected by the hook, and it will come here, the props. Like so. And let's, let's let's console the file data to look at it. Okay, so it's here. Now, of course, we can use the components tab. And if I look, if I try to search for the data table view, no, the data table edit component, maybe I will just search it here data table edit. Okay, so this one, the data table edit. We can inspect the props here. And we can see that we have file data. And this one is the array where each member of that array is an object key value, right. So header, and header cell name and the value. And we also have the meta with the fields. And basically we are interested in this part, right. So file data, meta fields, that's what we're interested. Okay, so if we look at this code file data meta fields, or an empty array, right, because we want to simplify the code. 
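A hedged sketch of what the hook wiring gives the Edit component: once wrapped with withFileData, the parsed CSV arrives as a file_data prop whose meta.fields array lists the column names used below. The exact shape is an approximation of the parser output inspected in the devtools.

```jsx
import React from 'react';
import { withFileData } from './withFileData'; // HOC from the walkthrough

// file_data is injected by withFileData and looks roughly like:
//   { data: [{ country: '...', value: '...' }, ...],
//     meta: { fields: ['country', 'value', ...] } }
const DataTableEdit = (props) => {
  const { file_data } = props;
  const fields = file_data?.meta?.fields || [];

  return <div>Columns in the file: {fields.join(', ')}</div>;
};

export default withFileData((props) => props.data?.file_path)(DataTableEdit);
```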
So we sort it, and then basically we map it for each simple string, like, like, like country, we are interested in an array with a pair key and value key and label sorry. That's why we are mapping with this function that receives the field name and basically returns this array with the field name repeated. And now, with that, with that array, we can go and and basically change or mutate the schema. And it's a it's a weird path, but we can actually track it just to see how exactly we arrived at that path and how because it's pretty straightforward. Right, so let me go to the schema. On my on my right side I have the schema right. So, who is the schema schema is stable schema. Okay, so in table schema, we will have properties columns. And that is this part here. And in columns is going to have another property called schema, which will be this one. And that object will have another property called properties. So basically, that the schema object is this one we have to go inside it, and that is another object with properties, which is this and then inside it it has a column object, which is this. And it has choices. So that's our empty choices. And we left it empty because here we don't have. We don't have the data, but in here we, we have now. We could have put this code. And this bit this bit of code, we could have also pasted to table schema. So in the table schema because it's a function. Basically, we could have done the same, let's say, constant basic schema is like so, or, or rather, you know, let's say, we have some choices, right, if we pass them as a prop. And then we just pass them here, column choices, and then here instead of choices we say that is column choices, and we need to also get it like so. So, to turn now, I could just do this, where I'm just going to add a new, a new prop column choices is choices. Okay, so now, now the change I've done is basically, again, I have to calculate somewhere, what those choices are I could do it here or I could do it in, in the schema but somewhere it needs to be done. But now instead of just having this drill down into into the schema, we can just pass, pass these options. Let's see if it still works. Okay, reloading. And voila, now we have the columns as available choices. Okay. So good. All right. It's a good idea. And I glossed over it. Here. And we will probably add more hawks and, and, but you, you also saw me hesitating when I, when I had to like close the parents here, because I wasn't very sure what should I pass as an argument to this function and so on. The code is actually less readable because of this because we have this function that gets an argument the result of this function, which again it's its own basic Hock constructor. So, to avoid this, we can use technique from functional programming, which is, which is the compose and that one we import from redox. And with that one, we can do like this. So we have compose. And with with this one with the result of the compose we wrap our data table edit. And basically here in the compose we have to pass the callables. But this time they will be listed as a flat list not we won't have to to chain them in function calls. So I can do like so. So I can put this one here. You raise this part. So now it is more readable because it is going to be just like a flat list it's going to be obvious what what we are executing. And it's if I go to my edit page here. It's going to still work. Right. I still have the choices as options. If I if I just comment this and go here. Right. 
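Putting the two steps from this passage together: the Edit component turns meta.fields into [value, label] pairs, hands them to the schema factory as a columnChoices prop, and the HOC chain is flattened with redux's compose. Names and HOC arguments are approximations of the tutorial code.

```jsx
import React from 'react';
import { compose } from 'redux';

import { TableSchema } from './schema';
import { withBlockDataSource } from './withBlockDataSource';
import { withFileData } from './withFileData';

const DataTableEdit = (props) => {
  const { intl, file_data } = props;

  // [value, label] pairs for the "column" choice field in the column schema.
  const choices = (file_data?.meta?.fields || []).sort().map((f) => [f, f]);

  // TableSchema is assumed to forward columnChoices down into ColumnSchema.
  const schema = TableSchema({ intl, columnChoices: choices });

  // ...render the sidebar form with this schema, as in the walkthrough.
  return <div>{schema.title}</div>;
};

// compose(f, g)(C) is f(g(C)); listing the HOCs flat keeps the wrapping readable.
export default compose(
  withBlockDataSource(),
  withFileData((props) => props.data?.file_path),
)(DataTableEdit);
```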
I should not have them anymore. So it gets applied. Okay. Question or comments so far. I didn't check. Moving on. We should I think we should. Here, the file data is changed to to get the this get file path. So that it's flexible but we can skip this part I think we might get into some troubles later on but we'll fix them along the way. Okay, so now the current status is like this. We are able to declare these columns but nothing happens when I pick a column here because they are not actually being used in the columns in the table view. So we need to we need to put that part in. And we do that. So basically in data table view. Let's open it. The table view. We have to. We will get a columns of these columns. And then they will come from from data because they are saved as part of the data, the data of the block. And then I will I'll just copy the stable and then we can go through it because it's really intricate and I don't want to to mess it. Okay. Like so. And, and we don't, we no longer need the fields. So they are, we, we don't iterate over all the fields that are defined in the file, but we have to iterate over the defined columns the ones that we picked in our block. And if we don't have them so right we are checking here but if we have data columns which might be an array and that array has to be bigger than I mean it has to has to have members inside. So if we have that we're going to use that. But otherwise we're we're just going to basically use the defined fields that comes from the CSV file. Okay. So let's go to the browser. And it looks like it's already using it. So if I pick something else automatically updated and so on. And that's pretty cool. Now, in, in the column schema, right. I also added the header we might want to customize that header instead of instead of one that comes from the file. There's also, sorry. There's also this text template, which would allow us to customize a little bit how the value is displayed in the table. And that template, templates template says, add suffix prefix to text, use curly braces for value placeholder. So I can do something like so, but if I that that curly braces will be my value. I can put them either before or after. So that's like a simple method just to have some customizability to this. And there's also this align, which is the ones that those three choices so left, center, and right. And if we look at the code that does this basically it's, it's all here incredibly, I would say. So the column right, we have the tech of a header cell, and that one text align, basically when we have the column that we iterate over we're just going to use the text align field of that column. And then text align basically is going to be the value that is provided from this field defined in the schema. And let's actually I think it will be interesting to look at the data that's produced by the block by the object widget. And I'm just going to add a few more more of these so let's see. Nothing here but we can say here countries, for example. And we will say that it's on center and search for data table edit here. And we will look at the data at the columns, because this one is the interesting one. So the columns here, as I have said, it's a, it's an array. We have two, two objects inside. So each object, each of these objects corresponds to one of this information here. And then the first object, it has an ID because yeah we want to manipulate these in and we want to be able to to change their order and so that makes it easier. 
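The fallback logic described here, sketched as a compact view component: use the configured columns when there are any, otherwise build one column per CSV field. Prop names on the column objects follow the column schema sketched earlier; treat the details as approximations of the tutorial code.

```jsx
import React from 'react';
import { Table } from 'semantic-ui-react';

const DataTableView = ({ data = {}, file_data }) => {
  const fields = file_data?.meta?.fields || [];

  // Editor-defined columns win; otherwise show every field from the file.
  const columns =
    data.columns?.length > 0
      ? data.columns
      : fields.map((field) => ({ column: field, title: field }));

  return (
    <Table celled>
      <Table.Header>
        <Table.Row>
          {columns.map((col, i) => (
            <Table.HeaderCell key={i} textAlign={col.textAlign}>
              {col.title || col.column}
            </Table.HeaderCell>
          ))}
        </Table.Row>
      </Table.Header>
      <Table.Body>
        {(file_data?.data || []).map((row, ri) => (
          <Table.Row key={ri}>
            {columns.map((col, ci) => (
              <Table.Cell key={ci} textAlign={col.textAlign}>
                {col.textTemplate
                  ? col.textTemplate.replace('{}', row[col.column])
                  : row[col.column]}
              </Table.Cell>
            ))}
          </Table.Row>
        ))}
      </Table.Body>
    </Table>
  );
};

export default DataTableView;
```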
And then all the all the items inside that object, we're going to be the ones that are defined here in the schema. So these ones. Right. So title text template text align right there going to be here text align title and the column which is the one that come back. That is the field name. Okay. So moving on. And in the meantime, I mean, Victor Victor made this object this object with you get to look really nice. I love I love especially this shadow underneath that that makes it look like a stack of things. Really cool. Because that's like an older screenshot. Yeah. I've talked a little bit yesterday about the necessity to write new widgets and I think it's, it's easy. And but it's also important to write new widgets and to know how to write them. I've also stressed at that point that we shouldn't use state in the widget. And I'm just going to do that again because I think it's really really important to remember that. So, I'll take this code and we can then look at what it does. Basically, we have to create a new widget. A widget is a react component. And we can create this text, a line that just six file. And so basically now in my add on in source, I will have a new folder called widgets inside it, I have a new file text align. I'll just close right side not to bother us. And I will paste what I have inside. Now, yesterday I said, the most important props for a widget are value and on change that's, I mean, of course the idea as well. Those are the important beats, because the value tells what we show in the widget and the on change, we have to call whenever we we need to take that value inside the widget. The react react has this way of creating input components and it differentiate between uncontrolled input and controlled inputs and although we almost always have controlled inputs, the ones that our value comes from the parent and to to update that value, we don't change it inside our component but we change the we call the on change callback that we get passed from the parent. And that one will give us back our mutated value. It's a little bit like a, like a cycle like a circle. And it seems not straightforward. And for some people it's frustrating that the react doesn't have two way data binding because, for example, a Vue.js and Delta have that two way data binding. But I can tell you this, I think this this limitation is actually a big plus for react. And it's, it's, it becomes a mess because as a programmer you always tend to not to be very disciplined about it. And you will some at some point cut corners you're saying okay I'm just going to use two way that I've been in here, and it's just going to accumulate and accumulate and and then at some point you will see exactly what state you have inside your application and the application gets so complicated and so complex and the, the number of interactions is so big on the screen that it becomes a mess with react and the fact that we are always we we have this, for example, centralized data store and we're passing we're passing down this state we have a subscription for our components to be updated when the state changes. It's, it is, I think one of the key points to the photo existence existence right now because it's simplified and it made it possible to build Volta was a complex application. Right, so few people, right. Okay, so, um, back back to our widget component. 
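The one-way data flow being described can be boiled down to a minimal widget skeleton: the value always comes in as a prop, local state is avoided, and every change goes back up through onChange(id, newValue). The wrapper and prop names follow the Volto conventions mentioned in the walkthrough.

```jsx
import React from 'react';
import { FormFieldWrapper } from '@plone/volto/components';

// A controlled widget: no internal state, the form owns the value.
const SimpleTextWidget = (props) => {
  const { id, value, onChange } = props;
  return (
    <FormFieldWrapper {...props}>
      <input
        id={`field-${id}`}
        value={value || ''}
        onChange={(event) => onChange(id, event.target.value)}
      />
    </FormFieldWrapper>
  );
};

export default SimpleTextWidget;
```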
What we want is basically the familiar alignment buttons, left, center, right (and justify), the paragraph text-alignment buttons, shown in a button group. That Button.Group is a component from Semantic UI React; if I go to the Button page (I'm on the Semantic UI React documentation site) there is this button group, and we can see it in action under 'Groups'. We want to build this as a widget, because if you think about it, it is one: regardless of the fact that there are several buttons, it's a single widget with a single value, the type of alignment, and so far we've expressed that value as a choice. So, to write the widget: we have a simple div, we have this array defined above, we iterate over it, and for each entry we render a button. I don't know why it's written like this; I think it should actually be like so. Let's test it. Really important: we should wrap our widget inside this FormFieldWrapper. That component takes care of the integration with the form library. It adds the label, so we don't have to care about that, and it handles the grid: if we look at the form, there's this grid system with the four-wide column on the left and the eight-wide column with the input on the right. We don't want to have to deal with that, we just want our specific stuff. And what do we do? As I mentioned, we have a value coming as a prop from the parent and the onChange callback, and we have a bunch of buttons; when one of them is clicked we call onChange, saying which field changed (we do that by passing the id) and what the new value is, which is the name coming from the value map here, so it will be left, center or right. Basically we're saying onChange(id, 'left'), onChange(id, 'right'), and so on. Now that we have the widget component, we have to plug it in. We go to the configuration and say config.widgets.widget.text_align = TextAlignWidget, and we need to import it: import TextAlignWidget from the widgets folder, from TextAlign. The 'text_align' key here corresponds to the name we'll refer to from the schema. We import from the widgets folder, the parent folder, and we're importing the default export; if we look back into the widget file there's export default, so we're exporting the component as the default export. Now that we've done that, nothing should have changed in the browser (sorry, I don't know why it jumped like that). Nothing changed, and that's because we have to go back to the schema and say that we want to use the widget. So back in the schema file of the data table block, I have this line commented; I'll basically switch them, so now we're saying: I want to use the widget called text_align. Save it, go to the browser, reload, go to the table, add a column, and yes, we have the widget. Let's test it. Does it still work? It does. So, quite simple, and we get a friendly widget.
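A rough reconstruction of the TextAlign widget and its registration, assuming the names used in the session (icons and exact styling omitted):

```jsx
// src/widgets/TextAlign.jsx (sketch)
import React from 'react';
import { Button } from 'semantic-ui-react';
import { FormFieldWrapper } from '@plone/volto/components';

const VALUE_MAP = ['left', 'center', 'right'];

const TextAlignWidget = (props) => {
  const { id, value, onChange } = props;
  return (
    <FormFieldWrapper {...props}>
      <Button.Group size="tiny">
        {VALUE_MAP.map((name) => (
          <Button
            key={name}
            active={value === name}
            onClick={(e) => {
              e.preventDefault();
              onChange(id, name); // report the single alignment value to the form
            }}
          >
            {name}
          </Button>
        ))}
      </Button.Group>
    </FormFieldWrapper>
  );
};

export default TextAlignWidget;
```

And the two hook-up points, the registration in the add-on config and the hint in the column schema:

```js
// in the add-on's configuration loader
config.widgets.widget.text_align = TextAlignWidget;

// in the column schema, instead of the choices-based field
text_align: { title: 'Alignment', widget: 'text_align' },
```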
So now we get this friendly widget instead of what we had before. Let me close some of these. Questions or comments so far? If not, I'll continue. You saw at the beginning that for the columns we were able to pick between different templates, so now we want to make some new renderers available. For example, I had a column with numbers up to 100, and I was able to use the Progress component that comes with Semantic UI; here it is in the docs. If we have that number, we can display the value for that column as a progress bar. What do we have to pass? The value? Actually it calculates the percent for you; the prop we have to pass is percent, and it's a number. Pretty straightforward. We'll get to that part a little later; let's first take this Progress component. It's up to us where we put it, but the idea is: this block is extensible, you can add new types of templates for how a column is rendered, and my colleagues, or future me, will work on it, so I'll make a folder dedicated to those templates. Since they render cells, let's just call them cell renderers. So in my add-on, in the source folder (let me close all the other files) I'll create a CellRenderers folder (with or without the plural, naming is up to you), and inside it I'll add the first component and paste the code. Here's a little tip: my component is called Progress, but Progress also comes from Semantic UI, so I alias the import as UiProgress; that way there's no name clash when I use it here, since I'm redefining a Progress component. And what does this one do? It just takes a value and expresses it as a percentage. We'll try to plug it in right now and use it in the table edit. Now, we have this extension mechanism, and we have to define a way for it to be coded. We have to come up with an API, some way to say: this block has these available cell renderers, and in the future other people will be able to push new cell renderers into that configuration, or override one of them if they want, because it's just a plain JavaScript object. So, back in the index.js of the data table tutorial add-on, in the part where we have the config, we add a new section (let me just comment this one for now). The object that configures the block is open; nothing stops us from adding new things inside it. We define a new key, cellRenderers, and inside it we add objects; each object has an id and a title, basically the type and the label for the cell renderer, and we also need the component that will be used when rendering that cell. So we need to import that component. By the way, it's a good idea to create index.js files in folders where you have a lot of components. So: import the ProgressCellRenderer from the CellRenderers folder.
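Sketches of the renderer component and the registry entry described here; folder and key names are my best reading of the session (the view key for the component is settled a bit later):

```jsx
// src/CellRenderers/Progress.jsx (sketch)
import React from 'react';
import { Progress as UiProgress } from 'semantic-ui-react'; // aliased to avoid the name clash

const ProgressCellRenderer = ({ value }) => (
  <UiProgress percent={parseFloat(value) || 0} />
);

export default ProgressCellRenderer;
```

```js
// index.js of the add-on, extending the data table block configuration
import { ProgressCellRenderer } from './CellRenderers';

config.blocks.blocksConfig.dataTable.cellRenderers = {
  progress: {
    id: 'progress',
    title: 'Progress',
    view: ProgressCellRenderer,
  },
};
```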
And here, because the CellRenderers index doesn't exist yet, ESLint is able to tell me that I need to add the index.js. There's no difference between .js and .jsx files; it's just a matter of convention, they are treated the same in Volto, but usually the .jsx files hold React code, JSX code, and the .js files hold everything else. In that index.js we say: export ProgressCellRenderer from './Progress'. Why did I do that? I'm basically renaming the export: I'm taking the default export from Progress and re-exporting it as a named export, ProgressCellRenderer, from this file. I think this syntax is handled by some fancy Babel plugin; in other projects I've had to write it as 'export { default as ProgressCellRenderer } from ...', which is the same thing. Okay, so if we go back, it's no longer complaining; I can save, it gets auto-formatted, and everything should be fine. Now I have this new cellRenderers thing, and we have to plug it into the schema. If we think about it, this one is a little tricky, because what we actually want is not to change the schema of the block; we want to change the schema of one of the objects inside the columns list. That is done with some helpers inside the object list widget; we'll see them in a moment. So we have to define the renderer field: back in our schema, in the column schema, we add this new field called renderer. And because at some point the text template will become a separate renderer template, I'll just get rid of that field here and comment this one out. Then I'll drop this into the DataTableEdit, and at some point we'll also check what it does. We also need to import the config; the config is accessible as a global import, which is really nice. By the time this code executes, the config object will be resolved and it will have all the configuration already applied: it's the same config object that is passed to the add-on loaders and mutated there, but by the time this code runs, the configuration is, let's say, finalized. Now, this tweakSchema function: let's see how we use it. It appears we should replace this line with the one below; let's see, yes, we can replace this part, like so. So now we have a separate function; it gets a schema as an argument, and that schema is the one we already built here. And because inside that tweakSchema function, which we'll look at right away, we're going to need the block data and the file that we fetched, we pass them as arguments. Now, what does tweakSchema do? Let me arrange things a little, put the files side by side, split the screen, so we can look at the schema. Okay: the columns field, we say, is schema.properties.columns, so that's the columns field here. Then the column schema is basically this object; we have it here.
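As an aside, the re-export that ESLint asked for in CellRenderers/index.js is just this (the second form is the plain ES-module equivalent if the Babel plugin isn't available):

```js
// src/CellRenderers/index.js
export ProgressCellRenderer from './Progress';
// or, equivalently:
// export { default as ProgressCellRenderer } from './Progress';
```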
Now we get the columns from the file data, the ones we passed here. Okay, this part we've already optimized by doing it up here, so we can drop it; yes, we can safely drop that part. Then we look for the defined cell renderers, the ones we declared in the block configuration. Let me open it: everything on my right side is the index.js of the add-on, where I define the block configuration for the data table block, and inside it we have the cellRenderers object with some templates, let's call them that. It's an object, so to turn it into a selection dropdown we have to iterate over its keys and define a list of two-item entries, key and label. That's Object.keys(cellRenderers), the JavaScript way to iterate over an object's keys, and for each key we return a two-item array: the first item is the key and the second is the title, the label. That's what we need to define the choices here. And now, in the columns field, this one gets, in principle, a new prop, and that prop is used by the object list widget: it will be called for each of the objects inside that array, for each of the schema instances inside it, let's say. And that's a little bit tricky; I remember it was tricky when I developed it. If you have multiple columns, you don't want to mutate this schema directly, you want to mutate a copy of it, because if I pick, for example, the text template, it's going to add a new field, but it needs to add it to the schema used by this particular object, since the other column might have something else. So this schemaExtender is written here, and it's defined as a function that gets the schema (the inner object schema) and the data. It looks up the extension, and it uses the extension defined by the chosen cell renderer; the text template has such an extension. If the cell renderer defines an extension, we call it with the schema; otherwise we return the schema unchanged, and that schema is this column schema. So let's take the code for the text template renderer, because that's the one that has the schemaExtender defined, and it's a little more complex because of what it adds. Okay, so we have TextTemplate.jsx. What do we do in it? We need the renderer, and we've defined it as a simple component: it gets the column (not the column name but the column definition) and the value, and it either applies the text template field from that column definition or simply returns the value. And because this renderer is a JavaScript function, and this is JavaScript, we are able to attach a new property to that function object.
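Wired together inside the schema-tweaking helper, the choices and the per-column extension hook look roughly like this (variable names are approximations of the tutorial code):

```js
const { cellRenderers } = config.blocks.blocksConfig.dataTable;

// The renderer field becomes a choice built from the registered cell renderers.
columnSchema.properties.renderer = {
  title: 'Format',
  choices: Object.keys(cellRenderers).map((key) => [key, cellRenderers[key].title]),
};

// Called by the object list widget for every column object; it delegates to the
// extension defined by the chosen renderer, if there is one.
schema.properties.columns.schemaExtender = (colSchema, data) => {
  const renderer = data.renderer && cellRenderers[data.renderer];
  const extend = renderer && renderer.schemaExtender;
  return extend ? extend(colSchema, data) : colSchema;
};
```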
Attaching it as a property like that is just for simplification; it could have been a separate export, and that wouldn't matter, but if we keep it like this we only need to import TextTemplateRenderer and we get a simple reference to its schema extender. You can imagine that you'll eventually have multiple schema extenders, they'll want to have the same name, and you'll run into naming issues; and, as Victor says, naming is one of the hardest problems, so it's just easier to avoid. Now, back in the main index file of the add-on (I'll show you how to use it; actually I already have the code here) I have to import the TextTemplateRenderer. You can see that I'm defining the view, the component that will render my column value, but I'm also defining the schemaExtender, which is the function defined in the TextTemplate renderer that extends the schema used for this column. That's why I said we only need this one import: attaching the extender as a property of the TextTemplateRenderer function is just a shortcut so we don't have to import yet another export. Then I go back into the CellRenderers index.js and export TextTemplateRenderer from './TextTemplate' (sorry, I have this mic in front of me and I can't see the keys). I'll close that one, and now we can look at exactly what the schema extender does. In the TextTemplate renderer, the schemaExtender gets the schema and it gets the data, and that data is the data for this particular column. If the chosen renderer is not the text template, we refuse to do anything and just return the schema. Otherwise we have to clone it, because, as I said, we could have multiple columns and we're dealing with JavaScript objects; if we mutate, we're in danger of affecting something that doesn't belong to this column definition. That's why we clone the schema, using cloneDeep from lodash. Then we add the new text template field and push it to the fieldset. Now, a note: when we wrote this tutorial, the schema-based editing and so on were quite young, quite new in Volto. In the meantime we've added an extension mechanism to blocks, the variations, but it wouldn't directly apply to what we're building here, because that mechanism is dedicated to blocks, to the bigger block data, not to one of the smaller things inside the block. Maybe we can think of restructuring this code and making it more in tune with what Volto has right now; that's a reason to participate in next year's Volto add-ons training, who knows, maybe we'll have time to prepare that. Also, this little bit should be updated (maybe, David, you can make a note), because it still has a reference to FlatObjectList, which is the old name for the object list widget. Okay, so moving on.
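Collected in one place, a hedged reconstruction of the TextTemplate renderer with its attached extender (the {} placeholder handling is my assumption of how the template is applied):

```jsx
// src/CellRenderers/TextTemplate.jsx (sketch)
import { cloneDeep } from 'lodash';

const TextTemplateRenderer = ({ column, value }) =>
  column.text_template ? column.text_template.replace('{}', value) : value;

// Attached as a property so the block config only needs a single import.
TextTemplateRenderer.schemaExtender = (schema, data) => {
  if (data.renderer !== 'text_template') return schema;
  const mutated = cloneDeep(schema); // never mutate the shared column schema
  mutated.properties.text_template = {
    title: 'Text template',
    description: 'Add suffix/prefix to text. Use {} for value placeholder',
  };
  mutated.fieldsets[0].fields.push('text_template');
  return mutated;
};

export default TextTemplateRenderer;
```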
Now, back in the browser, the format field is able to toggle between the different types of templates, and if I pick the text template you can see that it mutates the schema. This schema here is not mutated; this one is, it has a new field. That mutation was done by the schemaExtender function, the one that added this new text template field. And that schemaExtender is the one defined here, on the columns field: a function that looks up whether the chosen cell renderer defines a schema extender, based on the renderer selected in the data, and if so executes it. So, yeah, homework: you can take this code and pick it apart at your leisure. But the result is quite impressive: we are able to have these dynamic schemas and to quickly define how things behave. We use this sort of mechanism in a lot of our client projects, and it has allowed us to move really, really quickly and do all sorts of things. For example, let me show you one of our demo websites; I'm in the process of updating it to Volto 14. If I add a page here, and inside that page I add this block, the image cards, in its display settings you can see that we use this same object list widget. We've also used it, I think, on the biodiversity website in several places: let me pick a country here, and this sort of data table allows us to pick the counts that we want. And here as well it uses a similar mechanism, and basically everywhere. So trust me, it makes for great use cases. Okay. Now we have the editing, we are able to pick between the different templates, we have the text template mechanism, and it all continues to work, which is great. But if we pick the progress renderer, it's not actually being used yet. So we have to go back, and in the data table view, when we render the cell here, we have to use the renderer. To keep things simple, because otherwise we'd have a lot of logic inline, we'll add a new component that just renders a cell, and we import config from '@plone/volto/registry'. This Cell component will receive the column, the column definition, and the value that corresponds to that column in the file data. And we're going to look it up: the renderer is the name of the view template, the name of the cell renderer extension, and we look it up inside the Volto config, in the cellRenderers, by that name, and use its view component. So, back into the index; let me save this so I can close it. I'm in the index file of my add-on, and I should put the code side by side, like so. So: config.blocks.blocksConfig, then this block, then cellRenderers, then the renderer, which is one of these names, like text_template, and then we have a key like view, with the component.
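A sketch of that Cell dispatcher, with the lookup and the fallback to the raw value that are discussed next:

```jsx
// src/DataTable/Cell.jsx (sketch)
import React from 'react';
import config from '@plone/volto/registry';

const Cell = ({ column, value }) => {
  const { cellRenderers } = config.blocks.blocksConfig.dataTable;
  const renderer = column.renderer && cellRenderers[column.renderer];
  const View = renderer ? renderer.view : null;
  // Some renderers may only define a schemaExtender; fall back to the raw value.
  return View ? <View column={column} value={value} /> : <>{value}</>;
};

export default Cell;

// Usage inside the table body, roughly:
//   <Table.Cell><Cell column={col} value={row[col.column]} /></Table.Cell>
```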
Now, this one, you see, is a constant, and it has to have an uppercase name because it's a dynamically looked-up component. When we use it here, we pass the column and we pass the value. And if we don't have a view, because we might decide, for example, that we only care about the schema extender for one of these renderers and it doesn't have to have a particular view template, for that case we just have the default fallback. Anyway, when you start developing this kind of thing, you arrive at these cases in a natural way. Then we have to use the Cell. So, in the data table view, let's see if we can quickly figure it out: the column prop would be called column, I think, and the value would be... I'd have to close the tag. Let me quickly double-check with the tutorial; I'll just copy this, not to run into trouble, and delete this one. So I got the value almost right, except there is an extra case where, if we don't have a column defined, we want to fall back; and I didn't get the column prop quite right, because it's simply named differently. Okay, let's see what we have: the progress template is now used, and if we pick a column that's appropriate for progress, it renders like so. As an exercise, if you want to pick it up (there's a ton of use cases): imagine I want to change the color of this percentage bar. Right now I can hard-code it; here I can say the color is red, and you see it changed. But maybe, when I'm in the format options for progress, I want a widget that lets me pick a color. Or I can go even further and have dynamic colors: if the value is between 0 and 25 percent use this color, if it's between 25 and 50 use yellow instead of the red I had before, and so on; say we're tracking COVID vaccination rates. There is one add-on, volto-object-widget, which actually used to contain the code for the object list widget we've been using, the old FlatObjectList. In the meantime we moved that into Volto, so don't go to this add-on just for that widget, Volto already has it. But it has another widget, the MappingWidget, and that could be used for this use case: we could define a dictionary of ranges, 0 to 25, 25 to 50 and so on, and pick a color for each. So, another widget to look at. I think that's it, I think we stop here; we have a lot of other widgets, maybe we'll just put them in the add-on. I want to draw attention to two repositories, at least. First is collective/awesome-volto. This is a collection of many, many add-ons, and if you're developing add-ons, please put them here. You don't have to do a pull request, you don't have to do anything special, just edit the file; it's in the collective, so just add your add-on. It's better to have it listed than not, and even if it gets messy, we can clean it up afterwards.
It also has a list of companies that that do work with Volto. So, please add yourself here if you want if you're starting to work with Volto. Maybe you know maybe you don't and this is more Victor's area but I'm just going to mention it because it came to my mind. We have, we have, do we still do the early adopters meeting. We didn't have it. Or maybe I kept, I lost track of it. But we have a monthly early adopters meeting. Yeah, Victor. We, for reasons that yeah, out of our control I guess that we didn't have it in the last two months, but nothing that we can resume to doing that. Soon, right. Also, the Volto team committee is open to anyone. The regular one it happens to be weekly. No, by monthly. Exactly. Sorry. So yeah, anybody can join and this meeting as well. So, yeah, in the last two months we've been kind of swapping using the regular Volto team meeting for supply the lack of the real doctors. I guess that we'll continue to help those meetings. Yeah, quite soon. So yeah. So, Volto is quite quite an open community and and quite embracing. And don't, don't, don't be surprised if you're going to find the hard fans inside it inside that community. But I guess now with with the final blown, I mean, not the final but the alpha plan six release we're going to see a lot more new people coming to auto, at least out of curiosity, maybe. Okay, so, um, yeah, the other the other repository, actually. It should be, let's see if we get it from recently updated. It should be the Volto data table tutorial, and that is the repository with the code that that is being updated that is being developed in this training. And it has a bunch of branches. They are incremental branches so you can start from bootstrap and 01 01 basics and then 01 basics part two. But basically, they are listed here in the inverse chronological order. So, kind of like the training 2020 that's the old version of the code. Then we have starting with zero zero we have a new version. Maybe I will make an effort and sort them reversed just to make it more clear what the order is. And that code has everything so the final stage. Here should be should have everything we have in done in this tutorial in this training. So, Volto data, data table tutorial, I think it's also linked from the training text. Okay, so finally, I think we are done. Okay, yeah, look it's it's listed here. To be at six o'clock, maybe we should take 10 minutes break since you finished and resume and come up with other topics. So, let's take a 10 minutes break. I can rest my throat and my voice in the meantime and then let's, let's have a question and answer section. So, we have time and let's see if we can start maybe even start doing some hands on development with Volto me and Victor we are here, you have our time we are willing to help you and to answer to any question that you have. So, see you in 10 minutes. We need at least one confirmation, but we can resume. I am back but let's hear another. So, we can do, we can do a question and answer section. If somebody has any question. And if not, but I hope, I hope you do have questions and things that you'd like to hear more about. My brother is suggesting that we might, or at least he would like to see how exactly the block extensions or the block variations mechanism is used and is implemented. So, I could do that if as an alternative. 
That would be interesting: showing how the TextAlign widget can be used on an actual dexterity field, where you go and modify your config so that a given field name uses your widget, like remote_url, which we know is used by the Link type. I'm not going to demo that, because I don't have a dexterity schema readily available, but I can point you to the documentation; I think it's under the development recipes for widgets. Katja added this recent great addition: for a field in your schema you can have a plone.autoform directive, and you can say that for this particular field you pass down some frontend options, and those options will include, for example, the widget. And in the widget lookup in Volto, if we go into components/manage/Widgets... no, I made the same mistake last time as well, the Field component is under manage/Form. Here you see getWidgetFromTagValue: it reads widgetOptions, then frontendOptions, then widget. widgetOptions is a special object, and inside it you have frontendOptions with the widget, so it's going to be that name. That's how you attach a particular widget to a dexterity field, if you want. You could also attach it by field id or name, I know, but you don't want to do that all the time. The point is just to show that your widget can be used even on a non-block-enabled content type, plain dexterity, and right now I'm not talking about anything regarding blocks. So, in your dexterity schema (and this is Plone backend code, a dexterity schema) if you add this directive for one of your fields, the schema that comes back rendered will carry the hint. I'll show you, because we already have that use case in the schema that comes for the Document type. This one, you see, types/Document: this is the plone.restapi schema for the dexterity content type called Document, and in the properties, under subjects, we already have widgetOptions, this little key here. (Sorry, you probably heard that as well, there was a crazy motorcycle outside.) I can copy this one and put it here if it helps, but it already comes with the plone.restapi schemas, and that is precisely to support dexterity fields that want to hint at which widget they want. So if you do that part in your dexterity schema, then the schema that comes for your dexterity content, the one that drives the form (hold on a second), that field will have the widget that's written in Volto, the one hinted at by this directive here. Is that clear? I guess you have to try it, but I can show you code. Another option, or at least a precursor of this (I don't know if Katja added something special to make this available, or if it was already available and we just didn't know) is, for example... Are you talking about the new hints? I'm talking about the JSONField. Oh yeah. So, let's see. No, this property I shouldn't look into. Okay.
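For reference, before moving on to the JSONField route, this is the shape of that hint in the serialized schema, as far as I can tell from what's shown on screen (the field details around it are illustrative):

```js
// Excerpt (as a JS object) of a plone.restapi-serialized field carrying a widget hint
const subjectsField = {
  title: 'Tags',
  widgetOptions: {
    frontendOptions: {
      widget: 'text_align',
    },
  },
};
```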
The JSONField, yeah. For example, here, temporal coverage: we have declared it as a JSONField, and JSONField supports this widget argument, and that widget is actually the widget from Volto. On the demo freshwater website, if I go and add, for example, a visualization... no, not a visualization, what was in my head, an indicator... there's the geographic coverage and the temporal coverage, for example. Let's go with geographic coverage: you see it's a more complex widget, I can pick a continent, like this (it's broken right now, of course), but the widget is the one determined from here, and that is for JSONField. You don't have to use JSONField, of course; if you have some complex information that's really easy to attach from a widget as JSON, you can use JSONField, but if you have a simple value, like in Katja's example, just a text line, and you want a special widget, you can do it like so, with frontendOptions like this. You have to write the code and you'll get it. There are also the new tagging fields, and they are supported in plone.restapi itself; I can paste how to do that in the chat. Yeah, the audio is a bit bad for me, so I didn't quite catch which new widget; let's check the chat. getWidgetFromTagValue: yeah, well, I think this is what I'm showing here, it's this piece of information, frontendOptions with a widget, like so, but I can open it and check for the question. And I thought there was no documentation about it yet. Great; come on, Katja is a member of the documentation team, right? Yep. Some other ideas, things you'd like to know? I encourage people; nobody is biting, and there are no bad questions. As you've seen, I've asked, and even when I was wrong or Tiberiu suggested something other than what I recommended, it's still free knowledge to be gained. Okay, anyway, in case there are no questions we can think about extending our block; let's do a live programming session and enhance the block. I'm waiting for somebody to tell me otherwise, to say: no, I still have more questions, or: I want to do a hands-on session by myself, or anything. Maybe there's a section of the tutorial you'd like me to go back to and explain further, if you can't think of something to extend right now. Maybe there are some unanswered questions; that's also an option. In any case, let's think of an extension mechanism. The simplest case would be this: right now we are rendering as a table. We might want to render it differently, but we've built the extension mechanism for cell rendering and so on, so we will keep the table. As a really simple example, we could make the table black by default, instead of having to pick that option; one really simple use of the block variation mechanism would be to inject some props. David, do you have another idea for a variation of this block that you'd like to see, considering what we have right now? I think this is a question the other participants can also chip in on, if they think something else is worth adding as a variation. Okay, so let's go with my idea. If no ideas come, idea number one stands.
So, we have this extension mechanism in both which is. If I, I want to show one instance of it. So for example, and I hope I have the latest version of that has it. Because there's a great example of a variation. And the one that is. No, I don't have it. Okay. Yeah, because I use a lot of 13 for this tutorial. Let's go to vote or that kid concept.com actually six. Yeah, just put the latest version alpha in your project and rebuild it. You can also show the people how they can upgrade. Okay, that's a great. Actually, yarn at long vote or at 14, and they have to add it in the workspace route. It's not, it's not published so I'll have to go on vote or like this. And I will have to also look up what's the latest release. So, thanks, Alpha 23. That's okay. So now it is going to use the GitHub. And I'm going to check out. Okay. And yeah, I'm not sure what the upgrade steps are because my project used to run on both 14. And I'm pretty sure I have the latest blown, because I made sure to run docker pool blown and I'm running Docker from a plan running from from Docker and I know from that that blown that rest has been released and the latest blown alpha has it. So you put a comment on the on the chat. So great. Yeah, sure. I will stop this one. Basically, you can go to dogs that will the CMS and then go to get getting started bootstrap photo. And here, you will have this command. And I'm going to put it in the chat. So it's this one. And I'm also going to add a link to this one. Okay. So, yeah, based, based also the link, the command that you use to upgrade to the latest alpha. This one. Yes. No, I think, but I'm not 100% sure that my local database was because I was using Docker. So, the days that the Docker command will not persist your data or at least I don't know it was going to be in some volume or and I usually use the compose and create a file a configuration with a volume and so on. So, don't I mean this is just a temporary blown but we run for this tutorial. So it doesn't have my data table anymore but what I wanted to show you is the new search block. And when you search block has or has not should be here. I'm not sure why it's not the check. Yeah, check the Volto version from control panel to see if you really are using now it's still using a lot of 14, which is strange. So let's look in the dependencies. It's like this. This is yarn. And the. Yeah. Now it's the time when you have to run yard for some. Let's see. Yeah, and then look at on the let package that Jason. Okay, now it's fine. So basically I had to, I have after I have it is weird and it's going to be on video. So after I have added when you Volto 13 version right. It looked like it installed to everything. And then I didn't have a photo update version. But now it should have it. Yeah. Who is right mind does this. Do I mean, I, I click on click here and it starts to move the window. Anyway, search. Okay, finally we have a search block. Okay, so search block. Just to give you a quick, quick overview of it. Move the window a little bit to the left. Okay. Okay. You can eat. It's basically you can have like so you it's an it's a block that you can use to search content. And you can add criteria base criteria for example let's say but the type should be page of which we only have one. And we can add a facet or filter let's say and we can put for example a review state and we can we can have it run as a check box and so on. And we can and this one is also using the object list widget, which is really nice. 
For example, you can show some sorting controls and say: I want to make the following sort options available, sort by effective date, then sort by creation date, and then I will have these two options here. But here is where I wanted to get: we have the following extension mechanisms, the variation and the results template. The variation applies to the search block itself: 'facets on top' puts everything above the search results, another one moves the facets to the left side, and the right side is the default. Then there is another variation, another level of extensibility inside this block, which is how the results should look, and for this one the search block reuses the listing block's templates. These are the ones available for the listing block, so we have summary, image gallery (which doesn't do anything useful in this case) and the default template. So the variation is the part we're interested in; let's look at how it's implemented in Volto. We can check the block config, and we're interested in the search block's configuration. Okay. In the block configuration, we have to add a new list called variations; this list needs to define objects with the variation configuration, and one of these objects should have isDefault passed as an option, which means that when I create the block, that's the variation used by default. Here we don't have it showing up, although that's a problem we're working on, and it is the one used: if I add a facet it would be on the right, because the right-side facets variation is the default. Okay, so let's copy this and put it into our block, and simplify a little what we have here. It needs to be a list, like so, and I'll just comment this for now. Let's add the default table variation, whatever, and we say 'default table'. And let's add a new variation; we'll name it 'red table', I guess, because that's something that's easy to implement, and of course it's not going to be my default. Okay, so now we need the components, and the idea is like this. First of all, in our data table edit: I thought it might need withBlockExtensions, but I need to check... no, I think we are actually good to go, BlockDataForm should automatically give us the variations. So if we go back and add the data table page here; I'll just save and add the file, edit this and then pick that file. Okay. So, I already have the options, default table and red table; they don't do anything right now, so we have to use them. And it was really simple: we just added the variations inside here, we declared them, and that's it, we already have a dropdown. Now we need to use those variations, and to use them we need to go to the data table view. Basically, we have to refactor this and make it something that we can reuse. I will say: here is the renderer, let's say, and I will create a DefaultTableView component. I don't know yet what it needs to look like. Oops, it should be here; I will return this. Okay. And I will say const Renderer = DefaultTableView, and what we need in there is data, columns and the file data.
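Before continuing with the refactor, here is roughly what that variations list looks like in the block config; the view components referenced here are written in the next step:

```js
config.blocks.blocksConfig.dataTable.variations = [
  {
    id: 'default',
    title: 'Default table',
    isDefault: true,
    view: DefaultTableView,
  },
  {
    id: 'red_table',
    title: 'Red table',
    view: RedTableView,
  },
];
```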
So now, in the renderer, we need file, we need columns. So data equal data columns equal columns and file data equals file data, like so. Okay, okay. So with this, we are not actually plugging into, into the block extensions mechanism, but the variation we're just we've just affected the code so that we, we have this default table view as a separate component that we continue to use. So our code here should continue to work. Now, now I have this default table view as a separate component. So let's go here and here in the view, I can, I can now say default table view, and I can export this one. So that I can, and I'm going to save here so that I can import it. So I can import it from here default table view. So let's, let's, let's, let's, let's know I'm I'll just just keep it simple, simple, simple. Import this one from data table folder. That table edit, just to keep things simple. So now I have the view. And I can create another one you write, you know, table view, right. I, I don't have it yet, but I need to import it. But they will view so what, what would be red table view, let's say, red table view is, it's a component right so it's going to be like so and we're going to reuse the component from above, just to keep things simple. And then we can, I can, I named it, I didn't name it props and I have to export it like so. Okay, red table view here, red table. Okay, so nothing should happen. No, no chance. No, no change so far. Okay, tables be imported from data table view data table aided. Hold on a second. So data table view is not from data table edit. Yeah, that's, that's, that should be in view view. Yeah. Okay, so now I can switch between them nothing changes, but we can, we can say that hey, maybe this component supports, or rather, let's, let's do it like this div style is background color, red. And we will, we will close the div. So, McCold looks like this we have just wrapped it into into another div with the background color. So, if I go here. The problem is that the table has white background. So I need to. Yeah, this one. See, it has white background and then so on and so on. So, maybe we'll just write, but we, yeah, it's not actually the problem. The fact, but the fact is we are not actually using this, this renderer because we are hard coding it here. Now, how do we get to the proper renderer. Right. We this one. And I'm also going to just add a border. Just to see it. Okay. So the idea is that it is rendered. It is saved in the data. But here in the tape in the component view we have to use the variation we have to, you know, to instead of hard coding the component that's used we have to use the one that's provided by by per variation. And if we go into the documentation into the extensions. And it says here that we should use with block extensions. Let me quickly check in in the search block, where the import comes. And that's because I'm not using a fancy, fancy integrated development environment. But yeah. Okay, so this is a hook with block extensions, we need to basically wrap data table view with it. We can do the same thing with compose again, compose and with block extensions. And we need compose. Redux. Okay. Hold on. Sorry. Okay, let's see variation on that table. Yeah, I still didn't use it, actually. So this, this hog will inject some new props into the component. So let's identify those props. I'm gonna go to components here and I'm gonna inspect this data table view, data table view here. And then it should have you see variation and the default table. 
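To recap the refactor so far, a sketch of the two view components; the red one simply wraps the default one:

```jsx
import React from 'react';
import { Table } from 'semantic-ui-react';

const DefaultTableView = ({ data, columns, fileData }) => (
  <Table>{/* the existing header and body rendering moves here, unchanged */}</Table>
);

const RedTableView = (props) => (
  <div style={{ backgroundColor: 'red', border: '1px solid red' }}>
    <DefaultTableView {...props} />
  </div>
);
```

With those exported and referenced from the variations list, back in the browser we can watch the injected variation prop: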
And if I change that it should change the variation. Let's so we're looking for a prop named variation right. So you have the example in the documentation you can just copy and paste. Oh, no, let me figure out, figure out things on my own console log variation. So variation is default in our case. Does it change. Red table no that's because we, I think we made a mistake here. Yeah, and we didn't change. It should be red table like so I didn't change the idea where were duplicate duplicate ideas. So back to our thing. A red table you see now it's a table. And back again into our thing. In the data table view. Now that I have a variation, the variation gives me the view. So it's, it's this one. So instead of saying render it is default table view, I can say variation view or just have that as a fallback. So now, if I try again. Hold on a second variation on defined. So let me, I'm not sure what happened, but we'll figure it out along the way. Red table view. Okay, good. So it needed probably there's a moment when variation is not identified the component is rendered anyway so we have to have a fallback case. So yeah, that's, that's how you implement variations in your and and and so on. And of course, here. So in red table, we could have a schema enhancer defined and this one will be able to edit the schema for the block. So basically here somewhere it should add something new. Right. So let's say that this, this block. For some reason we want to show a headline in it, right. So we're going to add the headline field with it. So this one should be a function that gets form that gets schema. Sorry, schema, for data and the Intel object. And it needs to return schema. Okay. Sorry. Like, like so. Okay, just quickly check that nothing breaks when I pick the variation. But for example, just to make sure that we execute this, I can, I can just add some dogging. For example, the schema. And you can see the schema. Okay, so now what do we, what do we want? We want to add a new property and we want to write inside the field set. We want to add a new field. In some cases I've started to create a new field set when I have a new variation. So let's, let's do that. Let's create a new field set. So I will return the schema. I will return an object that inherits from the schema. The properties will inherit the schema properties, but we'll have a new field where we'll name that headline, which will be a field with title headline. And it's going to be a widget is a string by default, but we can, for example, add a description. Write your headline, and I'm not doing international internationalization. But if I would want to do that, I have access to the Intel object here. Right. Okay, so now with this one I need to also write into the field sets. And the field sets is an array so I can do a field sets. And it's an array that inherits the schema field sets. And I'm right. I'm adding a new field set. A field set is, is defined as a, as an object with an ID, let's say, a red table options. And with a property called fields. And that, that is the names of the fields that we want to show in this field set. And that is the headline. And we also need a title. And this should be red table options like so. Okay. So, should be fine now. If I go and edit this, I have red table options and I have a headline. And I can write here. It's a red table. Like so. And if I save it, come on. 
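Putting the two pieces from this step together, the variation-aware view and the red table's schema enhancer; the enhancer signature (a single object with schema, formData and intl) follows Volto's variations API as I understand it:

```jsx
import React from 'react';
import { withBlockExtensions } from '@plone/volto/helpers';

// Schema enhancer attached to the red table variation.
export const redTableSchemaEnhancer = ({ schema, formData, intl }) => ({
  ...schema,
  properties: {
    ...schema.properties,
    headline: { title: 'Headline', description: 'Write your headline' },
  },
  fieldsets: [
    ...schema.fieldsets,
    { id: 'red_table_options', title: 'Red table options', fields: ['headline'] },
  ],
});

// The block view dispatches to the view provided by the selected variation,
// with a fallback for the moment when the variation prop is not resolved yet.
// DefaultTableView is the component from the previous sketch.
const DataTableView = (props) => {
  const { variation } = props; // injected by withBlockExtensions
  const Renderer = (variation && variation.view) || DefaultTableView;
  return <Renderer {...props} />;
};

export default withBlockExtensions(DataTableView);
```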
I'll have to redo that variation red and you can see that the schema enhancer is executed all the time because basically the prop of the props of the block get that get changed because that the block that has changed. So everything gets re rendered, re executed. That's not a problem. Okay. So now, now I have saved. Okay, so now we have a new headline field that we want to pick up from here. So we can say props. Actually data headline. And we, we put each two props data headline. And if it doesn't exist, it's just an empty string. So if I'm gonna, I have to. Yeah. I needed to reload. Okay, so this is how you can use the schema enhancer for the block to quickly add new things. That wasn't hard, right? I don't know. You tell me. Yeah. And this. Sorry. No, I wanted to say that the same thing can be done from another add on. You don't have to buy everything here. Yeah, that's, that's, that's the whole idea that the add ons have those configuration loaders, they can, they can they have the option to mutate that config. So, so your add on can basically add new new options, new variations to a block. But the idea is that your new add on should depend should load first this is the full data table tutorial. So if I go back here, and I will do your clone both of add on. And I will say, long collective, both of second, both of second is fine, both of second, I don't know, okay, whatever. I'm going to add it to be loaded. Both all second. Now, in the second add on, I will, I will say probably that because we want to add new options for the Volta data table and the Volta data table, it needs to depend on Volta data table tutorial add on. So I can, I can actually remove this one from from the add ons list here. And I can, I can go into the add ons into the second add on. And I can, and I've mentioned this in the tutorial, but you can create a new add ons key. And you can add. Hold on a second, just to get it from here. So I can say, okay, my add on depends on this other add on, and I need to go into GS config. And I need to add, when you add on. So it should be like so. Both all second add on, and it needs to be in there. Okay. So, so let's go to be in a package that Jason add on should be both or second add on, you're missing the slash add on. Yeah. Okay, eagle eye. Okay, so now in package that Jason know this one here. Let's, let's, let's, let's, let's, let's, let's, let's, let's, let's move. Hold on a second. I'm editing in too many places. Okay, now back back in my. Okay, I'll close this one and open just source index so I'm in my vote of that table tutorial but the add on that we've worked on since the beginning. I'm going to remove the red table variation. I'll just, I'll just cut it. And I will go into the little second add on. And I will add it here. Okay, so this one should go into config blocks blocks config data table variations. And I will say just push, because variations is an array. So we're just going to push to it and yeah, we will have to move the table view. Yeah, let's, let's go back. So what about the table tutorial, go into data table, go into data table view, I have to take this component and I have to move it to be other. I'm just going to place it here just to keep things simple. And of course, I will need to import react, because I have just six code here. Okay, and then I will need to import the data table view. 
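A sketch of what the second add-on might end up looking like; the package names (@plone-collective/volto-datatable-tutorial, volto-second-addon) are assumptions based on what was said, and the dependency also has to be declared under the addons key in the second add-on's package.json, plus an entry in jsconfig.json, as shown in the session:

```jsx
// volto-second-addon/src/index.js (sketch)
import React from 'react';
import { DefaultTableView } from '@plone-collective/volto-datatable-tutorial';

const RedTableView = (props) => (
  <div style={{ backgroundColor: 'red' }}>
    <DefaultTableView {...props} />
  </div>
);

const applyConfig = (config) => {
  // The variations array already exists, because this add-on depends on
  // (and therefore loads after) the data table tutorial add-on.
  config.blocks.blocksConfig.dataTable.variations.push({
    id: 'red_table',
    title: 'Red table',
    view: RedTableView,
  });
  return config;
};

export default applyConfig;
```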
So what I can do like import that table view from blown collective, both of that table tutorial, then it's going to be data, data table, data table view, like so, it's also exported actually I think from data table, like so, I might import it like that. Now, we can start to Volto. And fingers crossed, but this thing will continue to work. So it didn't like something for sure. It appears to be spinning. Okay, so when that happens, it is possible that we are somehow causing an infinite loop. So, in case that this happens, it's quite tricky to to debug and resolve but let's just make sure that I didn't. I didn't screw up something so I will. Oh yeah, right. That's, of course. So what I did was data table view. So I should have imported default data table, not that a table view I'm trying to to render the block and inside it I'm trying to, you know, to have yet another instance of that component and so on and so on and that causes just an infinite loop and my browser just crashed. I mean, it didn't crash but it should have crashed instead of just spinning on and on. So, the solution is here in data table view and here I should have default table view and I don't know why. Like so and I don't know why. Here I have that the table view, because there it should be. It should be default table view. default table view. Like so. Okay. So, I go back and with some courage and hope it doesn't crash and it appears that it's still working. I mean, it's back to working state. And with all the options and just to confirm that this is actually the code that gets executed. I'm just gonna make this quickly. And say, return and just add it console log just as a confirmation that this code gets executed. And in console, let's quickly reload. I'm not sure why it crashed, but yeah. Yeah, this one. I'm looking at, yeah, I don't want to see those messages. Okay, so what a table view all day. What else can we do? What else do you want to know? You know what would be useful to show that you've added dependency development dependency to add. And with some performance issues to find out why did it render. So, disclosure here, I've never actually used why did it render. Maybe I've used it. What's the package itself you added a package. I added the use trace update but that's yes trace updates. Yeah. For example, if you want, if you want. Let me think. Because it's going to be. That's, that's for cases where you have complex complex state and I can show you in if I have a development copy of Volto. So I will check master for example, here. You can always count on Victor on making frequent releases. And so now I have started development version of Volto. Because in the in the search block, it is the kind of complex code that that that that makes it worth using the use. Use a sub date. Use trace updates. Yeah, thank you. Manage blocks. Search. So here in with search. I mean, I need to make this one a little bit like this because it's quite complex. Okay, so I'm just to explain a little bit the structure of the code. I have opened here on my left side search block edit and search block edit. It composes with search the whole. So, and the search block edit you you can see that it's pretty much the same that we did. And so, all this we training. It is block components it is it is block data form and it's got schemas and it's got the Hawks to push complicated logic into reusable things and just to keep it aside. 
And the magic of the search block, let's say, is split out, isolated, in this withSearch function; the SearchBlockView, for example, is almost nothing, you can see there is not a lot of logic inside it. It relies on withSearch, which injects the search data into the search block, and then it relies further on the ListingBody to create the actual listing of items. Our job as the search block is just to filter content and to provide the facets and the input filter; it is the job of the listing body to actually use those criteria, those facets and filters, to compose a query that is passed to Plone, and from that query we get the results. In this HOC, one complex object is, for example, the locationSearchData. Let's see... okay, let's go to the import, and I hope it goes like that, but I don't remember exactly how to use it, so we'll discover together. And yes, I've had my browser on the second screen for ten years or more; it's not a habit I can escape easily. So, the search block here, and I'll inspect it, because maybe it will show us something. Okay, so it says... yeah, okay, we don't actually need the console logging after all, I'll drop it. The idea is this: sometimes you get updated props, or you have a dependency, for example in something like React's useEffect or useCallback. Let's search for locationSearchData... no, the URL query will be something generated from it, and then the URL search text... it doesn't really matter. I wanted to show you that we have this tool in Plone, and if you have some issues you can make use of it and see. But you do need an actual problem you're trying to debug. For example, let's see, useEffect, do we have a useEffect? No. Let's go in here; yeah, maybe this one will be better. Let's use it for the query, let's say, and get rid of it here; I'm moving it to the search block view just because it's a simpler use case. In the edit component the query is going to be hard-coded, so it wouldn't trigger; we'll put it back here and look at it. locationSearch... here. Okay, so, there it is: when this object changes, you get messages like this, and the information you get here is about exactly what changed inside it, the diff, let's say. For example, if I add a new facet type (though I think this one is not a good example)... okay, you see, now it says the props have changed, and you can check exactly which props changed. That makes it easier to debug useEffects and complex things, because you can get tricky interactions. One thing we have to realize about Volto, about React, is that we potentially have many async operations running at the same time; one of them finishes early while some others are still waiting to finish, and our state, our components, should be ready to handle any situation, with only half of the data there, or things like that.
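The trace helper used here is essentially the well-known 'which props changed' hook; this is a generic community-style sketch, not a Volto API:

```js
import { useEffect, useRef } from 'react';

// Logs which props changed between renders of the component that calls it.
function useTraceUpdate(props) {
  const prev = useRef(props);
  useEffect(() => {
    const changed = Object.entries(props).reduce((acc, [key, value]) => {
      if (prev.current[key] !== value) {
        acc[key] = [prev.current[key], value];
      }
      return acc;
    }, {});
    if (Object.keys(changed).length > 0) {
      console.log('Changed props:', changed);
    }
    prev.current = props;
  });
}
```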
So this tool can help you in those cases. And since I know you're very particular about re-rendering and rendering optimizations, there is another one — I don't remember its exact name right now; I've also used why-did-you-render for this. That one logs the renders and helps you understand how the browser repaints and how performant your application really is. For example, if I go to something like the biodiversity site, which has quite complex pages, and trigger it — this page is actually pretty good, I'm quite happy with it — but if I log in to the demo freshwater website, open a page like this in edit mode and start logging, you can see there are a few moments where the browser simply stops, repainting and refreshing; I can actually feel them while scrolling. So this tool helps you optimize your application. And there is one aspect of Volto I'd really love to see fixed at some point: we are not rendering our blocks in a virtualized list. If I go to the bottom, that really is the scroll bar, and with a huge number of blocks — "huge" starts at around 300–400, depending a lot on the type of content you have — things really slow down. The slowdown comes from the drag-and-drop library we use, react-beautiful-dnd; we would have to go in there and virtualize it, basically integrate a virtual list component into it. Who's up for the task? That's an open question. Anyway — anything else? Maybe we can also look at chapter eight of the training, the questions and answers, to see if any are worth sharing. So: "Is it possible to customize Volto with an add-on?" Yes, sure. If I have this add-on, I just need to add a customizations folder inside it. In some documentation you'll find that you add customizations/ followed by the source path of the Volto component — for example customizations/components/theme/Logo/Logo.jsx — but since add-ons were introduced, the path should include the namespace, so customizations/volto/components/theme/Logo/Logo.jsx. And it is actually possible to customize a file from any add-on as well: you use the add-on's full package name as the namespace, something like customizations/@collective/volto-second-addon/... — which makes sense, for example, when one add-on customizes another that it depends on. The problem with customizations is, of course, that you are basically rewriting a whole file.
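To make the customization mechanism concrete, this is roughly what a shadowed component looks like inside an add-on. The path has to mirror the path of the original file inside Volto; the component body below is a simplified stand-in for illustration, not Volto's actual Logo implementation:

```jsx
// src/customizations/volto/components/theme/Logo/Logo.jsx
// The path mirrors @plone/volto/src/components/theme/Logo/Logo.jsx,
// prefixed with the "volto" namespace inside the add-on's customizations folder.
import React from 'react';
import { Link } from 'react-router-dom';

const Logo = () => (
  <Link to="/" title="Home">
    {/* hypothetical asset path — replace with your own logo */}
    <img src="/custom-logo.svg" alt="Site logo" height={48} />
  </Link>
);

export default Logo;
```

As discussed above, the same convention works for shadowing a file from another add-on by using that add-on's package name as the namespace, e.g. src/customizations/@collective/volto-second-addon/....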
So now you have to maintain that file: you have to update it whenever Volto, or that add-on, changes it. It's up to you whether you use this mechanism — it's quite easy, quite fast and powerful, because you basically have total control and you don't depend on an extension API being defined — but I'm not sure how good a practice it is. Next question: "Can I have another theme in an add-on?" Yes. For example, all our EEA themes are implemented as add-ons, and that works by doing the trick with razzle, which allows us to redefine a webpack alias; once we can redefine that alias, we can point it at the location of the theme. "Can I extend…" — I'll just skip over this one. "Can I extend Volto's Express server?" I've discussed this already a little bit; it is one of Volto's hidden powers, and something I really appreciate being able to do. Now we get to one interesting thing, which is bundle optimization — that's really nice. Let me quickly give you an overview of how to optimize the bundles. First of all, you have to know what your bundles are and be able to inspect them, and only then can you start optimizing what they look like. You do that by running the build with BUNDLE_ANALYZE=true — I'm using fish as my shell, so I have to add env as a prefix; in bash you don't need that. And it should be yarn build, because you want to analyze the production bundle, not the development one. With this flag a new webpack plugin is activated that instructs webpack to dump information about the bundles, and then an HTTP server is started that lets us browse them. It's a lengthier process, because the bundle has to be produced in production mode, then the debug information dumped, then the server started. Now that the server is up, let's first understand what we're looking at. I've started Volto in production mode, I go to localhost:3000, open the network tab filtered to JS and force a refresh: four JavaScript files are loaded, and one of them is big — 2.5 megabytes. It's not compressed, not gzipped, because it's coming straight from localhost; normally you would serve it compressed. So we have this big file, plus another one named something like 9.48, plus the client bundle — those are the big ones. And if we look inside that 9.48 bundle in the analyzer, we see, for example, react-beautiful-dnd with a footprint of 235 kilobytes — pretty big — draft.js at another 360 kilobytes, moment, immutable, and so on. Part of it is our local Volto components, and those we cannot get rid of.
We cannot get rid of React either. We might be able to get rid of some of these other things — draft.js, maybe; it shouldn't be loaded like that — and there is something else in here whose origin I don't even know, I'm not sure who uses it. But we also have papaparse in here, and while it is not the biggest library, it is ours — our responsibility — so we should try to optimize it. There are more bundles listed here that are not actually loaded; some of them, react-select for example, only get loaded on demand — you see new JS files appearing, each containing some JavaScript modules. Actually, I'm wondering where react-select gets loaded, because it usually sits in a separate bundle; I think we have a sort of meta-bundle that pulls several of them in. If I navigate to a page — and note that the first size shown was misleading, now we get the real sizes — there's one at 124 kilobytes, something to do with dates and times. These are files loaded on demand, whenever a component inside Volto needs them. That is what we want for papaparse, and right now we have not managed it: it sits in the main bundle, the one loaded by default for everybody, including anonymous users and pages that don't even contain our data table block. At the root of on-demand loading is a library called loadable-components, and at the root of that is webpack's support for lazy loading and code splitting. Volto comes with some lazy-loading helpers; they are documented in the development recipes under "lazy loading and code splitting", and that page describes the basic method. So let's try it. Time is short, so I'll keep it simple and use the Volto-recommended way of lazy loading — that is, after all, what we're trying to teach. First we have to find the place where we actually use papaparse: back in the data table add-on, that's in withFileData, the HOC. This will be interesting precisely because it's a HOC. What we need is injectLazyLibs, which is itself a HOC factory: you call it with the names of the lazy libraries, and the result is the HOC that wraps our component. I don't think I've ever done this particular combination before — David tells me my past self already figured it out at some point — so let's give it a try. I was trying to work out how to write it, because withFileData is itself a HOC, not a component: it's a function applied to a component. So it's going to be something like the sketch below.
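For reference, the shape being aimed for is roughly the following. This is my sketch of the pattern from Volto's lazy-loading docs, not the exact code typed during the session; the import path and the exact shape of the injected prop (and the placeholder prop name file_data) should be double-checked against your Volto version and the add-on's own withFileData HOC:

```jsx
import React from 'react';
import { compose } from 'redux';
// Volto's helper for consuming lazily loaded libraries (check the path for your version)
import { injectLazyLibs } from '@plone/volto/helpers';

const DataTableView = ({ papaParse, file_data }) => {
  // papaParse only arrives as a prop once its chunk has loaded;
  // injectLazyLibs renders the wrapped component only after that happens.
  // Depending on the build you may receive the module itself or { default: module }.
  const result = file_data ? papaParse.parse(file_data, { header: true }) : null;
  return <div>{result ? `${result.data.length} rows` : 'No data yet'}</div>;
};

export default compose(
  // the name must match the key registered in config.settings.loadables
  injectLazyLibs(['papaParse']),
)(DataTableView);
```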
So basically we have to stack all of these. The wrapped component is going to be this one, and then we take that result and pipe it into withFileData. So I should stack them with compose: I need compose, and I need withFileData in there — and honestly, I would be amazed if this works on the first try. In any case, the lazy library needs to be declared in the configuration, and I'll explain why. papaparse is no longer imported directly here — the "import CSV from 'papaparse'" line goes away — instead it arrives as a prop, papaParse, and I'll console.log it just to make sure. Now, the idea is that webpack needs to see static imports. All webpack imports have to be more or less static; there are dynamic imports, but they need hints about the name of the module — we cannot just invent a module name that webpack has never seen. So how does webpack learn about the module? In our add-on configuration we register it under config.settings.loadables, importing loadable and registering papaparse there. And the point is that none of the surrounding code matters much: what webpack needs to see is the import('papaparse') call itself, because webpack statically analyzes our files, and that is how it figures out what to split into a separate chunk. If you don't believe me — I spent some time researching this — the webpack documentation has a section saying it is not possible to use a fully dynamic import statement; you'll notice their counter-example isn't a quoted string, it's a variable. What you can do is give hints about what the file name would look like based on its location, and webpack will then go into that folder and bundle more or less every file matching the pattern — every .json file, in their example. Okay, back to our code, let's see what happens — if this works I'll add it to the documentation; if not, well, who knows. Now I need to restart… where am I… I think I started it in development mode, so I'll just kill these two and continue. [From the chat:] I think you should make a new branch, "demos" or something, put the code from this last section in it and push it to GitHub so people can look at it — especially the variations part, which is all valid, and hopefully even the preloading. — So it crashed; let's see why. It failed while loading papaparse. We're injecting it like this — I saw it wrapped in an array — but it might be that I didn't declare it well here: loadable papaParse… no, here it's spelled differently, it was simply written wrong. Let's try again. Okay: papaParse is undefined. So it didn't work, and we'll have to figure it out. I'll try to understand how to write this, because we have — not quite an edge case, but a slightly weird situation — where we want to inject the lazy library into a component that is itself produced by another HOC.
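And here is the configuration side that the demo registers — again a sketch along the lines of Volto's own default config; the key name papaParse is my choice here:

```js
// src/index.js of the add-on
import loadable from '@loadable/component';

const applyConfig = (config) => {
  config.settings.loadables = {
    ...config.settings.loadables,
    // webpack must see the static 'papaparse' string inside import(),
    // otherwise it cannot place the library into its own chunk.
    papaParse: loadable.lib(() => import('papaparse')),
  };
  return config;
};

export default applyConfig;
```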
Actually, I think I might be able to do it like so: const WithLazy = injectLazyLibs applied to the component, and then export default withFileData applied to WithLazy, instead of the plain wrapped component. [Question:] In the non-lazy version it was "import CSV from 'papaparse'", but here you're using papaParse.parse — was CSV the default export, or just a name given to the default? — You're right, actually; it might be that this part has to change accordingly. It's a little tricky: sometimes you get the default export and sometimes you get something else. We'll learn — we still have a bit of time, we'll find out. …"Cannot read properties of undefined: parse" — so it wasn't injected. Oh well, I'm not going to waste more time on this, but if I come up with the solution I'll add it to the documentation. It was still a good demo — and I may need it for more demos. Any other things we want to discuss? Otherwise, let's talk about the training itself and wrap up; we still have a few minutes. I haven't actually done any in-person trainings for the Plone Foundation — I did last year's training and this year's, and both were online — so I'm curious what the difference to a real live training would be; I expect the experience would be somewhat different. From my side I think it went well. I'm happy we had a more streamlined walkthrough this time, because this training is being recorded and will be uploaded to YouTube, and hopefully it will be useful to future Volto developers and users. Those are my thoughts — I'm curious to hear your ideas, your experience, what you learned. …Okay, so we're a bunch of shy people; we are programmers, after all. [From the group:] I think this year we have some new elements compared to last year: the loadables were not discussed — or at least not tried — last year, and the variations weren't there either. Hold on: you're going to give a presentation on what's new in Volto since last year — have you looked at the content? Because loadables and variations are new. Are there further additions to the loadables, newly added helpers and such? Maybe the only thing that wasn't really covered is the locales — at some point you will also have to do some localization as a developer, but the documentation is there and it's not difficult. The big missing piece in this training — and I think it would be a good idea to restructure it; let's hope we have the strength and time to do that for next year — is testing: we haven't written any tests for this add-on, and it would be good to have them; unfortunately we don't, due to various constraints. And I'm hoping Storybook support for add-ons and Volto projects will develop this year, so maybe next year we can present a Storybook for this add-on alongside the tests. We'll see.
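On the default-export question raised above, one defensive way to consume the injected library — purely my suggestion, not something that was settled during the session — is to normalize whatever shape arrives:

```js
// Works whether the lazy lib prop is the module itself or { default: module }
const getPapa = (papaParse) => (papaParse && papaParse.default) || papaParse;

// hypothetical usage inside the wrapped component:
//   const { data } = getPapa(props.papaParse).parse(csvString, { header: true });
```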
Let's hope for a fruitful upcoming year for Volto. And don't be afraid to get on the chat, talk about missing documentation, open issues about missing information, contribute when you find a useful pattern, share it — that is how most of the Volto documentation came to be: somebody figured something out and then contributed it. Everybody is welcome to contribute. And now I'm curious how that lazy loading actually needs to be written here — it's a challenge, and I need to solve it. I think we should wrap up. Victor, do you want to say anything before we close? — Well, thanks again, Tiberiu, for the training and for taking care of it; it has been amazing, last year and this year. I'll only add that you're also going to give a very nice talk these days, on subjects I'm not sure we covered during the training — things we've been preparing and pushing into core over the last year, important pieces like the pluggables. And I'd encourage people to keep learning Volto, keep learning React, create their own add-ons, put them in awesome-volto and announce them, because we want to make Plone 6 the best CMS ever, right? I think we are in a good position to achieve great things, and if anybody can or wants to help in the core or submit some PRs, don't be shy. If anybody wants to attend the Volto team meeting, you're all invited — it is not a closed meeting; come say hi and experience what we are doing, deciding and talking about there, because every bit of feedback is important. And if you can't attend the meetings, create issues — even if it's not strictly an issue, the feedback still matters. Let's make this happen all together, which is what we are all hoping for, and thanks very much for attending. — Thank you everybody for attending; I hope to see you in person at the next Plone conference. Have an amazing conference, watch the talks, give talks — or lightning talks if you have them — and I'd love to hear about your experience; feel free to write in the chat. Bye everybody, thank you, and have a nice night.
|
Learn how to quickly develop a real-world Volto addon, how to structure your code to keep it simple and reusable, and how to provide extensible components.
|
10.5446/57497 (DOI)
|
So we immediately move on to the next talk, by Oliver Gressel. You have a little bit of time, because I just want to give a short introduction — starting from the beginning: you studied in Tübingen and did your Master's there, then went to Potsdam for the PhD, with some other positions in between — University of London, I see — and then the Nordic countries, Stockholm and Copenhagen. That's right. And now you are at the Leibniz Institute for Astrophysics, so one of the local people, and we are looking forward to hearing about a method to extract turbulent transport coefficients — for what reason, I don't know yet; maybe for astrophysics, maybe for other things. Please. — Can you hear me okay? I should speak into the microphone — that's what I learn now. All right, thanks for the introduction. My name is Oliver Gressel, I'm the head of the MHD and Turbulence section at the Leibniz Institute for Astrophysics here in Potsdam — so I returned here after ten years abroad. This is work on explaining the magnetic fields in nearby spiral galaxies, which we believe are caused by the turbulent motions in the interstellar medium and by a dynamo process, and we describe this by means of an effective theory for the unresolved scales. The work was done with my colleague Detlef Elstner, and I would like to dedicate it to the memory of Karl-Heinz Rädler, the founding head of the institute — then called the Astrophysical Institute Potsdam — who passed away two years ago, who worked actively in this research area, and who was in fact one of the three people who invented this framework, this theory, which has proven extremely powerful ever since it was first conceived in the 1960s. Here we have a picture of the Whirlpool Galaxy, a nearby spiral galaxy just below the Big Dipper in the sky — you can't see it with the naked eye; this is a Hubble image, and you can see the spiral arms. (Let's see if I can use the pointer — it's an awkward projection and I get a lot of reflections on the screen, so I'll use this one; sorry for that.) Galaxies rotate differentially: there is more mass in the center, so material there has to rotate faster to avoid falling onto the center, and everything is constantly wound up. With it the plasma is stretched and wound up, and in this process magnetic fields get amplified. If you look at the mathematical structure of the induction equation, you see that a velocity field with shear or compression can squeeze and stretch the magnetic field lines, so the motion of the plasma amplifies the field. In the plasma there is a current, the current has a magnetic field, the magnetic field exerts a Lorentz force back on the plasma — a rather intricate mathematical dynamics. And quite amazingly, if you look at this in synchrotron polarization, you see that the light we receive in these radio bands — of the kind Luciano already showed for the M87 case — is highly polarized, which means there are large-scale, ordered magnetic fields: the relativistic electrons spiral around them and hence emit polarized emission that we collect with radio telescopes.
So this is a polarization image taken with the 100-meter Effelsberg telescope near Bonn. At first glance it might appear that the magnetic field lines simply follow the spiral arms — that the field is just wound up by the motion of the plasma around the galactic center — but that is actually not correct. If you look closely, the polarized magnetic field direction has a significant pitch angle with respect to the azimuthal direction, so something is producing, by an induction effect, a radial magnetic field on scales of kiloparsecs — on the scale of the galaxy as a whole. This galaxy is a plasma coherent enough to produce a large-scale magnetic field, and at first glance that is surprising, because star formation and the supernovae going off drive shocks — supernova-remnant shocks — into the interstellar medium and create a lot of turbulence. The plasma has insanely high Reynolds numbers, so there is essentially no microscopic dissipation whatsoever; you get a turbulent cascade, large eddies breaking down into small eddies and dissipating, while at the same time producing induction effects on the magnetic field. And if you do an order-of-magnitude calculation, you arrive at the conclusion that a turbulent plasma should not sustain any large-scale field: the turbulence diffuses the coherent field away on timescales comparable to the rotation timescale. So you need something that overcomes this dissipative effect of the turbulence, something that creates order out of the chaos, and it turns out you need two ingredients. The first is rotation, which makes the plasma motions swirl, because the Coriolis accelerations can additionally twist the motions around. The second is that the galaxy has a vertical structure: it is a stratified disc, like the Earth's atmosphere, with high density in the midplane and the density falling off towards the halo, which gives you a preferred direction. An expanding supernova remnant swirls up, but it inflates a little more on the top side than on the bottom side, and this characteristic twist means there is a net effect on the scale of the galaxy as a whole. In principle you can put this galaxy on a computer, and people have done this, but I would argue it is very challenging to reach high enough resolution, even on the biggest supercomputer on the planet, to really model the turbulence as what is called a direct simulation, where you resolve both the outer scale on which the turbulence is driven and the dissipation scales — or at least a proxy for them.
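To put a rough number on the order-of-magnitude argument just mentioned — my back-of-the-envelope version with typical interstellar-medium values, not figures quoted in the talk:

```latex
% Turbulent (eddy) diffusivity and the resulting decay time of a coherent field
\eta_t \simeq \tfrac{1}{3}\, u' \ell
  \approx \tfrac{1}{3}\,(10\ \mathrm{km\,s^{-1}})(100\ \mathrm{pc})
  \approx 10^{26}\ \mathrm{cm^2\,s^{-1}},
\qquad
\tau_{\rm decay} \sim \frac{h^2}{\eta_t}
  \approx \frac{(0.5\ \mathrm{kpc})^2}{10^{26}\ \mathrm{cm^2\,s^{-1}}}
  \sim 10^{8\text{--}9}\ \mathrm{yr},
```

with h the disc scale height — only a few galactic rotation periods, so without a dynamo the observed ordered fields would not survive.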
So what we would like to do — what we have done — is to focus on a small box placed inside this galactic disc, run highly resolved local simulations there, and then measure, infer, probe the effect of the turbulence: how isotropic is it, how homogeneous is it, are there systematic deviations that lead to an induction effect on the whole? We then encapsulate that in an effective theory, a sub-grid-scale model, and run global simulations that carry a sub-grid model for these rotational effects. The framework for this was invented by Steenbeck, Krause and Rädler in the 1960s — Karl-Heinz Rädler was one of the minds who came up with it. You look at the induction equation, which is mathematically and topologically very beautiful: the curl of the velocity field crossed with the magnetic field. You can expand it with a vector identity; one of the four terms vanishes because magnetic fields are divergence-free. The first remaining term says that a divergence of the velocity can compress or decompress the magnetic field — that plays a role when you have interacting galaxies, as with M51 and its companion in the outskirts. The B·∇v term means that as soon as you have a velocity gradient projected onto the field direction, you can stretch the field out and amplify it. And the last term, v·∇B, is just advection: the field lines are frozen into the plasma and move around with it. The mean-field approach now says: we split the velocity field and the magnetic field into a mean, written with an overbar, and a fluctuation — simply the rest, once you subtract the mean from the total. You can then write down an induction equation for the mean magnetic field, and it contains the curl of an electromotive force — essentially the averaged v × B, plus a dissipative term that I won't dwell on; it matters as far as topology is concerned, but in the limit of essentially infinite magnetic Reynolds number it is typically ignored in the simplified approaches. The really exciting thing is that the mean-field induction equation is exactly the same as the induction equation, except for one additional term: the turbulent electromotive force, ε with the overbar, which embodies all the correlations between the small-scale, fluctuating velocities and the small-scale magnetic field. This is the heart of the dynamo mechanism: on the left-hand side you have the time change of the mean field, related to the curl of this mean electromotive force, so as soon as the electromotive force is proportional to the mean magnetic field, the field is amplified exponentially — and that is exactly what you need to sustain large-scale order on the scale of this galaxy. The name of the game is then to encapsulate the properties of the turbulence in the tensors alpha and eta: eta is related to the dissipative effects, the turbulent or eddy diffusion, while the alpha effect is related to the broken symmetries in the system — as soon as you have something helical, as a result of rotation together with stratification, you get helical motions that directly connect the mean electric field to the mean magnetic field and hence enable the dynamo mechanism.
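For reference, the equations being paraphrased here read, in their standard textbook form (my reconstruction, not the slides from the talk):

```latex
% Induction equation and its mean-field counterpart
\partial_t \mathbf{B} = \nabla \times \left( \mathbf{v} \times \mathbf{B} - \eta\, \nabla \times \mathbf{B} \right),
\qquad
\partial_t \overline{\mathbf{B}} = \nabla \times \left( \overline{\mathbf{v}} \times \overline{\mathbf{B}}
   + \overline{\boldsymbol{\mathcal{E}}} - \eta\, \nabla \times \overline{\mathbf{B}} \right),
\qquad
\overline{\boldsymbol{\mathcal{E}}} \equiv \overline{\mathbf{v}' \times \mathbf{B}'} .

% Simplest (local, isotropic) closure, with the alpha and eta tensors reduced to scalars
\overline{\boldsymbol{\mathcal{E}}} = \alpha\, \overline{\mathbf{B}} \;-\; \eta_t\, \nabla \times \overline{\mathbf{B}} .
```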
So, a little bit of the derivation: how do you get this mean-field induction equation? You write down the induction equation — the top line — and then you average it. All the velocities and fields are split into means and fluctuations, so you get mixed terms inside the curl of the cross product, and you need some properties of your averaging procedure: ideally these would be ensemble averages, in practical terms they can be time averages or spatial averages — in these boxes we typically take horizontal averages, so only a vertical dependence remains. What you need is that the mixed products vanish, and they do, because you can pull the mean out of the average and you are left with the average of a fluctuation, which is zero by definition. Doing this to the entire equation, the mixed terms drop out and you are left with the curl of v̄ × B̄ — just the induction of the mean field by the mean flow — plus the curl of the correlation between the fluctuating velocity and the fluctuating magnetic field. As soon as your magnetic fluctuations "know about" the velocity fluctuations, or vice versa, this effect is there, and it is important. Can you estimate where it comes from? You want to express the turbulent electromotive force as a time integral: you try to predict how the magnetic fluctuations evolve in time — written somewhat hand-wavily as a partial time derivative under the integral — and once you know that evolution you can integrate in time and predict the electromotive force as a function of time. How do you find an equation for the evolution of the fluctuations? That is also straightforward: you take the total equation — the top line without the overbar, just a bulky way of writing the induction equation split into means and fluctuations — and subtract the averaged equation from it. What remains is the equation for the fluctuations, and now there are many terms, because without averaging the cancellations don't happen. There are five of them. The first, v̄ × B′, describes advection, stretching and compression of the magnetic fluctuations by the large-scale flow. The next is related to what is called the small-scale or fluctuation dynamo: fluctuations of the magnetic field amplified by fluctuations of the velocity. That one is interesting in its own right, because the time derivative of B′ on the left-hand side is proportional to B′ itself, so it can also amplify the field exponentially — but only on scales much smaller than the eddy scales, not on the scale of the galaxy as a whole. This is the mechanism often invoked by cosmologists who run global simulations of forming discs, but there are arguments that it will never produce large-scale coherent fields with a significant radial component; you might just wind up the field and end up with some residual field of insignificant pitch angle.
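The fluctuation equation being described, together with the classic leading-order result it yields for the transport coefficients, reads (standard first-order-smoothing expressions — again my reconstruction):

```latex
% Equation for the magnetic fluctuations (total minus averaged induction equation)
\partial_t \mathbf{B}' = \nabla \times \left( \overline{\mathbf{v}} \times \mathbf{B}'
   + \mathbf{v}' \times \overline{\mathbf{B}}
   + \mathbf{v}' \times \mathbf{B}' - \overline{\mathbf{v}' \times \mathbf{B}'} \right)
   + \eta\, \nabla^2 \mathbf{B}' .

% Leading-order coefficients for isotropic, helical turbulence with correlation time tau
\alpha \simeq -\tfrac{\tau}{3}\, \overline{\mathbf{v}' \cdot \left( \nabla \times \mathbf{v}' \right)} ,
\qquad
\eta_t \simeq \tfrac{\tau}{3}\, \overline{{v'}^{\,2}} .
```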
The next two terms involve the turbulent electromotive force again, and they couple the energy between the small-scale and the large-scale fields. But the really central term is the next one, v′ × B̄: it tells you that as soon as there is a fluctuating motion v′ and a pre-existing large-scale field, you produce correlations in B′ on the left-hand side that are, by construction, correlated with v′. By tangling small-scale magnetic fluctuations off a pre-existing large-scale field, you produce magnetic fluctuations that are correlated with the velocity, and that leads to a mean electromotive force as a consequence. This is what motivated Karl-Heinz Rädler, fifty years ago, to parametrize the electromotive force as a linear function of the mean magnetic field; and if you postulate that the mean field is large-scale and slowly varying, you can pull it out of the integral. What is left then contains a v′ and a B′ — and there is another v′ in there — so you can see mathematically that the kinetic helicity, the swirliness, the helicalness of the velocity flow, shows up in the mean-field equation: that is the famous alpha effect Potsdam is so well known for. All right — how do we actually measure this in simulations? A method came about when I did my PhD, and it was something of a eureka moment for me that this was really the way to go. What you do is solve additional induction equations. If you just take the magnetic field you have in the simulation, it can be highly degenerate and its gradients poorly defined, so it is very difficult to extract the alpha and eta tensors simultaneously. The ingenious idea was to prescribe a set of test magnetic fields that are evolved passively alongside the simulation but do not talk back to it — simple harmonic functions in space and time, with components pointing in orthogonal directions — so that you construct a basis, a linear system of equations: when you compute the resulting electromotive forces, you can convert them directly into the turbulence coefficients. The properties of the turbulence are probed by imposing fields that vary in space and point in non-degenerate directions. The price is that you have to solve additional induction equations in the code. They take the velocity field from the simulation, but the magnetic fluctuations they produce are the response to the imposed test field, not the actual field. The turbulence itself will of course be affected by the presence of the real magnetic field: if the fields get strong you get what is called quenching — the turbulence effects die away, or their anisotropy is suppressed, once the field is strong enough to push back on the turbulence — and that can be measured too; we have done this. So in a way this is very powerful: we have these additional induction equations, we evolve them taking the velocity field of the simulation as input, and from them we compute the correlations of v′ with the test-field fluctuations.
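Schematically — and as confirmed in the Q&A at the end of the talk — the test fields and the ansatz against which the measured EMFs are inverted look like this (my shorthand, with B_0 an amplitude and k a vertical wavenumber):

```latex
% A quadruplet of passive test fields, harmonic in the vertical coordinate z
\overline{\mathbf{B}}^{1c} = B_0\,(\cos kz,\ 0,\ 0), \quad
\overline{\mathbf{B}}^{1s} = B_0\,(\sin kz,\ 0,\ 0), \quad
\overline{\mathbf{B}}^{2c} = B_0\,(0,\ \cos kz,\ 0), \quad
\overline{\mathbf{B}}^{2s} = B_0\,(0,\ \sin kz,\ 0).

% Each measured EMF is inverted against the linear ansatz (horizontal components i, j)
\overline{\mathcal{E}}_i = \alpha_{ij}\, \overline{B}_j \;-\; \eta_{ij}\, \big( \nabla \times \overline{\mathbf{B}} \big)_j .
```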
You can see this written down here as an example for a harmonic mode; typically we use test fields that vary along the vertical direction, with the box sitting in the disc and a vertical coordinate running above and below the midplane. So far we have assumed that this is a local relation — that the electromotive force resulting from imposing a homogeneous field is an instantaneous, point-by-point effect. It turns out turbulence is neither local nor instantaneous: turbulent eddies have a characteristic length scale and a characteristic time scale, so the information that there is a background field being twirled up into fluctuations takes a finite time to arrive, and that time scale also depends on the length scale, because of the Richardson cascade and so on. The proper mathematical formulation is that the relation between the mean electromotive force and a pre-existing mean magnetic field is not an instantaneous, local function. For clarity I split the time and the space dependence here, though you can treat both at once. In time, think of the mean electromotive force as an integral with a convolution kernel — the turbulence coefficient, your mean-field closure — applied to the history of the magnetic field backward in time. This is sometimes called a memory effect: the turbulence is aware of what happened at this location — or rather, at the fluid blob that has since arrived there — in the past, so there is a domain of dependence in time. The same holds in space: the electromotive force at a given position is not an instantaneous consequence of the mean field at exactly that location; rather, all the turbulent eddies in the vicinity collect and bring together magnetic fluctuations from the surroundings, which you describe as a convolution in space with the turbulence coefficient as kernel, probing the field over a whole neighbourhood. In Fourier space these kernels have nice Lorentzian shapes; we assume this as a model, and it is actually matched really nicely by the data we have looked at. The tool we use is the test-field method with the imposed test fields, and here is a simple test case where we solve only the hydrodynamic equations — mass and momentum conservation — with an artificial forcing function, a helical forcing, so we put in by hand exactly the effect we want to measure, just to see whether the method is sensitive to it. As an example, the amplitude of the alpha effect as a function of wavenumber has this Lorentzian shape, which means a finite domain of dependence: translated to real space, a Lorentzian — quadratic in k in the denominator — gives an exponential decay, so as you move away from a point, its influence on the electromotive force decays exponentially with distance.
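The non-local, non-instantaneous closure and the Lorentzian kernel model just described can be written compactly as (again in my notation):

```latex
% EMF as a convolution over space and time
\overline{\mathcal{E}}_i(z,t) = \int \! \mathrm{d}z' \int \! \mathrm{d}t'\;
   K_{ij}(z - z',\, t - t')\; \overline{B}_j(z', t') .

% Lorentzian model in Fourier space  <->  exponential kernel in real space
\hat{\alpha}(k) = \frac{\alpha_0}{1 + (k/k_c)^2}
\qquad \Longleftrightarrow \qquad
\alpha(z) \propto e^{-k_c |z|} .
```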
The same is true for the turbulent diffusion, shown in the lower panel, and the coefficients are actually quite comparable — not exactly the same, but comparable. You can do the same for the dependence in time. If you have a mean magnetic field that pulsates, for example — which is what we did with the harmonic approach — or that grows or decays exponentially, then the turbulence responds with a different sensitivity depending on the time scale of that change. This is plotted in the upper panel, real and imaginary parts, which means you can have a phase lag of the electromotive force with respect to the mean field. Think of a mean field belonging to a dynamo mode — the large-scale field of the Sun, of these galaxies, of planetary interiors; the framework is quite agnostic about where you apply it — perhaps an oscillatory mode. The 22-year solar cycle is obviously much longer than the turbulent turnover time, but in more extreme cases the two can come closer together, and you can see that at intermediate time lags there is a significant imaginary part, meaning a genuine phase lag: a mean field comes up, and some time later, because of the turbulent eddies, there is a phase-shifted response in the electric field — and the electric field in turn creates magnetic field, since the change of the magnetic field is the curl of the electric field, by Maxwell's equations. So there is a dynamical coupling, and time-delay effects change the dynamics significantly; they can, for instance, change the dynamo cycle period. The curves for the turbulent diffusion look similar; a slight difference is a bit of an overshoot for alpha at intermediate times, so alpha is in a way less simple — the turbulent diffusion appears monotonic, while there really seems to be an intrinsic time scale for the eddies to twirl up the fields and make them helical. So much for the test case; we have also run this on two real-world cases. The first was simulations of the turbulent interstellar medium — a lot of fun, I did this already during my PhD — where we deposit 10^51 erg in a single grid cell, producing a supernova remnant that travels through the interstellar medium and drives a shock; we then have several thousand of these interacting, everything becomes clumpy because of a thermal instability in the cooling, and you get the galactic fountain: hot material blown out of the galaxy, driving galactic winds, and cold clumps raining back down. It turns out we were able to show that this galactic fountain plays a decisive role in enabling the galactic dynamo — something that had not been appreciated before. The turbulence in the galactic disc is so vigorous that it drives a galactic wind, so plasma constantly flows away from the disc; if you amplify your magnetic field in the disc and then blow it away, you extinguish the effect — you dilute it, since in the induction equation a positive divergence of the velocity dilutes the magnetic energy density.
We have seen that the clumps produced by the thermal instability, raining back down, drag the field lines down with them — a pumping effect, a topological effect sitting in the off-diagonal elements of the dynamo tensor — and this pumping counteracts the effect of the wind: you blow everything away, but topologically you pump the mean magnetic field back down, and that is what allows the mean-field dynamo in the galaxy to work on the required timescale. The other thing we looked at was magneto-rotational turbulence. There is a magnetic instability in accretion discs — relevant for the black-hole accretion discs Luciano showed — which creates magnetized turbulence, and we asked whether this turbulence also has mean-field effects, whether it too produces a dynamo. These simulations all show a butterfly diagram, just like the one of the Sun, and we could show quantitatively that the oscillation periods seen in the simulations — whether or not they reflect the real conditions in the discs — can be explained by an alpha–omega type dynamo. Recently we have also measured the non-instantaneous and non-local effects in this system, so in a way we are building a quantitative theory of accretion-disc turbulence, which we hope to incorporate into global simulations — and the mean-field dynamo may well be relevant for producing the large-scale coherent fields that we have seen drive the jets and outflows in these systems. So this is a theory developed fifty years ago, conceived before computers were available for this kind of problem, and we have systematically confronted it with simulations and mathematical modelling to test those ideas — and we find that they agree really beautifully. With that, I'm happy to take questions. — Thank you for this talk. Are there questions from the audience, or from the people on Zoom? — The magnetic field should be divergence-free: how do you manage this on the numerical level? — I'm very puristic in that regard. There are two approaches. In the first, you make it correct to machine precision by means of your discretization; this is called constrained transport. You have a discretized version of Stokes' theorem on your discretization element, the grid cell: the magnetic flux is a surface quantity and the electric field is an edge quantity, and you evaluate a Stokes integral along the edges, so the contributions of shared edges always cancel between adjacent faces, and by construction the divergence over the whole cell is preserved. This goes back to Evans and Hawley, and if you start from a discretized magnetic field with zero divergence on your grid, it keeps it. In the other approaches you don't care so much about it but use cleaning methods — hyperbolic cleaning, parabolic cleaning — so you allow a divergence of B and try to dissipate it away. There is quite some argument in the community; obviously the cleaning variant is more affordable and makes the bigger simulations easier, but especially in the realm of dynamo theory it has been found to be essentially mandatory to use constrained transport, because otherwise you can end up with very spurious effects.
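The constrained-transport idea in one line — the discrete Faraday/Stokes update being described, in its standard formulation rather than the notation of any specific solver:

```latex
% Update the face-averaged magnetic flux with the line integral of the electric
% field around that face; each shared edge enters two adjacent faces with opposite
% sign, so the discrete div(B) is conserved to machine precision.
\frac{\mathrm{d}}{\mathrm{d}t} \int_{A} \mathbf{B} \cdot \mathrm{d}\mathbf{A}
  \;=\; - \oint_{\partial A} \mathbf{E} \cdot \mathrm{d}\boldsymbol{\ell} .
```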
That said, the related topological quantities — they come down to magnetic helicity — are actually not well conserved at the grid scale by any existing code, so it is not clear how well we can model those aspects of the magnetic field. The topology should not change: if two field lines are interlinked, you cannot simply separate them, and it is very difficult to model, and very difficult to assess numerically, how well these topological invariants — magnetic helicity first among them — are actually conserved. So there are interesting avenues for the future: developing discretizations with a built-in property of maintaining this topological aspect. — Thank you. Any other questions? — Yes, two. The first: in your modelling of these passive tracer fields, I saw only two dimensions — are you really three-dimensional, or are those fields three-dimensional? — Typically we have boxes where the test fields point either in x or in y, always as quadruplets: one pointing in x, one in y, and one of each as a sine and a cosine, just to keep them non-degenerate for the linear system relating the electromotive force to the test fields through the tensors. — Okay. And the second question: do you know anyone who has extended this to a relativistic regime? — I'm not aware of it. I think the most pressing extension of this framework, to make it usable for accretion, is actually the following: there are two contributions in the EMF — it is v × B, a fluctuating velocity crossed with a fluctuating magnetic field, correlated — but at the moment we only model the B′ and take the v′ as given from the simulation; this is called the kinematic, or quasi-kinematic, approach. If you have a magnetic instability, like buoyancy or the MRI, you create magnetic fluctuations that in a sense pre-exist in the turbulence, so you would really like a mean-field approach in the momentum equation as well — a more symmetric treatment. There is a framework developed about ten years ago by Matthias Rheinhardt and Axel Brandenburg, which they call the nonkinematic test-field method. There you use a trick — the momentum equation is intrinsically nonlinear, but you can write it down in a linearized fashion — so you can define a test-field approach for the ponderomotive force as well. Unfortunately the approach is riddled with a mathematical instability: we are interested in the inhomogeneous part of the equations, but there is always exponential growth in the homogeneous part as well, and beating that down while staying sensitive to what you actually want to measure from the turbulence has been hard. So they essentially reset the test solutions, damping the fluctuations every turnover time or so, and they argue that this yields a sound solution — it is not obvious — but that is what would really be needed to make the method nonkinematic and truly applicable to black-hole accretion discs at a fundamentally consistent level. — Okay, thank you. So let's thank once again, let's say, both speakers of the first session.
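For completeness, the topological invariant referred to here is the magnetic helicity (standard definition, with A the vector potential):

```latex
H_m = \int_V \mathbf{A} \cdot \mathbf{B}\; \mathrm{d}V ,
\qquad \mathbf{B} = \nabla \times \mathbf{A} ,
```

which measures the linkage and twist of field lines and is conserved exactly in ideal MHD — precisely the property that is hard to reproduce at the grid scale.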
|
The interstellar medium of the Milky Way and nearby disk galaxies harbours large-scale coherent magnetic fields of Microgauss strength, that can be explained via the action of a mean-field dynamo. As in our previous work, we aim to quantify dynamo effects that are self-consistently emerging in realistic direct magnetohydrodynamic simulations, but we generalise our approach to the case of a non-local (non-instantaneous) closure relation, described by a convolution integral in space (time). To this end, we leverage our comprehensive simulation framework for the supernova-regulated turbulent multi-phase interstellar medium. By introducing spatially (temporally) modulated mean fields, we extend the previously used test-field method to the spectral realm -- providing the Fourier representation of the convolution kernels. The resulting spectra of the dynamo mean-field coefficients that we obtain broadly match expectations and allow to rigorously constrain the degree of scale separation in the Galactic dynamo.
|
10.5446/57498 (DOI)
|
So now let's start with the first talk, by David May from TU Kaiserslautern. David May is head of the research group TopComposite, which is about topology-optimized and resource-efficient composites for mobility and transport, and he will talk about fiber-reinforced polymers: half the weight, but double the modeling effort. — Many thanks for the kind introduction, and a warm welcome from my side as well. First of all, many thanks to the organizers for giving us the chance to present here. Some of you might know that the Institute for Composite Materials only recently joined the Leibniz Association, at the beginning of last year, so this is the first MMS Days for us, and I'm very happy to have the chance to present what we are doing at our institute over the next 20 to 25 minutes. As you can already see in the title, fiber-reinforced polymers are pretty much what it is all about at our institute, so I'll start with a short introduction to what fiber-reinforced polymers actually are. We are talking about a composite material: a material in which two or more phases are combined in such a way that they appear quasi-homogeneous from a macroscopic point of view. The reason you do that is to obtain performance superior to single-phase materials — for example, pure polymers. The phases in a composite can originate from different material groups — you can see the basic material groups here — and they can also have different shapes within the composite. When we talk about fiber-reinforced polymers, we are talking about fiber composites: one phase has the fiber shape that gives these composites their name, and the second is a matrix surrounding the single fibers. And it is exactly this fiber shape that makes these materials so interesting for us as engineers. Let me explain that with the fiber paradoxon, something described by Alan Arnold Griffith as early as the 1920s. What he found empirically is that if you take a material and bring it into fiber form, you end up with a much higher strength — mostly in the fiber direction — than the same material has in bulk form; glass fibers, for example, are much stronger and stiffer than bulk glass. He also found that, down to a certain range, the thinner the fiber, the higher its strength. For us as engineers that is of course very interesting. In this image you can see some typically used fiber materials — glass fibers, carbon fibers, aramid fibers — and you can see that we do our best to exploit this fiber paradoxon: we end up with fibers in the range of a few microns in diameter, much thinner than a human hair. The reason for the paradoxon depends on the material. Mostly it is a statistical effect: in a glass fiber, for example, the maximum defect size is naturally limited by the fiber diameter, whereas in a bulk material it can be much larger.
In carbon fibers, on the other hand, the strongest molecular bonds — the graphite planes — are oriented along the fiber direction, which gives the material its high strength and stiffness in that direction. Now, while this is very attractive for us — we end up with materials of fairly low density but high stiffness and strength, which lets us build lightweight parts for fast cars, fast and light planes, and so on — it is also a challenge when it comes to modeling; hence the second part of the title. In certain applications we can end up with half the weight of a metal part, or even less, if we properly exploit the properties of the fibers, but on the other side this is quite a challenge for the modeling. What you see in the top-left picture is something I haven't explained so far: the fibers themselves bring the superior mechanical properties, but a dry fiber fabric is, to the touch, not that different from the fabric of the clothes I'm wearing today. Put pressure on it and it simply kinks and bends away; pull at one point of your shirt and you deform the structure, you pull out individual fibers. So you need a surrounding matrix that distributes the load, introduces it into the single fibers, and provides stability. We use polymers for that, because polymers bond well to the fibers — giving good load introduction — and they have a low density, which preserves the lightweight-design potential. So from a microscopic point of view you get what the top-left picture shows: fibers embedded in a polymeric matrix. Now, what we want as engineers in the end is to build a plane or a car — macroscopic parts, several meters in size — and starting from a single fiber of a few microns obviously makes no sense. So, typically, you take a bundle of fibers: today, when you buy them, that means several tens of thousands of single fibers per bundle. Even then you would have a hard time building a part from one bundle, so in many cases you combine several bundles into a fabric — a woven fabric, just like in the clothing industry — and you get one sheet of fibers, with the fibers running in one or two directions. To reach the thickness you need for your part, and because you want fibers oriented in all the directions in which you expect significant loads — that is different from homogeneous materials: with fiber composites you have to make sure the fibers are aligned with the load directions — you end up with a stacking sequence, a laminated material: the fiber composite is a laminate. So, in the end, composites are hierarchical materials. As engineers we are ultimately interested in what happens at the macroscopic level — when does the part break; if you build an aeroplane, at which loads does it fail?
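As a reminder of why fiber orientation matters so much for the laminate design just described, the textbook rule-of-mixtures estimates for a unidirectional ply (standard micromechanics, not formulas quoted in the talk) are:

```latex
% Stiffness of a unidirectional fiber/matrix ply with fiber volume fraction V_f
E_\parallel \;\approx\; V_f\, E_f + (1 - V_f)\, E_m \quad \text{(along the fibers)},
\qquad
\frac{1}{E_\perp} \;\approx\; \frac{V_f}{E_f} + \frac{1 - V_f}{E_m} \quad \text{(transverse)} .
```

The ply is therefore far stiffer along the fibers than across them, which is why the stacking sequence has to place fibers in every major load direction.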
So you have to characterize your materials like that. But of course that is a direct result of what is happening on the laminate level, what is happening on the fiber bundle level, and what is happening with the single fibers and the matrix between the single fibers. And that's where it gets quite complicated. I want to give you a specific example of the modeling effort we have to undertake to describe what is happening on these different scales. This specific example is from the field of processing technologies: how do you manufacture a fiber reinforced polymer composite? One process group which is quite popular nowadays is so-called liquid composite molding. You take a dry fiber structure, fiber bundles or complete fabrics as I just said, and bring it into a near-net shape. So you build a preform, which is pretty much a representation of the final part except that it's only fibers: all the fibers are in the position and orientation you want them to have in the final part. Then you take that structure and infiltrate it with a liquid polymer, typically a thermoset material. So you have a reactive resin system which flows through the dry fiber fabric, either via overpressure, where you inject it, or via a vacuum-based process where you work with underpressure, and you infiltrate the part. The thermoset resin system then cures chemically at a defined temperature and pressure, so you end up with your matrix and you have your final part. To give you an impression of how many different process variants we have today, and what a big field this actually is, you can see some images of parts manufactured by this process family: parts as small as bicycle parts. You might have heard of the BMW i3 and i8, which were very much in the news a couple of years ago and were among the first series-produced cars using carbon fiber reinforced polymers. Airplanes today have more than 50% of their structural weight made from fiber reinforced polymers, and some of those parts are also made by liquid composite molding. And you have certainly seen those big wind energy plants standing around in the landscape, with single blades more than 100 meters long today and getting bigger and bigger; they are also made by liquid composite molding, typically ending up with a glass fiber reinforced polymer. So it's not a single process, it's a process family, all having in common that impregnation step, which we are trying to model in order to design the process. Now, what you can see here on the top left side is a press which we have just installed at our institute; that's the scale we're talking about. We recently installed a press to manufacture parts which has a closing force of two and a half thousand tons. You can easily imagine what happens if you build a part for automotive, for example, with a process variant called wet compression molding, where you take your dry textile structure, spread the resin on it and then push it into the textile using that clamping force. You can imagine that we end up with very high injection pressure levels. We have very complex textile deformation effects, because the textiles themselves are of course not rigid, they're deformable, and so the fiber structure is changing, and its conductance for the resin flow is changing as well.
And then on the other side you have a resin system, a thermoset resin system, which cures, and curing means that a molecular network is growing during the process. So the resin changes its properties depending on the time and temperature you have at each point. And all of this takes place, if you talk about industrial processes like those used at BMW, within a couple of minutes, and there is literally no room for errors. A thermoset resin system, once it's cured, is cured. If you didn't manage to completely fill the part before it cured, you can basically throw it away; there's no possibility to melt it again. Recycling is a huge topic of research in the field of carbon fiber composites and is not fully solved so far, so basically you have produced scrap. So what we want to do at our institute, the main task, is to find a way, if you are at the macro scale on the right side, the component level, to ensure that the part is fully infiltrated, without any air inclusions, at the end of your process, before the resin has cured so far that it can no longer flow through the fiber structure. And you can see here, it's quite funny: after the first two talks we've seen today, for us macro scale is the component scale, not a galaxy cluster or something. You can see the difference, and for us micro scale is what we refer to as the fiber level. So from the point of view of some of the audience that's probably quite a narrow span; for us it's quite a large range of scales. What I want to say with that slide is that what we actually care about is the component-level flow. We want to see, imagine you had an invisible, transparent tool, what the flow front looks like. If you want to model that, of course you have to take into account what happens on the scales below: what happens on the textile level, where you have flow in flow channels between the fiber bundles, and these flow channels are somewhere in the range of millimeters. That all takes place together and all of it determines whether you get porosity or voids in your part. And if you think it further, we have molecular effects, because the resin cures on a molecular level, which you would also have to take into account to understand what is happening in the component-level flow. So in the end, as with most real-life phenomena, we have relevant physical phenomena operating across several scales in space and time. Our challenge is to cope with that and to try to find a proper way of designing these processes, because in the end what we want to know is: which pressure should we use to inject the resin, which tool temperature should be used, which influences the resin viscosity, where to put the injection points, where to let the air out of the structure. That means we need support for the tool design, and in most companies today this is done by trial and error. And that's okay, because most of the time liquid composite molding is used for smaller batch production, small series, by medium-sized and smaller companies, so they naturally take a trial-and-error approach. But if you look at companies like Airbus or BMW, they want to have simulations, because they have large batch production. What they want is support for tool and process design based on numerical simulation, and that's exactly what we're trying to do at our institute.
We start, of course, with the macro level; that's what we call macro-level, component-level simulations: finite element simulations, for example with the software Visual-RTM from ESI, though you could also use LS-DYNA or OpenFOAM. You can see these are by far less complicated equations than those we have seen so far today, and from that alone you can see how much complexity we still neglect today. What you can see here is Darcy's law. Darcy's law is a special solution of Navier-Stokes which is valid under certain conditions, for example laminar flow, which are generally accepted to hold for liquid composite molding. It pretty much correlates the flow velocity of the resin system through the fiber structure with the pressure gradient you have and two relevant material properties: one being the viscosity of the resin system, so the flowability of the resin, and the other being the so-called permeability of the fiber structure, which is, if you want, the conductance of the fiber structure towards the flow. It was developed by Henry Darcy, a French engineer who was investigating the flow of water through sand, and today it's the standard equation used for the simulation of liquid composite molding technologies. The state of the art today is that we use experimental input for that macro simulation. So on the one hand we have rheometers; rheometers allow you to learn about the time- and temperature-dependent behavior of polymers during the curing reaction, and there are standards for how to measure that. That's, I don't want to say easy, but that's the easier part of it. On the other side you have permeability measurement, which is pretty much my research background: experimental permeability measurement, where we do not have any industrial standards today on how to measure it. And what you have to see is that permeability is direction dependent, so it depends on which direction the flow takes through the fiber structure; we end up with a second-order tensor. It also depends on how much you compress your fiber stack: if you press the stack together you change the structure, you increase the fiber volume content, and that of course strongly influences the permeability of the structure. So what you need is a whole bunch of measurements for viscosity and permeability. If you then have a specific model, you give each volume element information on which porosity it has, depending on the component thickness at that specific spot, and which orientation the fiber structure has, and then, knowing the permeability values, you can calculate the flow through the structure. For each volume element of resin going into the material you log the time and temperature history, so you know at which point of the cure you are and can say which viscosity it has. You can combine all of that in a fairly straightforward way. The problem is this: imagine a flat woven fabric which you have to bring into a three-dimensional shape, and unfortunately most parts we have in cars and planes are not flat, they're three-dimensionally formed. Glass fibers and carbon fibers do not have any usable plastic deformability, so compared to metal sheets, which you can form by deep drawing or something like that, you cannot do that with a carbon fiber textile; it would just break. So you have to drape the structure into that three-dimensional shape, which of course completely changes its structure. And that is completely neglected most of the time today.
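To make the role of Darcy's law described above concrete: for one-dimensional filling at constant injection pressure it yields a simple estimate of the fill time, which has to stay below the gel time of the resin. The values in this Python sketch are assumed purely for illustration:

# One-dimensional constant-pressure filling based on Darcy's law:
#   v = -(K / mu) * dp/dx,  flow front x(t) = sqrt(2 * K * dp * t / (mu * phi))
# All numbers are illustrative assumptions, not values from the talk.
K = 1e-10      # permeability in the flow direction, m^2 (assumed)
mu = 0.1       # resin viscosity, Pa*s (assumed, before curing raises it)
dp = 2e5       # injection pressure difference, Pa (assumed)
phi = 0.45     # porosity = 1 - fiber volume content (assumed)
L = 0.5        # flow length to the vent, m (assumed)

t_fill = mu * phi * L**2 / (2.0 * K * dp)
print(f"Estimated fill time: {t_fill:.0f} s "
      "(must stay below the resin gel time, otherwise the part is scrap)")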
What you would need are measurements for different, we call them, shearing angles, so different deformation states of your textile, and different fiber volume contents, and then you end up with a huge experimental program. That of course leads to the situation where you either have to reduce your test program, which reduces the prediction accuracy of your simulation, or you have a huge experimental effort which nobody wants to pay for, and you end up with the same problem again. So what do you do about it? We as researchers are of course dreaming of the case where you have a fully numerical workflow and don't need any experiments anymore. That's exactly what we tried to do a couple of years ago, and are still trying to do. If you want to bypass experimental effort, you naturally start by thinking: we need to use data which is as easily accessible as possible. So we use data-sheet information, which you get from the textile manufacturer and which tells you how many fibers you have per unit area, things like that, and we use microscopy, which gives you information on the fiber bundle width and height. Then we build up models for, again, the different scales I've just shown you. For obvious reasons it is not possible to have a macro-level simulation which is resolved down to the fiber level, because you have millions and millions of fibers in a single part, so to cope with the computational effort you separate the scales. We have a macro-level simulation, basically the same as before, but we do not feed it with experimental values anymore; we still do for the viscosity, but for the permeability we replace the measurements by simulations on the textile level and micro level. And you can see, even if we have that meso level, a single unit cell of the textile, we cannot resolve it down to the fiber level and build up those models, because you would still have millions of fibers in it. So what you do is start by building a micro-level simulation which looks at the permeability within a certain fiber bundle, and then you take the permeability from those flow simulations and allocate it to the fiber bundles in the textile-level simulation, which are pretty much defined as porous solids with a homogeneous permeability. When we build up these models, one of the most important things is coping with the statistical variation of the geometry; you can imagine that with thousands and thousands of fibers you have every possible variation allocated to them. So when we look at the 2D microscopic images, for example, we do not have one certain fiber bundle width, but a range within which we see the fiber bundle width vary. Therefore we do not build up one textile model, we build up dozens of textile models which vary randomly within the defined range, with a varying fiber bundle width and height, for example. You end up with, let's say, a statistical volume element rather than a representative volume element, and each of these single elements is unique. So if you simulate the flow through them, and do that dozens of times, you get not only an average value of the permeability but a variation, which gives you an impression of the scatter of the permeability, and these values you can then pass on to the macro-level simulation.
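The statistical-volume-element idea can be illustrated with a cheap stand-in for the micro-scale flow simulation. The sketch below samples a hypothetical scatter of the local fiber volume fraction, reflecting the varying bundle width and height seen in microscopy, and evaluates a Gebart-type analytic estimate for permeability along the fibers; it is only meant to show how a permeability distribution, rather than a single value, is obtained. Fiber radius, packing constant and the sampled range are assumptions:

import numpy as np

rng = np.random.default_rng(0)

def gebart_K_parallel(Vf, r_fiber=3.5e-6, c=57.0):
    # Gebart-type estimate of permeability along the fibers for a
    # quadratically packed bundle; used here only as a cheap stand-in
    # for a full micro-scale flow simulation.
    return (8.0 * r_fiber**2 / c) * (1.0 - Vf) ** 3 / Vf**2

# Hypothetical scatter of the local fiber volume fraction inside the bundles.
Vf_samples = rng.uniform(0.50, 0.65, size=200)
K_samples = gebart_K_parallel(Vf_samples)

print(f"mean K = {K_samples.mean():.2e} m^2")
print(f"std  K = {K_samples.std():.2e} m^2  "
      "(this spread is what the statistical volume elements deliver)")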
As you can see here, we're using a software called GeoDict for that; you could also use OpenFOAM. GeoDict is from a company called Math2Market, which happens to be a spin-off from the Fraunhofer Institute for Industrial Mathematics, which is also located in Kaiserslautern, just like we are, so we have good contact with them. On the micro level it's pretty much based on Stokes flow; on the meso level we have Stokes-Brinkman, where the Brinkman term accounts for the flow into the porous solids, so it's flow through a porous medium in which the solids are themselves porous, in this case the yarns. Maybe interesting: you might be wondering why we have that virtual stacking and compaction step. We have different thicknesses in our part, so we need permeability values for each thickness. What we do is build up the individual layers, stack them, and then compress them. You might ask why we don't simply model the individual layers to already have the thickness we want in the final part, and that's actually what we did when we started off. What you get then is a model with perfect elliptical shapes of your fiber bundles. We had a project back then whose title was something like "digital twin for composites", and I always joked that it's not a digital twin, it looks more like a digital cousin, a third-grade cousin or something. The reason I said that is that if you look at the cross section of the textiles, it's far away from an elliptical shape; what you have are really complexly formed bundles, because you can imagine that thousands of fibers in a bundle have a very complex compression behavior. If we instead do that stacking with a parameterized, randomized offset and then compress it, we get closer to reality. So how well does it work? If you have a look here, you can see a real-life CT image on top, and on the bottom you can see the digital twin, that word again, and it looks fairly good, quite similar to the real-life structure we have. The problem is that in most cases the permeability is still maybe about an order of magnitude away from the experiments we performed on the very same material. Now, as a scientist, the first approach would of course be to optimize the simulation model. In that case it was a quite application-oriented project, a ZIM project, for the funding experts among you. That means, if you take the Leibniz premise "theoria cum praxi", in that case we were more on the praxi side, so we were looking for a very application-oriented solution. And if your simulation doesn't match, of course you do a fitting procedure. So we simply took those geometrical parameters which we know have the most influence on the permeability in the different directions and adapted them until we had a perfect match to the permeability at a certain fiber volume content. You can see this in the top right diagram: for a certain fiber volume content, 50%, we reached a very good fit in all flow directions for our model. The interesting thing is that if you now take this calibrated model and increase the fiber volume content, so further compact the material, you still have a fairly good fit. And you might say plus 30% is not a fairly good fit; for us it is, and here is why: I recently organized an international benchmark study on experimental permeability measurement.
And we had variations of 20 to 40% for the very same test methods, so that's what you can expect today from experimental methods for permeability determination. So we can at least get into the range of accuracy of the experimental methods, and we can cut out two thirds of the experimental effort by replacing it with simulations. That's the way to go for this application field. Of course, that's nothing we're really satisfied with, because we want to find a way to have models which are close to reality in the first place, and that is also something we're currently trying to do. What we're doing is: as input, we do not use computer-generated models, but real-life structures. Naturally, you would think that if you use real-life structures, you get closer to reality. So we used our new IVW 3D X-ray microscope and performed a scan. With older CT scans we had problems resolving carbon fibers, due to the similar density of fiber and matrix; with the new machine this is possible. You still want to separate the scales, because this of course doesn't solve the computational-effort problem. So you have a look at the fiber bundles, build a simulation model, then again allocate the permeability to the textile-scale model, pretty much as before. The only problem is that the effort to make these scans is quite high and the samples are quite small, so you still have that problem with statistical variation. So how well does that work? Unfortunately, it's not as easy as one might think; it's not as easy as I thought when I started an international benchmark study on this exact topic, together with KU Leuven and Ecole Centrale de Nantes. What we did is: we didn't even go for the textile level, we just took a single 400-fiber unit cell from the micro scale, so flow within a fiber bundle, as segmented images, and we sent it around to 16 participants all over the world working in this direction and asked them to perform the flow simulation and permeability derivation with the method they consider the most accurate. And you can see here that we have huge variations between the partners; in fact, out of the 16 participants we asked to perform the simulation, no two used the exact same methods for this task. There are some obvious differences: some had problems with computational capabilities, so they applied a very coarse mesh, and that of course changes the fiber volume content, which changes the permeability. There were other differences we do not even understand so far, but we have them. So, what is the current status of liquid composite molding simulation? You can either have a very high experimental effort, which still gives you results that are often questionable; a full numerical workflow doesn't work yet, because our modeling tasks are not fulfilled perfectly so far; and scanning parts doesn't really work either, it has its own problems. By now I'm not really able to tell you where it's heading; we're researching all three fields for now and hoping to find the best way, maybe a combination of ways, in the end. That is an example of a typical challenge we have today, which is why we say "double the modeling effort": we always have to cope with these different scales we're working with.
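For reference, the Stokes-Brinkman formulation mentioned above for the meso-scale (textile-level) simulations can be written, in one common form, in LaTeX notation as

-\nabla p + \mu_{\mathrm{eff}}\,\Delta\mathbf{u} - \mu\,\mathbf{K}^{-1}\mathbf{u} = \mathbf{0}, \qquad \nabla\cdot\mathbf{u} = 0,

where the Brinkman drag term \mu\,\mathbf{K}^{-1}\mathbf{u} is active inside the yarns, with \mathbf{K} the micro-scale bundle permeability (there the balance reduces to Darcy-like behavior), and is dropped in the open channels between the bundles, where plain Stokes flow remains. The effective viscosity \mu_{\mathrm{eff}} is often simply set equal to \mu; the exact form implemented in a given solver may differ.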
Now of course that's just a small section of what IVW is currently doing as research. What you can see here are the manufacturing capabilities of IVW, and I just want to show you that slide to make clear that this is just one field, but the problem with that "tyranny of scales", as it is often called in the literature, exists for all the processes we're working with. We heard about induction in the last talk, and induction is actually also a topic at our institute, in terms of induction welding: you can induction-heat a carbon woven fabric to weld it, for example if you have a thermoplastic matrix, which is meltable. No matter whether you need the electromagnetic properties of a carbon fiber weave, or the flow properties of a compound of short fibers with fillers and a polymer, as used for injection molding for example, you always need to find a way to describe the material behavior across all the scales to arrive at a working process design model. Composite materials today are still considered a quite young material group; of course they have been around for several decades in industrial applications, but compared to other materials there is still quite a gap. And it's of course not only about process simulation; we also do structural simulation of parts, and you can see here it's the same game: how do cracks appear in the matrix on the micro-scale level, how do they grow across the meso scale, the textile level, and how does that interact with the final part properties? Fatigue is a quite complex area: what happens to a part which does not actually break from a load, but where the load is repeated millions of times, and how do you simulate something like that? In that case you not only have to cope with spatial scales but also with temporal scales, which are widely different. I want to give you another example of the complexity of the scales in our case, and easily the most complicated field to deal with is probably non-destructive testing: how do you find out which damages you have in a part, and how do you model these damages? I've been talking about the single fibers, the fiber bundle level, the textile and laminate level, the part level. If you look at typical defects you can have in a part: cracks in single layers, impact defects; and since we have a laminate composite, one of the biggest problems you can have is delamination, single layers debonding from each other. The problem is that these structural defects, which can have a huge impact on the final properties, are on the same size scales as the structural constituents: you have micro cracks which are as big as a fiber, you have impact defects and delaminations as big as the laminate scale, and in the end the part breaks when you have macroscopic cracks, which can span the complete part size. So we have a hierarchical material and a hierarchical distribution of material defects, which is accordingly complex to investigate. That is of course also a research topic we're working on: with X-ray tomography, as I've just shown, we can look at single-fiber damage.
And of course, with that we're only looking at a very small field of view. On the other side we have ultrasonics and acoustic emission, which are also used in industry today and with which we can look at delaminations and impact defects in the range of a textile layer thickness. If you look at methods like that, you can see that with a hierarchical material like a composite it is complicated to apply something like acoustic emission, because the hierarchical nature of the material has a huge impact on the damping properties and the propagation of sound waves, for example, and you have to understand how the sound waves propagate in order to interpret the acoustic emission images you get. Then, further to the right, you have infrared thermography, which theoretically gives you the opportunity to look at a complete part, but it has a limited penetration depth, so we have the same problem here. Theoretically we have experimental methods which allow us to cross the scales; nothing usable for all the scales, but we have to use them all to really understand what is happening. And that, for us, is only possible if we have modeling technologies, which we have: you can see that fiber-level arrangement again, which is the micro-level simulation, finite element in that case, to simulate the acoustic emission behavior, and then we use analytical modeling for the complete part behavior. So whatever we do, we always have the challenge that we need to understand what is happening on the different scales. If we want to simulate the macroscopic behavior, we need to find ways to describe the behavior on the micro level and we need to homogenize: we need upscaling and homogenization methods which do not lose too much information. In most cases we're still struggling with that, and that means that simulations in the field of composites today are often not fully functional, meaning they either take too long, because the computational effort is too high, or the prediction accuracy is not good. Let me give you a small teaser at the end of my presentation; it's a small amendment, but I wanted to have these two slides in because it's really quite a perfect fit to the MMS Days, as this is a topic we only recently started working on at IVW, and we are only able to do that because we have become part of the Leibniz Association. What I'm talking about is machine learning as one possible way to get rid of that clash between accuracy and effort you always have in simulation: if we want to put more physics into the simulation to be more accurate, then we have more computational effort, and one possible solution might be machine learning. You can read papers reporting simulations that are three orders of magnitude faster, and of course we thought that's something interesting for us. So we applied, together with the Weierstrass Institute, the Leibniz Institute for Polymer Research, and colleagues from Fraunhofer and the German Research Center for Artificial Intelligence, for funding in the Leibniz Cooperative Excellence funding scheme, which now gives us the opportunity to look at possibilities to use machine learning to speed up our liquid composite molding process simulation. That's exactly what we're doing, and I hope that I, or one of my colleagues, will be able to present some results of that project at the next MMS Days; it only started at the beginning of this year.
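A minimal sketch of that surrogate idea, assuming one wanted to learn, say, permeability as a function of fiber volume content and shear angle from a handful of expensive simulations and then query the surrogate instead of the flow solver. The training data here is synthetic, and the model choice, a Gaussian process from scikit-learn, is just one option rather than what the project will necessarily use:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy surrogate: map (fiber volume content, shear angle) to log10(permeability).
# The "training data" below is synthetic, standing in for expensive simulation
# results; all parameter ranges are assumptions.
rng = np.random.default_rng(1)
X = rng.uniform([0.45, 0.0], [0.65, 40.0], size=(30, 2))   # (Vf, shear angle in deg)
y = np.log10(1e-10 * (1 - X[:, 0]) ** 3 / X[:, 0] ** 2 * (1 + 0.01 * X[:, 1]))

model = GaussianProcessRegressor(kernel=RBF(length_scale=[0.05, 10.0]),
                                 normalize_y=True).fit(X, y)

query = np.array([[0.55, 20.0]])               # a state not in the training set
mean, std = model.predict(query, return_std=True)
print(f"log10(K) ~ {mean[0]:.2f} +/- {std[0]:.2f}")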
But as I've said, I think that's the good thing about the MMS Days and the MMS network: we hadn't even been a member of the Leibniz Association for a year and already had the opportunity to start a project like that. For us as an institute it really shows the benefit of the Leibniz Association. And with that, sorry for probably running over time, thank you to the funding agencies and thank you of course for listening. Thank you very much for this interesting presentation. And now it's time for questions. Okay, yeah, a question on the permeability: in principle it's a tensor, right? Yeah. But in the measurements and also in the simulations I see you only have the diagonal elements, right? It's a second-order tensor, but the nice thing is that if you produce textiles you get certain symmetry conditions. If you look at a woven fabric, you have two main directions, and that allows us to make it easier: most of the time we only differentiate between the highest and lowest in-plane permeability and the out-of-plane permeability. So when we do experiments, we have in-plane test rigs, one-dimensional and two-dimensional, and out-of-plane test rigs. And for the numerical part, you compute the permeability just by inverting Darcy's equation, so to say, and you apply a pressure difference between two sides of your domain? Yeah, that's how it's typically done, yes. On one of your slides you have a graph for the permeability, and you measure the permeability in square meters? Yeah, you can see it as some kind of hydraulic radius. So, like a cross section. Okay, but if you look at Darcy's law, on the left side there is a velocity and there is a gradient of the pressure, and then the coefficient, if you measure the permeability in square meters, is it consistent? It is consistent, yes, that's what you get: you have the Pascal-seconds for the viscosity and then you end up with the square meters. It's a typical question. Are there further questions, maybe from Zoom? Okay, so let's thank the speaker again.
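To spell out the tensor point raised in the questions, Darcy's law in its general form reads, in LaTeX notation,

\mathbf{v} = -\frac{1}{\mu}\,\mathbf{K}\,\nabla p, \qquad \mathbf{K} = \begin{pmatrix} K_1 & 0 & 0 \\ 0 & K_2 & 0 \\ 0 & 0 & K_3 \end{pmatrix} \text{ in the principal axes,}

with K_1 and K_2 the in-plane and K_3 the out-of-plane principal permeabilities of the textile; measuring along these principal directions is why only diagonal entries are reported. A quick unit check also explains the square meters: [v] = m/s, [\mu] = Pa s and [\nabla p] = Pa/m, so [K] = m^2, which is why the permeability can be read as a kind of hydraulic cross-section.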
|
Fiber reinforced polymers (FRPs) are ideal lightweight materials that can play a key role in sustainable mobility, harvesting of natural energy resources, assisted living, and cutting-edge medical technologies. However, high production costs are often still a main obstacle for broad industrial application. Robust design of the manufacturing processes and efficient analysis methods are essential requirements in order to achieve economical implementation. Today, this is usually enabled by numerical simulations of the material behavior and the manufacturing processes. In this keynote talk, we showcase the areas of fiber reinforced polymer composites where numerical simulations have been implemented in order to gain a deeper understanding of their processing and physical behavior. These range from simulations of FRP manufacturing processes to virtual material characterization at the micro and meso scale, as well as the design of smart structures and their analysis using novel techniques.
|
10.5446/57499 (DOI)
|
So our next speaker is David Neuter from the Zuse Institute Berlin; he's a postdoctoral researcher in the MaRDI team and he will be talking today about the MaRDI portal for mathematical research data. Yes, thank you for the introduction. Nice to be here. This talk is not about research results but more about a community service that we are developing, the MaRDI portal. It is part of the MaRDI project, specifically of task area five, which is hosted at the Zuse Institute and at FIZ Karlsruhe. So what do I mean by portal? It's really a wiki, a web portal based on the MediaWiki and Wikibase technology stack known from Wikipedia and Wikidata, which will allow us to find and access research data from a variety of sources: repositories that exist right now, but in many different locations, not connected to each other and not communicating. Here we are putting these things together and thus take an important step towards the implementation of the FAIR principles for research data; I'm going to explain what this means in a second. First some context. This is part of the national research data infrastructure that's managed by the DFG and started a couple of years ago. It consists of consortia representing the German scientific landscape, and the aim of the initiative is to create a permanent digital repository of knowledge in which research data is made available to researchers according to these FAIR principles. So what are these principles? The FAIR principles are guidelines for research data management and stewardship in the digital age, specifically suited not only to human users but also to machines, to automated applications for example, such that these can also find, explore and access data automatically. FAIR stands for findable, accessible, interoperable and reusable. Findable means that data has to have rich annotation by means of metadata such that it can be easily found, also by machines. When data is found, it needs to be accessible, so there have to be protocols that ensure that the data can actually be accessed. Interoperable means that, since data is part of some data workflow, there needs to be a common language for how the data is described by metadata, such that data from different sources and tools can actually be linked and understand each other. And finally, the ultimate goal of the FAIR principles is that data is reusable: that other researchers can work with data that has been published by another institute, for instance, and that research is reproducible, which is a big issue in data-intensive fields right now. Okay. The NFDI is currently made up of 19 consortia from engineering sciences, natural sciences, humanities and social sciences, and life sciences, and the third round of calls is currently under way; by the start of next year there will be a total of 30 consortia. So right now it's 19, then 30. One of these consortia is MaRDI, the Mathematical Research Data Initiative, representing all of mathematics. Its mission is to create a robust infrastructure for mathematical research data, to set standards, define confirmable workflows and certify data that is trusted and validated, and also to provide services for the mathematical community and for the wider community. It also has a vision, which is to build a community that embraces the FAIR data culture in research work. So, just briefly, here is how the MaRDI project is constructed.
There are four task areas from different sub-domains of mathematics, namely computer algebra, scientific computing, statistics and machine learning, and also an interdisciplinary task area which works together with other consortia. These try to implement the FAIR principles in their respective sub-fields, which have different needs, different requirements and different kinds of data, and all of their findings are going to be made available through the same portal which we're building. Then there are also task areas for data culture and community integration, and for governance. So what is mathematical research data? It's not only numerical data, tables of numbers, which one might think of first, but also mathematical expressions and formulae, mathematical models, software code, implementations of algorithms, 3D objects, visualizations, and of course documents, and possibly much more. All of these have to be considered in this framework. The current situation is that there are already a lot of interesting and very powerful services and data repositories for different communities, for example OpenML, MORwiki, zbMATH, which are important but not connected to each other. Each performs a service for a certain community, but that's it; they're placed in silos where they're all disconnected. In order to really harvest the potential of digital data and digital data repositories, this project wants to link this knowledge and make it accessible also across services. The part I'm talking about will be a unified gateway, a centralized solution where we can access the data from all these different sources, and, like I said, for machines as well as for humans. Then there are a couple of services planned to be implemented within this project, like knowledge graphs, databases for numerical algorithms, a model database, a benchmark framework, repositories for computer algebra, and many more. There will also be external services like Zenodo or the Digital Library of Mathematical Functions which are going to be integrated into the portal. Just to describe how this can be useful, a very simple use case: say we want to start working on a numerical method, for example a new solver, maybe create a new variant of GMRES. Then we could go to the portal and explore all the variations of that algorithm we have: the database would give us the algorithms and their relationships, we get all the publications on the topic, software implementations, software environments, actual code within virtual containers maybe, test and performance data. Then there are the actual data sets that we can work on, data that has been certified, that we can trust, that we can use for benchmarks. We can see who the experts in the field are and whom we should contact if we start working on that topic. And then there are the services provided for the community, a benchmark framework for example, but maybe in the future also the possibility to remotely execute algorithms and workflows, and data storage. On the technical side, the portal is built around a knowledge graph which is based on Wikibase, which is used by Wikidata for example, so it is well proven, widely used and highly scalable, and also widely used by the other consortia of the NFDI which we're going to work with.
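To give a feel for how such a Wikibase-backed knowledge graph is typically queried, here is a hedged Python sketch in the style of the Wikidata query service. The endpoint URL, the property and item identifiers, and the label-service line are placeholders; the real MaRDI portal will define its own ontology, identifiers and endpoints:

import requests

# Sketch of a SPARQL lookup against a Wikibase-backed knowledge graph.
# Endpoint and all IDs below are hypothetical placeholders.
ENDPOINT = "https://example.org/mardi/sparql"

QUERY = """
SELECT ?dataset ?datasetLabel WHERE {
  ?dataset wdt:P31 wd:Q12345 .    # hypothetical: instance of 'benchmark data set'
  ?dataset wdt:P200 wd:Q67890 .   # hypothetical: property 'matrix is symmetric positive definite'
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

# Prefixes are assumed to be pre-declared by the endpoint, as on the Wikidata
# query service; otherwise they would need explicit PREFIX lines.
response = requests.get(ENDPOINT, params={"query": QUERY, "format": "json"}, timeout=30)
for row in response.json()["results"]["bindings"]:
    print(row["datasetLabel"]["value"], row["dataset"]["value"])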
This knowledge graph will only import metadata; we're not going to import all the databases in the world, but try to get the metadata, leave the data in the actual repositories, and link to the data. There are some requirements we have to meet in order to account for mathematics: we need an ontology of mathematical data, defining the relevant properties that mathematical objects have and what the data types are, so this has to be accommodated. There will be persistent identifiers so that the data is actually citable; also, if the real data is deleted, we can still access the metadata, and so forth. There will be advanced mathematical search functions, like formula search, which is already implemented; we're going to have a talk about this after mine. And there will be advanced filtering for mathematical properties in the search: for example, if we want only data sets with symmetric positive definite matrices, we can filter for those. The knowledge graph is generated in an automated fashion from curated sources that we trust, so there's no need to put in data by hand; this is all automated, and in addition there will be APIs for our partners such that they can also insert their data. Some more features: as I said, there will be a user-friendly interface and also the possibility to do advanced queries on the knowledge graph by means of the SPARQL query language. There will be a machine-actionable interface, we're going to provide extensive documentation and best-practice guides, and the knowledge graph will be integrated with the overarching knowledge graph of the entire NFDI, so there will be links and interactions with the other disciplines. Finally, in the future there may be a distributed computing service such that code, algorithms and workflows can be directly executed through the portal and act on the data that is referenced there. Okay, that's it already. The expected launch date is the end of the year; there's already an initial version available at the link, where you can maybe track the progress. Until then, thank you for your attention. Thank you very much, now it's time for questions. Thank you very much for your presentation. You talked about data that you trust, which kind of raises the question: on what basis can you decide what data to trust? Sorry, I think it's a bit of a provocative question, but yeah. Well, I'm not going to decide that, actually; it's a question for the domain-specific task areas. I think they will provide criteria, or provide the actual data that they know can be included, but I don't know how they will decide that. Okay, thanks. It's a very important question, obviously. My question is on data storage: the amount of data we heard about in the presentation from Professor Rizola was on the order of petabytes. Is it anticipated that such sizes of data will also be possible to store in MaRDI? Well, we're not going to store the actual data; we assume that data is already in some repository of some group or institution, and we just import the metadata, the description of the data, so we can find it, because we have all the information that describes the data: what is there, how it is stored, what its properties are, and also the link to the actual data and the information on how to get the data from the source. But we're not going to copy the data to our site. Okay, thanks. And how is the metadata unified?
I mean, when you say there are already these repositories, they might have different kinds of metadata; how is it then realized that you have this top level and get all the information? Or is it different if I look for something, find it, and then go there and it's different in every place; how is it managed? Well, some of the repositories, the ones that were on that one slide, are also part of the MaRDI project, so they're also working on providing or using metadata that interacts or links. On the other hand, there are some universal languages that can be used to describe metadata, and also translators, so there's some flexibility. One part is working with partners who can maybe be coerced into shaping the data such that we can use it, and for the others, which we cannot, we just have to accept that and try to use the most universal way to describe it. Yeah, a major problem when the data is stored elsewhere, distributed among all sorts of repositories, is link rot. So suppose sometime in the 2030s MaRDI is live, it is well accepted, people are using it, and I'm looking for an interesting project someone did a decade ago; the data is supposed to be on some university servers, and in the meantime the IT infrastructure over there has changed and the data no longer exists, or is somewhere else. Are there strategies in place, or planned, for that sort of maintenance: to monitor disappearing data, automatically or manually, and to chase down changing locations, that sort of thing? Yeah, there are no actual details yet, but we are going to implement a data life-cycle management, so we are aware of that, and we are going to track whether the data is still available or not. That's part of the plan. Also, sometimes it's clear from the beginning: the publishers, for example, can guarantee how long the service is going to be available; in other cases it's not. But the idea is that at least the metadata is always there and exists even if the data has been deleted. Of course this has to be done in a controlled manner, not just that the data disappears without people knowing. But yeah, that's going to be handled. Okay, let's thank the speaker again.
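A minimal sketch of the kind of life-cycle check discussed in the answer above: periodically walk over the external links stored with the metadata and flag targets that no longer resolve. The record identifiers and URLs are of course hypothetical:

import requests

# Walk over external links kept in the metadata and flag possible link rot.
# The record IDs and URLs below are made up for illustration.
metadata_links = {
    "dataset-001": "https://example.org/repo/dataset-001",
    "dataset-002": "https://example.org/old-server/dataset-002",
}

for record_id, url in metadata_links.items():
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
        reachable = status < 400
    except requests.RequestException:
        reachable = False
    print(f"{record_id}: "
          f"{'reachable' if reachable else 'POSSIBLE LINK ROT, flag for curation'}")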
|
Mathematical research data (MRD), arising in many scientific fields, encompass widely different types of data and can be vast and complex, e.g., numerical data sets, mathematical expressions, algorithms, etc. The NFDI MRD Initiative (MaRDI) aims to define standards for MRD, to design verifiable workflows and to provide services to the scientific community. The services will be bundled in a web portal, allowing researchers to easily find and access mathematical research data, knowledge and services. In addition, the portal will offer storage capacities and host services for workflow and algorithm execution. At the core of the portal lies a mathematical knowledge graph which organizes and interconnects data from multiple sources. The main contribution of the portal is providing a unified entry point to access scattered and unconnected data. This talk gives an overview of the current status and planned features of the portal, and its value for the mathematical community.
|
10.5446/57500 (DOI)
|
And now we come to the last presentation of today. This is a presentation by Johannes Stegmüller from FIZ Karlsruhe. Johannes Stegmüller is also a PhD student at the University of Göttingen, and he will be talking today about an overview and discussion of the progress of math search technologies. Okay, hello, my name is Johannes Stegmüller. I'm from FIZ Karlsruhe, as was said; thanks for the introduction. This presentation will be about math search, and I also plan to have a discussion afterwards about math search technologies. I will explain during the presentation what math search is. I'm also part of the MaRDI portal team, which was mentioned before by David, so I will also show a bit about the portal. So much for the basics; I hope I can speak to this slide, yeah. Some presentations today have gone to space; now we go back to ancient Egypt. What we see here are symbols, and most people here probably won't know what these symbols mean, or maybe somebody would have guessed that a frog represents 100,000 or that this plant represents 1,000, like the one on the left. Imagine being in Egypt: how did people know that a symbol represents that? And 500 years later, how did people know what these symbols were? In the 21st century, math search was invented. This is actually not related to that, or not completely related, but with math search we can have symbolic representations of formulae and we can retrieve contextual information; I will show what that is about on the next slide. The content of this presentation is an introduction to the topic of (oh, it advances automatically) an introduction to the topic of math search, then math search methods and applications, then a future outlook on what happens to math search in the future, and then maybe a discussion involving the audience if there's some time left. So how does math search work? Now the basics: it retrieves documents by specifying a formula or mathematical representation. The representation might occur, in some format, in the text or the metadata of the documents; you will see what that means. I have an example here from zbMATH Open; some of you might know it, since you're mathematicians, as a reviewing service for mathematical documents. You see here the search interface of zbMATH Open, and I hope you can all read it: in the search field there's an equation, the binomial coefficient. Typing that equation and pressing the search button gives us some documents in that service. These are documents which are also available as PDFs, and in one of them, I think it's the first search hit, the binomial coefficient appears, written in that document. So that works like a regular search, as you all probably know. The difference is that in the search field a formula is denoted in its mathematical form: a formula can be written directly as text, but it can also be written in LaTeX, for example, or in other formats we will see, and this is the specialty of math search. So that's what happens on the so-called front end, if you're from IT. And what happens in the background, what does the math search engine actually compute? You have a set of indexed documents; this is the pool which is searched by the search engine, and it can consist of abstracts of scientific publications.
It can also be the full text of scientific publications, or website content; for example, some of you might know Mathematics Stack Exchange or MathOverflow, and that could also be indexed. And, in an abstract way, in our application example from the portal, it could also be entities in a knowledge graph. From this pool of documents a so-called index is created. That's the search index; it's a tree-based structure. Those of you who have worked with search engines probably know that an index is always created for search engines to index documents, and it's practically very similar here. So it's often a tree-like data structure, and often there's another index structure that represents the associated text; that's not so important at the moment. But yeah, we have that search index, that's what matters for now. Then we have a query. That's a math expression, and the expression is denoted in LaTeX, for example; here you also see the binomial coefficient written in LaTeX, in the parentheses. This is parsed by the math search engine and the index is queried, so we have the index again, and what comes back is a ranked list of search hits. Like with Google, the most similar result should be the first result, then the next most similar as the second result, and so on; except that the similarity is now about math equations and not just about text. Now I have to expand on this a bit, since there are some data formats that have to be known. You see this equation here, a new one, for example: it can be written in LaTeX. That's the LaTeX representation, and there can be multiple representations, for example a x squared, b x, and so on. That's LaTeX; probably everybody who has written a scientific publication knows that typesetting system. Another format that I have to explain before going further is called MathML, and it has two variants: Presentation MathML and Content MathML. The first one focuses on the visual representation. It's somewhat like XML, and it marks up content; for example, the times symbol in "a times x" is marked as invisible. So this format gives information on how the formula should be represented visually. The other variant, which also has its own header, is Content MathML. This is the symbolic representation of a formula. It is hard to read for humans, I would say; humans would usually rather read LaTeX or something like that. So this format also exists, but it is for the machines, and it is a hierarchical representation of the equation here; I will come back to it in a minute. The presentation will also be publicly available, so you don't have to memorize every detail of each table. So: LaTeX, invented in 1985, is for publishing in print, and MathML, from 1999, is for publishing on the web and in applications. There are these two variants of MathML, and MathML and LaTeX are both often used in math search technologies; I will come to that. I will now explain how the index I mentioned is actually created. We see an example here; there are multiple search engines and they all have variations of this, but I'm using MathWebSearch, which is also used in zbMATH Open, to explain the indexing, because it's very basic.
And I have also simplified it a lot, because it would take too much time to explain in detail. Yeah, a tree is formed. We have an example here: we would index a document, or multiple documents, containing six formulae, written somewhere in them. These formulae are visible at the leaf nodes of the tree, here at the bottom: there would be a function of x and x, a function of epsilon and epsilon, and so on. And they all share the same property: they can be represented by the function at the top of the tree, the root, which is a function of x1 and x2. So if you substitute x1 with an x and substitute x2 also with an x, you arrive at the bottom-left leaf of the tree. This would be one entry indexed in this index; what you see is a visualization of a data structure, so it's just drawn here, but in the math search engine it is an actual data structure. If, for example, a search query comes in, denoted in Content MathML, then, since MathML is a hierarchical representation, it can be traversed in an order that corresponds to the order of the substitution tree. That's very helpful and can lead to fast results from the search engines. And if there is a search hit, the document is linked in another data structure, which is shown here. But yeah, it doesn't have to be understood in detail to follow this presentation, I think; it's just for the overview. So, there are math search engines. Math search engines, as already mentioned, are the basic technologies which contain the algorithms for math search; they also define, for example, how the indexes are created. With math search engines, web services like these can be realized, which then provide math search in practice; they are the things running in the background, basically. I have here a table which is quite long but also quite comprehensive, covering recent math search technologies; it will also be in the published slides, because it's a lot of content. We see an overview here of engines which are publicly available. This has a particular focus; there are more technologies than that, but these can be used for public infrastructure, and I think, apart from one of these entries, they don't cost anything for public infrastructure in terms of licensing: they all have licenses like MIT, GPL, or something like that. Most of them have public repositories, there are evaluations for them, more on that in a minute, and there are some example applications. Lots of content, which is why I skip over it here; you can look it up in detail later. So there are these engines, some of them publicly available, and most of them have been evaluated, which is why I already have here an overview of the math search evaluations. Math search evaluations started in 2013 with the NTCIR task, which is about math retrieval. It evaluates the precision, so it practically checks the relevance of the fetched results in the ranked list and how the different engines perform. There have been several iterations, and you can see that scientific publications were used as data; in later years, Wikipedia articles have also been used for similar tasks. These days, questions and answers from Mathematics Stack Exchange are also used as sources. So these are web content, but web content, as we saw, can also be indexed.
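To illustrate how such retrieval evaluations score a ranked result list, here is a small Python sketch of precision at k and average precision. The relevance judgments are invented; real tasks like the ones mentioned above use pooled human assessments:

# Tiny illustration of how ranked math-retrieval results are typically scored.
relevant = {"doc1", "doc4", "doc7"}                  # documents judged relevant (invented)
ranking = ["doc1", "doc3", "doc4", "doc2", "doc7"]   # ranked list returned by an engine (invented)

def precision_at_k(ranking, relevant, k):
    # Fraction of the top-k results that are relevant.
    return sum(1 for d in ranking[:k] if d in relevant) / k

def average_precision(ranking, relevant):
    # Mean of precision@k over the ranks k at which relevant documents appear.
    hits, total = 0, 0.0
    for k, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            total += hits / k
    return total / len(relevant) if relevant else 0.0

print(precision_at_k(ranking, relevant, 3))              # -> 0.666...
print(round(average_precision(ranking, relevant), 3))    # -> 0.756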
So that's it about the evaluations; they are all linked in the references, in case you need to check how precisely a math search system works, for example. Yeah. Now we have seen lots of tables, lots of theoretical content, so I will go to a demo. We, as MaRDI task area five, have already implemented a math search system in our portal. This works on the basis of a MediaWiki extension created by Moritz here, in 2012 or 2013, and it was modified to run on our MediaWiki. I hope I have the presentation here; oh yeah, it's already bookmarked. Great. So, for example, this is the MaRDI portal. And we can here, specifying the basics for now, search the wiki pages in this portal for formula expressions matching that LaTeX pattern. Pressing search brings us to a result: there's a project page, which I actually created for testing, and it contains the formula. So wiki pages can be found; that's the first step. There are lots of challenges around that, for example automatically creating the index; that's what we're currently working on, so that when new wiki pages are created, the index is recreated as well, and so on. But yeah, a basic example is already working. And this will in future also result in a MediaWiki extension; such an extension can then most probably be used in any MediaWiki, and that's also our goal, to make the work reusable for everybody. So, yeah, we see the content, and we're making progress. Now, since math search engines are quite abstract: math search is not only about the engines running in the background, it's also about the usability of the front ends. How can users actually perform such a search? I will show some systems which are already publicly available and what their features are. For example, here is one math search engine, and it has a search field at the top where equations and context can be specified. That's the usual thing, that text and equations are both possible. And you can see here a so-called symbol palette. With LaTeX, as most of you know, it's not always the case that you have every equation in mind in LaTeX, so this symbol palette helps to assemble the input and define it in LaTeX; it's a direct visual representation of the input, and it's also categorized, as you see. That's one improvement, I would say, for the front end of a math search engine. Another one, this is from MathDeck; this is actually not entirely a math search engine, but it has a very, very interesting feature. I was also asked before whether it's possible to go from an equation, for example the equation for a sphere, to a textual representation. This is done here with so-called wiki cards: you have Wikidata, for example, and you fetch the labels of Wikidata entries to learn that the context of this formula is, for example, "sphere". This can be very helpful for accessibility features: for example, with speech-to-text you could say, okay, I want to get a formula for a sphere, and then fetch it, or you could get a textual description of a formula you just typed in. So this indexing of Wikidata is quite interesting as a future technology for math search. Another one, which might sound trivial but isn't, especially regarding licensing: you see here Approach Zero, as that math search engine is called, which is also publicly available.
Another feature, which may sound trivial but is not, especially because of licensing, can be seen in Approach Zero, a math search engine that is also publicly available. You give it an equation, and in the results it already highlights the matches. That is very helpful for finding the content, because pages can contain a lot of material. Some math search engines do not show this, and I think that is not a question of technical possibility but of the legal possibility of showing previews of indexed content. We also have zbMATH Open with its combined search, where text search and formula search are used together. And here are just some ideas we might pursue in the future: we have the MaRDI portal project and we want to gather features. Federated search across a multitude of data storages would be one idea; then a formula index and database, syntax verification of the LaTeX input, more accessibility, improving the symbol palette, and semantic concept annotation, which is the Wikidata idea mentioned before. Everything is possible; that is just a collection of ideas, and I would like to start a short discussion, since we still have time. For us it would be very interesting to know, since most of you have a mathematics background while we on the MaRDI portal side are mostly IT people, and the users in the end will be mathematicians and researchers: who are the users of math search? Has anybody here used a math search engine, or do you use one currently? Or is there an interesting scenario for using math search in your field of mathematics? A show of hands suggests people are not using math search that much. Had you heard of math search before? Okay. Then a question from the audience: I am not a mathematician, but once I have an equation, maybe more complicated than what you showed, maybe a system of partial differential equations, is it also planned to find software related to it, for when you are interested in solving these equations? You mean software which somehow contains or treats these equations? Yes, software for solving them; is that planned for a full math search, or is it just documents such as PDFs or papers? It could be arbitrary data, I would say. Every type of data that comes as a collection could be indexed with a math search system. There is also a service by FIZ Karlsruhe called swMATH, for example, which really indexes software. So finding solution software via that kind of approach would probably be the route. Okay, thank you. Next question: I was wondering about ambiguities in things like variable names. If you have the same expression written with different symbols, it should still be found, as should simple algebraic manipulations such as factoring out or collecting terms. Is this accounted for? If I write down an expression and somewhere in a paper it appears in a different notation, will it be found? Yes. LaTeX transformations, since you usually use LaTeX as the input form and it has various ways of expressing the same thing, usually work quite well. Genuine mathematical transformations of equations, however, can be quite complex. Simple substitutions of variables can practically be recognized, for example if another variable name is used, but if the transformations are very complex, that is still an ongoing research topic.
So if there would be a very complex transformation or transformation rule for equation, this is not always possible to find them the similar, the most similar formula. But that's complexity that's already in these systems. Simple substitution is possible. I actually remember exam at university where I solved an equation just by looking up in the formula book, the one had the same letters. Okay, yeah, and then there's another transformation. I mean, what would be the usefulness actually searching for mathematical expressions? There's a syntax and the semantics and they're not necessarily easy to capture, right? In an index. Yeah, so what would be the use case of a mathematical math search engine? Is that a question? Yes, I mean, you might have the same equation, meaning different things in different contexts, right? I mean, the math is limited, but... I mean, one use case would be you have an equation and you don't know like what the context is. So you're fetching the context to that equation. But where would you get that equation from context three? I mean, you don't do research in a vacuum, right? You don't have an equation that falls out of the sky and then you want to find some context for that equation, right? You have modeling assumptions and model assumptions. And then some theory that's existing and that you extend, but I don't see a use case for a search engine, for a mathematical expression, really. So yeah, you have a mathematical expression and maybe you just have to expression and want to search for papers, which are also containing expressions to just say, for example, for example, you as a mathematician, you probably change the equation. And then you come to a new equation. And then at that point, you want to search the equation, that if does this equation already exist in scientific publications? So that would be a use case probably. But I mean, there are typically labels attached to the equations, right? I mean, you might encounter, I don't know, Schrodinger type equation or Fokker-Planck type equation. And then you know the context of this partial type of partial differential equation and you know that there's a body of literature on how to solve them, right? Which methods exist for solving them and so on. So I don't see where the symbolic representation really helps to, of the, really the equation itself. I don't, I can't think of an equation that doesn't have a context or label already attached to it and then searching for the label is more probably more likely to produce relevant. It might be easier to type in the equation than in some cases. If you are very specific, well, probably there's a variation if you have like one type of equations. And then there's a variation of that equation probably. So, which is not explicitly hasn't a label. And that, that might be use case. Okay, I'd be happy to actually look deeper in this and then be convinced of really, yeah. Okay. Thanks for question. I would like to ask you for a special example if you search for the formula a times B, what will you get will you get all the papers with a times P, or all the papers with X times y, or M times and or what will you get. You will get the papers first with a times B and then you will probably as lower rank results you will get also variations substitutions of the variables. And will you also get the papers with B times a. Yeah. This would work. Yeah. Okay. You would get first explicit notation you have because it's the most similar word. I'm next. Okay. 
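The ranking behaviour just discussed, where an exact notational match comes first and a match up to consistent renaming of variables comes second, can be sketched in a few lines. The tokenization, the "is a variable" test, and the scoring are illustrative assumptions, not how any of the engines in the table actually score results.

```python
def rename_canonically(tokens):
    """Map variable tokens to v1, v2, ... in order of first appearance.
    Note that this crude scheme also treats 'b*a' like 'a*b', which happens
    to be fine for commutative operators but would over-match otherwise."""
    mapping, out = {}, []
    for t in tokens:
        if t.isalpha():                           # crude test for "this is a variable"
            mapping.setdefault(t, 'v%d' % (len(mapping) + 1))
            out.append(mapping[t])
        else:
            out.append(t)
    return tuple(out)

def score(query_tokens, candidate_tokens):
    if query_tokens == candidate_tokens:
        return 2                                  # exact notation
    if rename_canonically(query_tokens) == rename_canonically(candidate_tokens):
        return 1                                  # same up to variable renaming
    return 0

corpus = {'paper-A': ('a', '*', 'b'), 'paper-B': ('x', '*', 'y'), 'paper-C': ('m', '+', 'n')}
query = ('a', '*', 'b')
ranked = sorted(corpus, key=lambda p: score(query, corpus[p]), reverse=True)
print(ranked)                                     # ['paper-A', 'paper-B', 'paper-C']
```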
What about using machine learning to find equations that use different notations? Good question. There is a system currently being worked on, listed in the table of math search engines, which is based on embeddings: the formulae are stored as embeddings, and at the moment this works in linear combination with traditional search engines, which really improves the results. So embeddings are an ongoing research topic; the system is called Tangent, and its variations are currently being evaluated in the ARQMath task. Next question: we have all learned that when writing a mathematical paper we should avoid formulae in the abstract because of searchability issues. Sometimes, though, and I remember such situations, one needs to put a formula in the abstract, most often when a word, name, or label for the object is still missing. In that case I would search for a combination of text and formula, for example "nonlinear Schrödinger equation with nonlinearity of the form" followed by a formula. Will that work? Yes, that is possible; it is quite standard in math search engines. Let me show you the table again: the engines listed here all support combined keyword and formula queries, it is all listed in this column, so they all have that feature, and there can also be Boolean operators between the keyword and the math expression. Maybe a silly question, but I will ask it anyway: sometimes you are trying to find an equation in papers, for example, in my field, for fiber orientation, and sometimes there is an error in the equation in the paper. Is there a way to type in the equation as you find it, with the error, and still get results for the closest match, or will it just return nothing? If it is a big error, then probably not; but the search works by similarity, so it does not return only exact results, it can return very similar results as well. So it is possible that it finds the equation even with errors in it. Okay, thanks. I think we have reached the end of the allotted time. Thank you very much, and let us thank the speaker again.
|
The number of scientific publications containing mathematical expressions is immense and constantly growing. For example, zbMATH Open, the world's longest-running abstracting and review service for mathematical content, indexes over 160 million formulas. Mathematical formula search is a core technology for finding scientific documents where formulas are defined as input. Since the introduction of many mathematical search systems in the NTCIR Math-Task series from 2013, there have been further advances and implementations of formula search. In the first 15 minutes of our talk, we want to provide an overview of current methods of formula search and related applications. According to the FAIR principles, we will emphasize aspects of reusability and accessibility here. Then, for the next 10 minutes, we intend to reach out to the audience and have a lively discussion on the planned efforts and experiences of the community with formula search.
|
10.5446/57501 (DOI)
|
We take as the time axis a discrete set, the set of positive integers. Let W be the signal space, say the set of inputs and outputs, and W to the power of the integers the set of all time series you can form in that space. The behavior, this calligraphic B, is simply a subset of all trajectories in that set of time series. That is the definition of a discrete-time dynamical system. Very abstract, but believe it or not, despite being so abstract, people can do useful things with it, or at least they can do some mathematics with it. We do not need the full generality; we will work with linear and time-invariant systems. A dynamical system in this behavioral context is linear if the behavior, the set of all trajectories, is a subspace of the signal space, which makes sense, and it is time-invariant if the behavior is invariant under a push-forward operator sigma, the shift operator, which takes a time series w_t to w_{t+1}. So in brief, what is a linear dynamical system? It is just a shift-invariant subspace of trajectories. Graphically, a linear dynamical system relating inputs u and outputs y is just a subspace. And that very much resonates with how computer science people in machine learning see dynamical systems: they would not think about vector fields, they would not think about causality, they would just take the data and try to find some low-dimensional feature in it, such as a subspace. So you do not need to keep all the notation in memory; just remember the following: a linear time-invariant system, LTI, is a shift-invariant subspace of trajectory space. That is what we work with. Now this subspace structure is useful because it is model-free; you do not need any model to define it, and maybe there is a way to work with it without ever writing down a model. And there is, using matrix time series, which are very popular in subspace methods, say in system identification or signal processing. It goes as follows. You have a black box; you know the black box is LTI. You feed in some inputs u, you get some outputs y, in discrete time. If it is LTI, you know these inputs and outputs must satisfy some relationship, and there are many ways to write it down. One normal form in time series analysis is what is called an ARX model, autoregressive with exogenous inputs: there are constant coefficients b_0, a_0, b_1, a_1, and so on, relating the signals u, y and their time shifts. This is standard modeling for a discrete-time linear system. In behavioral system theory it is called a kernel representation, and let me tell you where the kernel comes from. If all you have is data, what you could do is run experiments and put all the experiments into a matrix. The way you should read this trajectory matrix H is: the first column is the first experiment, the second column is the second experiment, and so on, and every column is sorted according to time; every column is one experiment. As you can see, if you were to left-multiply this matrix with the vector of coefficients b_0, a_0, b_1, a_1, and so on, you would get zero. So the vector of coefficients of your ground-truth model lies in the left null space of this trajectory matrix. Of course, it is then very tempting to ask whether this implication also goes the other way around.
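As a sanity check on the kernel idea, that the ARX coefficient vector lies in the left null space of the trajectory matrix, here is a small numerical sketch. The specific first-order system, the column layout of H, and the noise-free data are assumptions chosen only to keep the example minimal.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 0.8, 0.5                      # assumed ground-truth ARX model: y[t+1] = a*y[t] + b*u[t]

def experiment(T=2):
    """One noise-free experiment of length T, stacked column-wise as [u0, y0, u1, y1, ...]."""
    u = rng.standard_normal(T)
    y = np.zeros(T)
    y[0] = rng.standard_normal()
    for t in range(T - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return np.column_stack([u, y]).reshape(-1)   # interleave u_t, y_t per time step

# Trajectory matrix: one experiment per column.
H = np.column_stack([experiment() for _ in range(20)])

# Kernel / ARX representation: -b*u0 - a*y0 + 0*u1 + 1*y1 = 0 for every column.
r = np.array([-b, -a, 0.0, 1.0])
print(np.max(np.abs(r @ H)))          # essentially zero: r lies in the left null space of H
```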
So: if you have data, can you uniquely convert back to an LTI model? This turns out to be true under assumptions; you cannot always conclude it. Obviously, all of this only works for clean data, with no noise on it; we will later think about how to robustify it. But let us first understand whether the implication goes both ways. That is what became known as the fundamental lemma by Jan Willems, which he conceived in 2005; we recently made some upgrades to it. The story is as follows. You have a black box, LTI. You give it an input u_i in R^m and it returns an output y_i in R^p. You also give yourself some complexity specification, since the black box cannot be, say, infinite-dimensional: you assume it has a finite state order n and a finite lag l. Often the lag equals n, but if you have multiple outputs, the lag is how many steps you have to go backwards in time to recover the hidden initial condition, and with many outputs it is less than n. Now take the behavioral perspective and work with the set of all trajectories. If you had a model, you could parameterize this set: the set of all length-T trajectories would be the set of all u in R^{mT} and y in R^{pT} such that there exists a state x and everything satisfies a linear state-space model, that is, some A, B, C, D matrices relating inputs, outputs and state. You could write this down and parameterize the set if you had a model. Now it is very tempting to conjecture the following: the set of all trajectories is just the set of linear combinations of all the experiments you have seen, in other words the column space of the data matrix. Think of it in the context of robotics: if you know how to lift the arm and how to turn the arm sideways, a linear combination gets it up there. So maybe you can span the set of all trajectories by linearly combining existing ones. That would be very nice, and it turns out to be true under an assumption: the trajectory matrix must have a particular rank, which corresponds to the finite complexity of the LTI system. In particular it must have rank m times T, the number of inputs times the length of the time series you want to parameterize, plus n, the number of hidden states, and this has to hold for all T sufficiently large, larger than the lag of the system. So if you forget everything else about this talk, maybe this is the one result to keep in memory: instead of working with models, at least in the linear context, you can just work with raw time series, and there is a one-to-one correspondence. The set of all trajectories is parameterized by linear combinations of experiments, provided the trajectory matrix has this particular rank. In words: every trajectory you will ever want to construct, you can construct from finitely many past trajectories. Of course, you can also use other data structures; you do not have to pile everything up column by column. What people often do is use what are called Hankel matrices, where you essentially condition on shift-invariance, but that is not so relevant today. So what is the novelty here, and why did Jan Willems call it the fundamental lemma? It turns out that people in all sorts of disciplines had more or less been relying on that equivalence without ever formalizing it. I told you that in robotics these are called motion primitives; a small numerical illustration of this spanning idea follows below.
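Here is that numerical illustration, under assumed toy dynamics: we build a Hankel-style data matrix from one long experiment of a small LTI system, check the rank condition mT + n, and then reproduce an independently generated trajectory as a linear combination H g of the columns. The system matrices, horizons, and the least-squares solve are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Assumed toy LTI system with n = 2 states, m = 1 input, p = 1 output (D = 0).
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])
m, n = 1, 2

def simulate(u, x0):
    x, y = x0.copy(), []
    for ut in u:
        y.append(C @ x)
        x = A @ x + B * ut
    return np.array(y)

T = 6                                    # length of the trajectories to parameterize
u_data = rng.standard_normal(200)        # one long, sufficiently rich experiment
y_data = simulate(u_data, rng.standard_normal(n))

def hankel(w, T):
    return np.column_stack([w[i:i + T] for i in range(len(w) - T + 1)])

H = np.vstack([hankel(u_data, T), hankel(y_data, T)])    # stacked input/output blocks
print(np.linalg.matrix_rank(H), m * T + n)               # both 8: the rank condition holds

# A freshly generated trajectory of the same system lies in the column space of H:
u_new = rng.standard_normal(T)
y_new = simulate(u_new, rng.standard_normal(n))
w_new = np.concatenate([u_new, y_new])
g, *_ = np.linalg.lstsq(H, w_new, rcond=None)
print(np.max(np.abs(H @ g - w_new)))                     # essentially zero: w_new = H g
```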
And you can linearly combine them to get a new one. Essentially, it's combining basis function if you want to. Fluids, they call this DMD, dynamic mode decomposition. And in many other disciplines, they sort of made these assumptions. You can just construct basis functions for the behavior of an input output system. And that's why he called it the fundamental lemma. Our contribution was essentially making his sufficient result if and only if we're moving some assumption, but it's really his result. I don't want to take credit for that. And there's a lot of blooming literature. And there's about two to three, sometimes more archive papers per week popping up on this problem nowadays. This result more or less got forgotten for 15 years until 2020. We dug it up again and then people realized, oh, that's so cute and nice. And now everybody jumps and see, what can we do with this equivalence that we can work without models? Let's first see what we can do with that. Namely, the way we use it will be for model predictive control. And so the acronym is MPC. And let me tell you what this is all about. Again, on the upper right, I have for you the block diagram just to have the notation, because not all of here from control. So I thought it's useful to review the notation. So we have an ABCD, state space model, state X and put you output Y. And MPC is essentially a brute force computational control method. No scientific elegance whatsoever, but it works like hell. And what is all about? It says the control algorithm just solves an optimization problem. We get new measurements, we solve the problem again, get new measurements over the gains. We try to solve the message problem as fast as we can in closed loop. So what's this optimization problem? Essentially it's a discrete time variational problem. It says we want to minimize some sort of L2 norm of inputs and outputs, where the output Y is expected to track a reference R, that we give it, right? And input U should be minimal, so you want to have an economic controller. And we optimize over multiple time step from now on to some time T future. And of course, if you want to predict the future, you need to have a model. So as a constraint and optimization, we have the model that just iterated forward. So the K runs from one to the future to predict forward. And of course you can impose input and output constraints on the way. The catch about this formulation, even though you find that many papers is, again, nobody has the state, the X is typically not known. So you also need to build in some sort of estimation. And one way to estimate the state is again, just take your model and just write again the model equation. And the only difference now is that this index K runs backward in time. So now it runs from zero, minus one, and so on to minus some time T e need. And that allows us to uniquely figure out the initial condition, just propagating model backwards in time. So we resolve an optimal control problem. We simulate forward, we simulate backward and the constraints to get the initial condition in the future. And this looks very brute force, and it is, but people worked out all sorts of system theory, like you can actually prove close loop stability, robustness and all these things. So lots of clever mathematicians have worked on that. And if you have a deterministic linear time invariant model or system, and even exact model of that system, this is sort of the gold standard. 
This is what essentially go into industry, the 10% of problems that cannot solve for PID, they're solved with that method. And it has all the things you want, optimal, safe, robust, and so on. The catch is you need deterministic, linear, time invariant, and a model, these four things. And now I want to get rid of all four of them. Let's first get rid of the model. And I already showed you how to get rid of the model. And instead of having the X, U, and Y satisfying the model equations, we can just say they have to live in the image of that trajectory matrix. And so here's the data-driven version. The problem is very much the same as you've seen on the previous slide. I only replaced the model constraint by saying, a trajectory has to live in that image of that trajectory matrix H. So there exists a vector G, so that H times G is the trajectory. What is this desired trajectory? It consists of two parts. The U and Y are the parts you want to create forward. That's what we optimize over. And then it also has the U, E, and Y, E, which are just the recent most measurements which need to initialize the trajectory. So the lines corresponding to U, E, and Y, E is essentially running the model back. That's the time to figure out the initial condition. And it's literally used to the fact any trajectory has to live in the image of that trajectory matrix. And so there's a vector G, so that this is true. And the way you should read this, this U, E, and Y, E updated in real time. So every time we solve that problem, we get new measurements, where the trajectory matrix is fixed. We have historical data to do that. You could think about adapting this online, but we will not do this today. And it's not hard to show that this is perfectly equivalent to the model-based predictive control in the deterministic LTI case. But that's just a pen and paper result. Turns out it doesn't work if you throw it on a real experimental system. And let me quickly illustrate to you why. It's because think about that trajectory matrix is your motion primitives. It's your basis function, right? What the SAS with the vector G linearly combine the old basis function to synthesize a new optimal control. But if your old basis functions were noisy, then it wouldn't make sense, right? Likewise, if the system was non-linear, then taking linear combinations of all trajectories would not be compatible with the dynamics either, right? So you need to somehow robustify this thing to account for both noise and non-linearity. I would have had a video, but it only plays on my computer. Let me tell you what the video is about. So we were throwing this in some master projects on some robotics platform, like a quadcopter. And we had first flying it around by hand. And then by flying around by hand, we were building this H matrix of trajectories. And then it could fly by itself and do all sorts of big A's, loops, and so on. So it just to convince you that it works. But now you have to believe me without the video. I hope you do. More interestingly for this audience is how do we make it work? Because as I said, this here is more of a theoretic result. Yes, you can parameterize the predictor, the set of trajectories using a trajectory matrix. But it's not robust, so how do you make it work? So there's two things you need to account for, at least the first glance. One is noisy real-time measurements. That is this y-ini, the measurement you collect, is typically noisy. That is, this equation will not be satisfied. It will not be feasible. 
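Before turning to the robustification discussed next, here is a compact sketch of the nominal data-driven predictive control problem described above, written with the cvxpy modeling library. The partitioning of the data matrix into past and future blocks, the horizons, weights, and constraint bounds are assumptions for illustration, and the data blocks are random placeholders standing in for real recorded trajectories (with real data they would be the row blocks of a Hankel matrix as in the earlier sketch).

```python
import numpy as np
import cvxpy as cp

# Assumed dimensions: 1 input, 1 output, T_ini past steps, T_f future steps.
T_ini, T_f, n_cols = 4, 10, 60
rng = np.random.default_rng(7)
Up, Yp = rng.standard_normal((T_ini, n_cols)), rng.standard_normal((T_ini, n_cols))
Uf, Yf = rng.standard_normal((T_f, n_cols)), rng.standard_normal((T_f, n_cols))

u_ini = np.zeros(T_ini)          # most recent measured inputs
y_ini = np.zeros(T_ini)          # most recent measured outputs
r = np.ones(T_f)                 # output reference to track

g = cp.Variable(n_cols)
u = cp.Variable(T_f)
y = cp.Variable(T_f)

objective = cp.Minimize(cp.sum_squares(y - r) + 0.1 * cp.sum_squares(u))
constraints = [
    Up @ g == u_ini, Yp @ g == y_ini,    # fix the hidden initial condition from recent data
    Uf @ g == u, Yf @ g == y,            # the predicted future must itself be a trajectory
    cp.abs(u) <= 2.0,                    # input constraints
]
cp.Problem(objective, constraints).solve()
print(u.value[:3])                       # first part of the planned input sequence
```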
The y_ini and the other variables u and y will then not live in the image of a trajectory matrix built from clean data. That is quite common in estimation: you never solve estimation problems exactly as equality constraints, you solve them in a least-squares fashion, as in Kalman filtering and the like. So how do we put some least-squares estimation in here? We can just add a slack variable sigma_ini to the optimization problem and penalize the slack, essentially finding a least-squares solution to the estimation equation. And it turns out that if you make the multiplier lambda_y in front of the slack variable in the objective sufficiently large, this does the right thing, in the sense that sigma_ini takes the value zero if the equation is feasible and otherwise does not. Let us run some Monte Carlo simulations on our quadcopter: as you increase the multiplier lambda_y in front of the slack variable for the estimation, the realized closed-loop cost goes down, and the number of constraint violations also goes down, until both hit zero to numerical precision. So that works very well; we essentially do least-squares estimation here. But that is only part of the story. What you should really worry about is that the data you collected offline, the trajectory matrix itself, is very likely corrupted by noise. You will never collect clean data unless you have a perfect simulator, which is not the case here. So how do you robustify against this? This setting is called errors-in-variables, or a multiplicative-noise setting: the H itself is now noisy. There are many things you can try, and it turns out few of them work, or at least we only got a few of them to work. What worked in the end was the following. In the clean-data case the matrix H has a large null space, so we should select a particular solution, and we select a sparse one: we penalize the one-norm of g, that is, we look for a sparse g. Why does this make sense? Let me first give you the intuition before presenting the math on the next slide. The intuition is that the trajectory matrix consists of your basis functions, your motion primitives; every column is one trajectory, and you know the matrix is of low rank. So you can synthesize any new optimal trajectory by taking a linear combination of a small number of trajectories, because the matrix is low rank. The L1 penalty promotes sparsity of g and acts as a surrogate for low rank of the H matrix: you construct a low-order basis for your future behavior. I will make this precise on the next slide. Just some simulations first: as you vary the multiplier lambda_g in front of the one-norm in the objective, the cost nicely goes down and the constraint violations nicely go down to zero, until you over-regularize; that is, when you make lambda_g too large, everything goes up again. So you need to tune this carefully, which obviously is one of the bottlenecks of any data-driven method: the tuning of regularizers.
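Relative to the nominal sketch above, the robustified variant described here changes only the objective and the output-initialization constraint: a slack absorbing measurement noise on y_ini and a one-norm penalty on g. A minimal cvxpy delta, reusing the names from the previous sketch, with lambda_y and lambda_g as assumed tuning parameters:

```python
import cvxpy as cp

# Continuing the earlier DeePC sketch (same Up, Yp, Uf, Yf, u_ini, y_ini, r, g, u, y, T_ini):
sigma_ini = cp.Variable(T_ini)              # slack absorbing noise on the measured y_ini
lambda_y, lambda_g = 1e4, 10.0              # assumed regularization weights, to be tuned

objective = cp.Minimize(
    cp.sum_squares(y - r) + 0.1 * cp.sum_squares(u)
    + lambda_y * cp.sum_squares(sigma_ini)  # least-squares estimation of the initial condition
    + lambda_g * cp.norm1(g)                # sparse g as a surrogate for low rank of the data matrix
)
constraints = [
    Up @ g == u_ini,
    Yp @ g == y_ini + sigma_ini,            # relaxed output-initialization constraint
    Uf @ g == u, Yf @ g == y,
    cp.abs(u) <= 2.0,
]
cp.Problem(objective, constraints).solve()
```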
Now let me give you some math background: why is taking a sparse basis of trajectories the right thing to do? I will motivate it by means of relaxations of bi-level optimization problems. Think of the following two-level problem. The outer problem is just the one we have seen, optimal control: minimizing the control cost subject to u and y living in the image of the trajectory matrix. And the trajectory matrix depends on the data you have seen, which is noisy, so you want to pre-process that data. The pre-processing is the inner problem, the lower lines: u hat and y hat are approximations of the recorded data, and you want to find the best approximation such that the resulting matrix is of low rank. In particular it must have rank m times L plus n, where L is the combined estimation and prediction horizon, T_ini plus the future horizon. So the quintessence is that this matrix should have low rank, and you should do pre-processing in the form of a low-rank approximation. That is the problem you would like to solve, but it is typically hard: with structural constraints, low-rank approximation is NP-hard in general, so you cannot just apply an SVD or the like. We need to simplify it, and let me present a sequence of simplifications, formally relaxations, which only enlarge the feasible set and make the problem easier. The first one: since the matrix has rank mL plus n, I can put a sparsity constraint on g. Without loss of optimality I can add the constraint that g has zero-norm, that is cardinality, at most mL plus n; that changes nothing. Next, I do not like the rank constraint, it is heavily non-convex, so let me just drop it. If I drop the rank constraint, I can solve the inner problem exactly: the minimizer is simply u hat equal to the measured u_d and y hat equal to the measured y_d. Done, plug it in. Next, the sparsity constraint is also NP-hard; the zero-norm is not nice. The convex relaxation of a zero-norm, as you know from compressed sensing, is the one-norm, so I replace it by a one-norm. And finally, I do not like the one-norm as a constraint, so I lift it into the objective, and voila, I get the previous formulation. The summary of all these steps is that you can think of this L1-norm regularization as a relaxation of the quite hard, non-convex pre-processing problem of system identification; essentially, it circumvents the model order selection for you. You can now go through similar arguments for other choices of the inner problem, say a least-squares problem instead of a low-rank approximation. For instance, the least-squares problem would be: say that y has a linear relation, via some matrix K, to u and the past data, and you find that matrix K from data using least squares. The least-squares solution takes the form of a pseudo-inverse, meaning a minimum-norm solution, and it turns out that going through a similar relaxation procedure you find that you should then regularize not with the one-norm of g, but essentially with the component of g that lies in the kernel of the trajectory matrix. So you can think about various inner optimization problems and how they lead to different regularizers.
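To make the low-rank pre-processing step above a little more tangible: with the Hankel structure enforced the approximation is the hard problem just mentioned, but if the structure is dropped, the best rank-r approximation in the Frobenius norm is the truncated SVD (Eckart-Young). The following numpy sketch only illustrates what "low-rank pre-processing" means on a random placeholder matrix; it is not the relaxation actually used in the talk.

```python
import numpy as np

rng = np.random.default_rng(2)
m, L, n = 1, 14, 2
rank_target = m * L + n
# Placeholder "clean" data matrix of rank m*L + n, plus measurement noise.
H_clean = rng.standard_normal((2 * m * L, rank_target)) @ rng.standard_normal((rank_target, 100))
H_noisy = H_clean + 0.05 * rng.standard_normal(H_clean.shape)

U, s, Vt = np.linalg.svd(H_noisy, full_matrices=False)
H_denoised = U[:, :rank_target] @ np.diag(s[:rank_target]) @ Vt[:rank_target, :]

print(np.linalg.matrix_rank(H_noisy), np.linalg.matrix_rank(H_denoised))   # full rank vs. m*L + n
# The truncated matrix is typically closer to the clean data than the raw noisy one:
print(np.linalg.norm(H_noisy - H_clean), np.linalg.norm(H_denoised - H_clean))
```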
It turns out that going from least squares to that particular regularizer in the lower right, the one penalizing the component of g in the kernel of the trajectory matrix, is very nice, because unlike other regularizers it does not have a narrow valley where you need to carefully fine-tune in which region it works and in which it does not. The red curve just says: make the coefficient in front of the regularizer sufficiently large and you always get the best performance, so the tuning effort is minimal. And you can derive various other regularizers this way, by relaxing other inner problems. Now, all of this was linear: everything was conditioned on something being a subspace, on taking linear combinations, on some matrix being of low rank, and so on. But the problems we face in reality are typically not linear, so how do we deal with nonlinear systems? There are many things you can do. For now let us take the simplest approach and say: there is no such thing as a nonlinear system. And that is actually an accurate statement; without loss of generality, the world is linear, at least if you allow yourself to lift the problem to infinite dimensions. There are various techniques, and I am sure you are familiar with some of them, such as Koopman and Carleman lifting or Volterra series and so on, which under certain assumptions allow you to lift a nonlinear vector field exactly to an infinite-dimensional linear or bilinear one. If you are lucky, the lifting is actually finite-dimensional and linear, but there will always be an infinite-dimensional lifting. On the more pragmatic engineering side, you could just say: all I want is to predict trajectories over a finite horizon, and I can approximate those by a high-dimensional linear model. So let us go down that path: every model is linear, at least in high dimensions, and let the regularization figure out, in this high-dimensional space, what the real features are. It turns out this works consistently across all applications. We have an army of master students, and we put this on all sorts of problems: drives, power converters, quadcopters, swinging up a pendulum. We even put it on a full-size excavator, and it would automatically dig mud, fully data-driven, without anyone ever telling it what the model is; I guess it is quite hard to tell an excavator what the model for digging mud is. So it works on all these systems, and as you can see, many of them are heavily nonlinear: if you were to write down the equations, there would be lots of rotation matrices and so on. And we should really understand why this works so well across case studies. I have not fully understood it yet, but let me tell you what I understand so far. Again, it is these regularizations that make it work, and the main abstraction we found to explain this is something called distributional robustness, which is related to optimal transport theory in the sense of Monge and Kantorovich; some of you know that. It goes as follows. On an abstract level, we want to solve an optimization problem in a decision variable x, with x in some feasible space, calligraphic X, and the problem depends on samples xi hat, the measured data. If you solve that problem using the samples and then implement the solution on the real system, you will suffer what is called an out-of-sample error, because you have only seen the samples and the real system may not follow them; you will probably be disappointed that it does not work. So you need to robustify yourself against not knowing the real stochastic process from which the samples were drawn.
So much for the wording; how do we write it down? We write a robust min-max formulation, where we now also maximize over xi. In words, what does this maximization account for? You maximize over all possible stochastic processes, linear, nonlinear, Gaussian, non-Gaussian, stationary, non-stationary, whatever, that could have generated the data. I wrote this in a very stylized form; how do we really turn it into math? We write down an inf-sup problem. Inside, we now have an expected-value cost under an unknown probability distribution Q, where Q is the distribution that generated the samples, the real stochastic process. We do not know Q, but Q must be near the samples, so we take a supremum over all Q that are epsilon-close to the empirical distribution of the samples, P hat. So you maximize over all probability distributions, that is, over all stochastic processes, that could have generated data near your samples. And what do we mean by near? There is a ball of radius epsilon around the samples in a particular metric, and that metric is the Wasserstein, or optimal-transport, or Kantorovich metric, which gives a distance on the space of probability distributions. Maybe look at the figure first. There are two probability distributions, P and P hat, and I want some notion of distance between them. I can construct a coupling pi which, projected onto either axis, has the right marginals, P and P hat, and then I measure the distance by how much work it takes to shuffle one distribution into the other. In computer science this is called the earth mover's distance, because that is literally what it is: you take the norm of xi minus xi hat under the integral and take the expected value with respect to pi, so you really measure how much effort it takes to transport one distribution onto the other. Now, there are many couplings pi with the right marginals, so you take an infimum over all of them. That is the notion of distance, and that distance has to be less than epsilon; this is the Wasserstein ball. So you can write down this inf-sup problem. It turns out to be doubly semi-infinite, so it is one thing to write it down and another to do something with it. But it turns out you can actually reduce it in closed form to a convex, finite-dimensional formulation. It goes as follows: this inf-sup problem, after going through twenty pages of convex conjugates, reduces to a finite-dimensional problem of the following form. It is your original problem, this c of xi hat and x, which is just the sample-average problem taking the data at face value, plus a regularization. The constant in front of the regularizer is epsilon, your desired robustness radius, times a Lipschitz constant of the cost, and what you regularize with is the dual norm, the dual of the p-norm that you used to construct the transport distance. So there is a p-norm in the distance, and you need to regularize with its dual norm. For instance, if you want to measure the distance between trajectories in the infinity norm, that is, how close trajectories are in the maximum, you need to regularize with the one-norm.
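As a small aside on the earth mover's distance just described: in one dimension, and for two empirical distributions with equally many samples, the optimal transport plan simply matches sorted samples, so the distance reduces to the mean absolute difference of the order statistics. The sample sizes and distributions below are illustrative assumptions.

```python
import numpy as np

def wasserstein_1d(xs, ys):
    """1-Wasserstein (earth mover's) distance between two empirical
    distributions with equally many scalar samples."""
    xs, ys = np.sort(np.asarray(xs)), np.sort(np.asarray(ys))
    assert xs.shape == ys.shape
    return np.mean(np.abs(xs - ys))

rng = np.random.default_rng(3)
samples_hat = rng.normal(0.0, 1.0, size=1000)      # "recorded" data, the empirical P-hat
samples_true = rng.normal(0.3, 1.0, size=1000)     # the "real" process Q, slightly shifted

print(wasserstein_1d(samples_hat, samples_true))   # roughly the mean shift, about 0.3
```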
And that explains why L1 regularization gives you robustness in this L infinity norm sense with respect to all stochastic processes, also nonlinear ones that could have generated data, which is, it doesn't say it works for nonlinear systems, but it says you're robust to all epsilon nearby in nonlinear systems in this way, where nearby is measured in this particular metric. Some further results I don't want to go through in detail. You can characterize that epsilon. You can do some non-stochastic statistics. You characterize it in terms of samples you have, averaging data. Everything can be applied to probabilistic constraints, and then answer there's lots of things that follow. I would have one more bullet item, but since you're going to stand up, it's probably a good time for me to finish. How much more time do I have? Shall we finish? One minute. One minute? And let me just put here this one. So how does this method compare to just identifying a system and then doing control? Turns out in all our experimental and numerical case studies, it always beats systemification. Actually, it's always better not to identify one. And I would have a few more slides to tell you a little bit why this is the case. But I'll leave this as a spoiler so you have to read the papers. Let's close here. Thank you very much for this very exciting talk. Questions, please. Yeah. Frank or? No, wait, wait, wait. Could you quickly tell us why this is the case? Yes. Very quickly, when you do systemification, you need to select a model class. And you don't know the model class upfront. So what you have to do, you project the data on a model class, which itself is uncertain. And if you pick the wrong model class, then you'll always be fully off. So model identification works if you are prior to the model class. But you never know. So you don't know what the state dimension is. Linears are not linear. And so that's why the direct approach will win. If you know the model class exactly, then the indirect approach will win. You have shown us several optimization problems. Can you show the existence of minimizers for these problems? Yes. For this one, there actually always exists a minimizer. So the existence is guaranteed. This is a linear problem. Or is it a nonlinear one? No, it's linear, but infinite dimensional, turns out. You optimize our probability measures. And it's linear in the space of probability measures. And for this particular norm, existence is guaranteed. Questions just how do you calculate it? And under some assumptions, like the cost function being Lipschitz, you can derive this reformulation here. And that's not just the convex problem that you can solve. If your cost function was not Lipschitz, you have no idea how to calculate it. OK, thank you. Those are questions you have. Thank you. I was wondering, I mean, the Wasserstein distance is just one of many possible distances between probability measures. Is this somehow by surprise it works with Wasserstein? Or does it work with other, let's say, weak distance? I think that's a good question. Many different notions of distance, right? We tried it out, but let me say for the others, we were not able to find a tractable reformulation of the problem unless you put strong assumptions on the problem such as everything is Gaussian. But if you're really in the model, in the distribution-free case, we have no idea what the distribution is, then this is the only reformulation I know that is tractable. 
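For reference, the closed-form reformulation discussed above can be stated compactly. This is a schematic rendering of the Lipschitz-cost case, with notation assumed rather than copied from the talk's slides.

```latex
% Wasserstein distributionally robust optimization, Lipschitz-cost case (schematic):
% if \xi \mapsto c(x,\xi) is Lipschitz with constant \mathrm{Lip}(c) w.r.t. \|\cdot\|_p, then
\inf_{x \in \mathcal{X}} \;
\sup_{Q :\, W_p(Q,\widehat{P}_N) \le \varepsilon} \mathbb{E}_{Q}\!\big[c(x,\xi)\big]
\;=\;
\inf_{x \in \mathcal{X}} \;
\frac{1}{N}\sum_{i=1}^{N} c\big(x,\widehat{\xi}_i\big)
\;+\; \varepsilon \,\mathrm{Lip}(c).
% If the cost is affine in the data, c(x,\xi) = \langle a(x), \xi\rangle + b(x), then
% \mathrm{Lip}(c) = \|a(x)\|_q with 1/p + 1/q = 1, i.e. the regularizer is a dual norm
% (transport measured in \|\cdot\|_\infty corresponds to an \ell_1 regularizer).
```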
Yes, and I should say the mathematics for all of this Wasserstein optimization has really only been firmly developed in the last five years; these are very recent results coming out of the operations research community. I know people are also looking at other divergences, but these are the strongest results. That was also my question: does this Wasserstein setting also include non-Gaussian distributions, maybe Lévy processes, or whatever? It includes everything which has a finite mean; essentially the integrals need to exist. If you do not have a finite mean, then it is not in the Wasserstein ball. Okay. Next question: I come from outside controls, so I have no idea about control. How easy is this to implement? We have a complex system dealing with induction welding of composite materials, for which we have used PID control, and we then decided to work with colleagues in the electrical engineering department on a model predictive control method, also because of the complexity of the system; as you said, it is pretty much impossible to work out a model for it. How easy is it then? This problem I can solve on my laptop; it is just a convex optimization problem, and I can solve it in milliseconds on my computer. If you want to go to an industrial application, the crux is how fast you can solve such optimization problems, because the nice thing about standard MPC problems is that they are structured, with the recursiveness of the ABCD model, whereas this H matrix is just a full, dense matrix with no sparsity structure. So solving the optimization problem in real time, for instance at a 100-millisecond sampling rate, is normally the challenge. Here you just call an optimization solver, and it would take on the order of 100 milliseconds on my laptop. But if you now want to put it on a small microcontroller, with twelve bits or so, then that is the crux. So it is really a question of computational effort, and it is probably not suited for each and every application: you want at least a powerful computer and some sort of optimization solver license. Okay, I think we should go ahead. Thank you very much again; if there are further questions, there will be a minute for them later.
|
The first 2:38 minutes of the video have no audio.
|
10.5446/57502 (DOI)
|
Thank you for the introduction. My name is Benedikt Gjarnel. I work at the Weierstraße Institute. It's a pleasure to be here and present some joint work within my group, DICOMnet. It's a small group at Vias in Berlin and we are all probabilists and we use stochastic geometry to model and analyze telecommunication networks. So I also realized that this is a diverse audience so I tried to essentially do my talk in three parts. Some motivation, some general ideas, but I don't want to spare you from some theorems at the end of my talk. So you have an idea of what we're actually doing every day. So the motivation, as I said, comes from telecommunication systems. I guess I don't have to emphasize this too much. There is a strong increasing demand for communications on all different levels. This is just some slides that I took from some picture that I took from a big telecommunication systems provider, Cisco, and they make predictions in their white papers and all these, they indicate that we have an explosion of data transmissions, an explosion of subscribers. There's the large field of Internet of Things where small units communicate with each other, machine to machine is a bus word in the field. So we have this just this tremendous increase in mobile communication, but also in communication of small devices like sensors. Exponential growth essentially is what we see right now and in the near future. And I want to address a very particular part of this development and how operators try to cope with these challenges. And the part that I want to speak about today are spatial networks. So not social networks in the Internet. These are spatial networks. And in particular, an aspect of spatial networks that is also already to a small extent implemented in the current 5G mobile standard, but it will be playing a more stronger role. Many people predict that it will be playing a stronger role even in the future. And this aspect is device to device communications. So as these simulated pictures should indicate, we have maybe an environment, some kind of edge system here, some kind of urban topology maybe, and we have devices that sit on in this environment. You're indicated by blue particles, blue points, and they're supposed to communicate in a peer-to-peer fashion directly with each other. And this should bring some key benefits to the systems that are listed here. It provides more robustness to the system because there's multiple paths that data can be transmitted between components of the networks. It can come with much lower costs. We can get rid of heavy infrastructure, billions worth of infrastructure, and our current days mobile phones have a lot of power themselves, for instance. Security issues come decentralized. This is a benefit, but this can also be a challenge. And we can get rid of a lot of latency if you think of peer-to-peer communications. We don't have to go to the backbone of the system. We don't have to go through base stations. The devices can communicate in a peer-to-peer fashion, makes it faster and more robust. Many more aspects that I have listed here, and of course challenges. One of the main challenges is that from an operator's perspective, you cannot control the system. It's an ATOX system. So the network components building this network, and you have to deal with the fact that this is somehow not controlled by some operator. It's actually hard to read from here. Applications. I think I've already addressed some of them. 
It already implemented our main sensor networks, where, for instance, in large solar fields, the different sensors communicate with each other. It makes no sense to build up some centralized base station system. The sensors communicate. They accumulate the data, and they bring it to some exit point. So you see this implemented already in a few situations in the real world, but we hope and we try to support by our research more applications in the new future. For instance, by creating networks in areas of the world where there is no infrastructure available yet. So our a priori belief is that we can only understand these systems theoretically by using probabilistic methods. As I try to convince you, there is an intrinsic randomness in the system, because we cannot control how the individual components behave in the system. So I want to use the lenses of a probabilist to model and analyze these systems. Randomness enters on various levels. There is an intrinsic randomness on the behavior of the components. From an operator's perspective, you have a lot of uncertainty how they behave. We have stochastic algorithms that are partially used, but can still be developed. And when we deal with randomness, then we can essentially look at the system from two sides. What is the expected behavior of the system? That's important from a design perspective. And something that is not yet developed so good, and we want to make contributions there. We do make contributions there is to also understand the system in its behavior, in its unexpected behavior, which is also critical. You want to understand situations where the system behaves really bad and understand why it behaves bad. This is typically also very hard to simulate if you have a model, because maybe these bad situations are very rare. So to make a statistics on the typical behavior within the rare event is difficult. We have some tools to tackle this problem. The undots we use is stochastic geometry. So that's maybe roughly 40, 50 years old field within spatial probability. And in essence, you can make a mapping between notions in stochastic geometry and the notions that come up when you speak with people from engineering in big telecommunication companies. So for instance, the device locations can be seen as point processes. So point processes would be the equivalent notion within the community of stochastic geometry for the locations of the devices or the components of the networks in space. For instance, we can deal with an environment. And in this picture, the environment is a street system. And this can be seen as a tessellation process in space, which is also a quite fundamental notion in stochastic geometry. Mobility is a big topic. How do the components move in space? And there's a large array of models within the probabilistic community that try to model this. For instance, the random waypoint model, which is it's different from a random walk, but it also has some similarities. There is medium axis. Basically, what I try to convince you that there is a, you can, this is really the right framework. We can find the appropriate notions within the stochastic geometry world for a variety of, for the models. What we want to do, and this is our domain, we want to make rigorous analysis. So by solid mathematical theorems and deductions. So this should enable us to make predictions for the future by understanding these models. With a particular interest in the critical behavior. So do we see maybe faith transitions? 
Do we see situations where this model dramatically changes its behavior when you just vary a certain parameter a little bit? So discontinued newities with respect to the parameter space. And many of these instances are also supported by simulation. So I present to you now three different applications and associated methods that we use. And the first application is connectivity in mobile urban networks. And just stick with the picture. Maybe there is, there is a city topology, maybe some street system. And we have mobile devices in blue. And we have some base, additional base station infrastructure. And now we want to augment the base stations or the system by the possibility to have intermediate peer to peer connections. So the green base stations, they come within certain interaction radius where they can directly be connected with devices. But there are lighter green extensions of these areas that by adding one, two, three additional peer to peer hops. And the base stations, they don't move in space, but the devices they move maybe along the streets, maybe away from the streets. And, and a characteristic of the network that we want to analyze, I put here in the display in mass symbols, don't worry too much. Maybe later I will be more precise with the notation. One, one important quantity is the connection time. So how much time does a typical device connect to an infrastructure using at most k hops. So if you build such a network, this is very important to understand the quality of the network, how much time do you actually spend connected to the infrastructure by at most k hops. Second, maybe how often do you have to reconnect to new base stations, all these quantities play a major role in the design of these networks. Let me also say that in many instances, our analysis will be performed in some, some kind of limiting regime. It's, it's rather hard to come up with, with vigorous results. If you fix parameters, so we want to understand essentially the edges and corners of the parameter space by, for instance, letting the number of hops become large, letting time become large. So the actual analysis is typically performed in some kind of limiting regime. Another application is data routing. So here is an, here's an, the image or the picture is as follows. One has a central base station and a number of randomly placed devices around the base station, base stations in the center. And now the idea is that every device has a certain algorithm that tells them, okay, I want to connect to the base station, the center, but I cannot go there directly. So I use I look for instance for the device that is closest to me in the direction of the base station and therefore creating a tree where messages has through via other devices towards the base station. And then for instance, the question can be how is the data, how much data does actually a point close by the, by the base station has to handle. So how much data is actually passed through nodes that are close to the base station. And also here, the system can be analyzed in its expected behavior. But what is even more interesting is to understand what are the typical configurations of the system. So for instance, a large proportion of devices cannot connect to the base station because maybe some, there's too much throughput in a certain region of space. So this is, we also do again, an analysis with respect to the expected behavior, but also the unexpected behavior. This is, this is very important. Third application is malware propagation. 
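Before the third application is described in what follows, here is a toy simulation of the first quantity mentioned above: the fraction of devices that reach some base station in at most k device-to-device hops, with the last hop lying within a base station's radius. The intensities, radii, hop budget, and window size are arbitrary assumptions made for illustration.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(4)
side, lam_dev, n_bs, radius, k = 10.0, 1.0, 5, 1.0, 3

n_dev = rng.poisson(lam_dev * side**2)                  # Poisson number of devices
devices = rng.uniform(0, side, size=(n_dev, 2))         # uniform locations in the window
stations = rng.uniform(0, side, size=(n_bs, 2))

def within(a, b, r):
    return np.sum((a[:, None, :] - b[None, :, :])**2, axis=2) <= r**2

dev_dev = within(devices, devices, radius)              # device-to-device links
dev_bs = within(devices, stations, radius).any(axis=1)  # devices directly covered

# Multi-source breadth-first search from all covered devices, counting hops.
hops = np.full(n_dev, np.inf)
queue = deque(np.flatnonzero(dev_bs))
hops[list(queue)] = 1                                   # one hop: device -> base station
while queue:
    i = queue.popleft()
    for j in np.flatnonzero(dev_dev[i]):
        if hops[j] > hops[i] + 1:
            hops[j] = hops[i] + 1
            queue.append(j)

print(np.mean(hops <= k))    # estimated fraction of devices connected within k hops
```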
And here, the idea is that there is wanted data in the system, there's unwanted data. And this is a decentralized situation. So, so there is maybe a malware at the, at some typical node at the center of the network. And how does it propagate through the network? Obviously, this is, this is very much connected maybe to two models for epidemic spreading. So, so I think we make there quite a strong connection towards spatial epidemic modeling. So here you see a picture where you have susceptible devices in space, they're connected whenever they're sufficiently close to each other. And we have, we have an initial malware at the center, it's spread through the edges step by step in continuous time in this case. And, and red infected devices appear. But here comes a speciality of, of our model, which we designed together with people from, from orange, by the way, I should mention this, we have a major collaboration with, with industry. So just to make sure that this is not purely academic. So there is, there is, there is regular contracts. I think we have seven contracts already, where we analyze and simulate these systems for them. So, so there is, there is some, some grounding there. Anyways, the speciality is, is now, so how can you, how can you counteract this, this, this spreading of a malware over, of an infection through the system. And what they imagine, and already in part have, have, have prototyped is something that they call a white knight, which is, which is also a device in space. It's, it's essentially indistinguishable from, from the regular devices. And it can patch when, when it can patch an infected device whenever it is attacked. The problem here seems to be that you cannot simply patch any device because of legal reasons you have to wait until you're attacked. And only then you can patch. So this, this is, this makes the system actually quite complicated, highly non-monotonic. And what you see in the picture here is a later point in time. Initially, there is an infection at the origin. There are white knights in green in space sparsely. And then the infection grows. And whenever an infection meets a white knight, it can retaliate. And therefore, on the set of in set, it's just infected devices patch and they become green. So you see a kind of a chase escape dynamics here. The infection has a slight advantage because it can go to all the infected, the susceptible, susceptible devices, but the white knights has, they have to chase because they can only use the infected devices. So this is, this is an analysis that we do. And let me also say that, that one of the main inputs here is that we deal with a random network of positions. So many of these things have been done actually quite, quite recently on, on non-random, let's say grids or non-random topologies, geometries. In our case, we always deal with these random, random point configurations in space. So these are three applications. Now, let, let me address briefly methods that we use from stochastic geometry. One key method is called continuum percolation. This is closely linked by the way to statistical physics. And, and this is one of the, one of the key concepts that we explore and use. And it goes like this, for instance, in this picture, you have some kind of environment in green where you see, you see a certain intensity, density of points in blue, and they all come with, with a certain interaction radius that might be device dependent. 
Coming back to the model: we can also describe this by assigning individual i.i.d. radii within which the devices can form connections. The question in continuum percolation is then: if you tune up the intensity of the devices — the expected number of devices you see in a unit volume — is there a point where the system changes dramatically from a local behavior, where you see only local clusters of connected devices, to a global situation where all of a sudden, with positive probability, a typical device can connect to infinity? For us this is a rough estimate of a breaking point of the system, where it jumps from a purely local to a global behavior — a first indication of how many participants you actually have to attract to such a system before you see the possibility of long-range connectivity. The second method, which is quite underdeveloped in this field, is the method of large deviations. Large deviations is a highly developed methodology within stochastics for dealing with rare events. The rare event is, for instance, that a random variable X_n — which has an expected value where it "wants to be", with n a running parameter that I will make precise later — behaves far away from its expected behavior, and we are interested in the probability of that. In many instances, if you have enough independence in the system, this probability decays exponentially with a certain speed and a rate function I, and this rate function can be used to understand the typical behavior within the atypical event. This can, for instance, be employed to build an importance-sampling algorithm. In this picture we were interested in the following: all these many devices want to connect to the base station, and there is interference — the cocktail-party problem: if a person wants to speak to another person but there are many people around, it becomes very hard to form this connection. We are interested in the typical behavior of the system in a situation where you see an atypically large number of disconnected devices, and in this case you even see a symmetry-breaking phase transition. So also from a purely mathematical point of view this is an interesting theory, with interesting applications of large deviations in the realm of space-time point processes. The third method is interacting particle systems on random graphs. This is a rather old theory in which you look at Markov processes — or processes beyond Markov — on fixed configurations: you have a configuration of particles, say on a grid, which come in different local states, you devise exponential interaction rates where the jump rate at a point depends, for instance, on its neighbors, and then you look at the long-time behavior of these Markov processes. For instance, the example I gave you, with the white knights and the spreading infection, can be modeled in the framework of interacting particle systems — but now on random graphs.
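To illustrate the continuum-percolation threshold described above, here is a small Monte Carlo sketch in Python. The rigorous statement is about the emergence of an infinite cluster in the whole plane; a finite box can only hint at it through the size of the largest cluster, and the box size, radius and intensity values below are illustrative choices, not quantities from the talk.

import numpy as np

rng = np.random.default_rng(2)

def largest_cluster_fraction(intensity, side=20.0, radius=1.0):
    """Fraction of Poisson points lying in the largest cluster of the Gilbert graph."""
    n = rng.poisson(intensity * side**2)
    if n < 2:
        return 0.0
    pts = rng.uniform(0, side, (n, 2))

    parent = np.arange(n)                      # union-find over the points
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i
    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    # connect every pair of points closer than the interaction radius
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    for i, j in zip(*np.nonzero(np.triu(dist <= radius, k=1))):
        union(i, j)

    roots = np.array([find(i) for i in range(n)])
    return np.bincount(roots).max() / n

if __name__ == "__main__":
    # Scanning the intensity: around the continuum-percolation threshold the largest
    # cluster jumps from a tiny fraction of the points to a macroscopic one.
    for lam in (0.2, 0.6, 1.0, 1.4, 1.8, 2.2):
        frac = np.mean([largest_cluster_fraction(lam) for _ in range(5)])
        print(f"intensity {lam:.1f}: largest-cluster fraction ~ {frac:.2f}")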
In some cases we can, for instance, derive phase diagrams of global extinction and global survival rigorously in the framework of interacting particle systems. How much time do I have? When did I start? Oh, two minutes — okay, I was a bit slow, but that's okay; two minutes including questions. Just to give you a quick flavor of how we deal with these things: we start with a random point process. Typically, if we have no statistical knowledge, it is a good idea to start with a so-called Poisson point process — think of it essentially as a uniform distribution of points in space. We can create a graph G by connecting any two points that are sufficiently close to each other, and there is the famous phase transition of continuum percolation where an infinite cluster emerges. We can then look at the Voronoi tessellation formed by the cells of the points that are connected to the infinite cluster. On a second level we now introduce a time process, where we associate to every edge a certain passing time. If we initially have an infection at the origin, we can look at the set of infected devices at time t: all the devices for which there is a path whose sum of passing times does not exceed t. A recent result is a so-called shape theorem: we are interested in the limiting shape of the infected region when we rescale it by t. And it is indeed true, if you put some moment conditions and some conditions near zero on the passing times, that we see a limiting shape, which is in fact a ball, where the parameter φ is the speed. Here is a picture: between two time points, at t1 you see a certain region infected, and at some later time point you see additional infections in the red area. There are also some rescaling results, but maybe I'll skip those. Thank you for your attention. — Thank you very much for this really interesting talk. Questions? Let me start: you always mention random networks. From my limited experience I have never seen a truly random network in reality; there are always structures. Do you mean Erdős–Rényi graphs, or also other kinds of random networks? — Well, here the randomness enters, in the simplest case, via the positions of the devices in space. We can have a philosophical discussion about what randomness is, but from an operator's perspective I have no control over where you move in space. I may have some knowledge that you have a typical path to your office, but from a global perspective I have no clue what you are doing. — Okay, then this is a different perspective; I would call that topology rather than randomness, and I think it has some structure in the topology. — Yes. — Okay, good, then it's clear. Thank you. — Further questions? — Hi. If you have such an ad-hoc network, how do you then solve the network routing problem? I guess you cannot solve it in a centralized way — who passes messages to whom along which path? — Oh, we don't solve them; we evaluate, for instance, the expected behavior. — I see. Okay.
So the routing takes a random path depending on the configuration of the network — it is a random configuration. We can say something about the paths in this configuration; we have a probability distribution, and then we can analyze, for instance, the expected behavior and the unexpected behavior. — Okay, thank you. — Maybe a quick question. You talked about motivating people: how can you incentivize participation in such a peer-to-peer network for mobile communication? You have to have your GPS on, you have to use data all the time to relay calls or data. How do you incentivize that, and how do you make sure people don't just build their own system — "Uberization", I believe it's called? — This is actually an interesting story from one of our older projects with this major telecommunication company. They are a bit scared that something like Uber happens to them: in a city, a company appears and says, we don't have any infrastructure, but you all sign up for this contract and other people can send their messages via your cell phone. Such apps actually exist — for instance, FireChat played a role in the protests in Hong Kong, where the government shut down the infrastructure and via this app you could still send messages. So for them it really is a threat that such a company comes up, exists basically only via an app, and provides a good network, so that people sign up with it and leave — and then billions' worth of infrastructure is useless. So they are actually interested in the critical number of people you have to have in such a business so that it works — so that you get long-range connectivity, for instance. The attraction comes from low cost, essentially. — Also a bit in the direction of the two previous questions: if I understand correctly, you try to characterize percolation transitions — and if we speak of good or bad messages, percolation is good if you want connectivity. — Exactly. — And then you can give the companies conditions for whether to avoid it or to reach it? — For instance, if it is a one-parameter model, where the only parameter is the expected number of devices per square kilometer, then I can tell them — rigorously, but also by simulation — that there is a critical point where the system jumps from being only local to being global. — Okay, that's interesting. — Yes, last question. I'm wondering about bandwidth problems — I have mobile phone users in the back of my mind. If I request a file from a central server, I expect the server to have a large bandwidth; but if the route goes through all these small devices, maybe somebody just doesn't have a good connection, and then it stops. — Yeah, that's true. — Okay, then it's not really a question. — Was it a statement, or a question I could answer? I'm not at the point where I can address that — maybe we can discuss it over lunch. — Yes, let's discuss it. Thank you. — Okay, then thank you very much. We are really just at the beginning of a discussion, but this is the best moment to start the break. Thank you very much. Okay.
|
Spatial device-to-device communications are expected to play a key role in future communication systems. Their often high complexity can, at least in part, be modelled with the help of a probabilistic approach, where the network components are considered to be a stochastic point process. In this talk, I will introduce some of the most important ingredients in the theory of stochastic geometry and present examples of how we use them to study, for example, malware propagation in device-to-device networks or bottleneck behavior for the connectivity in such networks.
|
10.5446/57504 (DOI)
|
Thank you very much for the introduction. I apologize for the long title — I didn't know how else to combine these two areas. I guess most of you asked yourselves: why proteins now? You probably don't have any academic connection to proteins, but I'm sure you already consumed some this morning at breakfast. With this I would like to introduce the industrial relevance using the example of milk homogenization. Milk is a multi-phase system which basically contains water, oil and proteins. In milk homogenization you apply stress — you force the milk through a membrane, for example — and thereby create a fine, monodisperse emulsion. On the right-hand side there is a schematic plot of stress versus residence time: the proteins and droplets can experience a long residence time in the system at a relatively low shear stress, a medium exposure to stress and time, or short, high stress peaks. So the residence time, the stress history and the level of stress are very important factors for this kind of process, and the question is then: what is the influence on droplet breakup and on protein degradation? The motivation comes from premix membrane emulsification: you start with a rough premix — already water, oil and proteins — and force it with a pressure gradient through a membrane to create a fine and monodisperse emulsion. This works through the shear and strain stresses inside the membrane, and the hypothesis is that protein adsorption affects the wettability of the membranes, which in turn affects the breakup mechanism. I would like to look at two different scales today. On the macro and micro scale there is the fluid dynamics: the droplet dispersion inside the membrane and the influence of wettability on the dispersion mechanism. On the atomic scale, with molecular dynamics simulations, there is the adsorption of the protein at the fluid-fluid interface — it adsorbs there to reduce the interfacial tension and therefore stabilize emulsions — and also the adsorption at fluid-solid interfaces, since in experiments we have seen membrane clogging, i.e. protein adsorption to the membrane itself. To start, I would like to talk about the influence on the drop dispersion mechanism. We identified the wall region as a region of high shear stress, so we would like to have a closer look at the wall interactions — ah, I accidentally skipped to the end. Right. Drop dispersion in porous media is a very complex phenomenon: we have multiple breakup events and a complex, very irregular structure, and this leads to the fact that the breakup mechanism inside these membranes is pretty much unknown. What can we do to address this? We can reduce complexity, of course. We look at capillary breakup — a single droplet in a single pore — so that we can quantify the factors influencing the individual breakup mechanism. For the numerical setup we use oil and water with the corresponding fluid properties; in our simulations we vary the contact angle. As for the dimensions, we have a 200 micrometer pore, which is 6 millimeters long, and we push a droplet of 500 micrometers into this pore, varying the capillary number from 1e-3 to 1, with the corresponding Weber and Reynolds numbers.
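For orientation, the dimensionless groups mentioned here are straightforward to compute. The fluid properties below are generic water/oil values chosen purely for illustration, not the exact data of the study; only the pore diameter and the capillary-number range are taken from the talk.

# Rough orders of magnitude for the dimensionless numbers of the pore flow:
# capillary number Ca = mu*U/sigma, Reynolds number Re = rho*U*D/mu and
# Weber number We = rho*U^2*D/sigma = Ca*Re.
rho   = 1000.0      # continuous-phase density [kg/m^3], water-like (assumed)
mu    = 1.0e-3      # continuous-phase dynamic viscosity [Pa s] (assumed)
sigma = 20.0e-3     # oil/water interfacial tension [N/m], order of magnitude (assumed)
D     = 200e-6      # pore diameter [m], as quoted in the talk

for Ca in (1e-3, 1e-2, 1e-1, 1.0):          # the capillary-number range of the study
    U  = Ca * sigma / mu                     # characteristic velocity implied by Ca
    Re = rho * U * D / mu
    We = Ca * Re
    print(f"Ca = {Ca:7.0e}:  U = {U:8.4f} m/s,  Re = {Re:9.2f},  We = {We:9.4f}")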
For the mesh we use a hexahedral mesh with a base cell size of 12 micrometers, refined down to 3 micrometers at the interface with adaptive mesh refinement. We keep the Courant-Friedrichs-Lewy number as low as 0.5 to properly resolve the advection of the fluid phase within our domain, and we also keep the time step as low as necessary to resolve capillary waves. Further, we use OpenFOAM with a solver called interFlow — basically a volume-of-fluid approach with phase-fraction-based interface capturing. That means you have a phase fraction that denotes the fluids: one, for example, would be fluid A, zero would be fluid B, and in between, at 0.5, is where the interface is located. We solve the Navier-Stokes equations, i.e. the continuity and momentum equations. For the surface tension we applied the continuum surface force (CSF) model of Brackbill and Kothe, but recently we also implemented a sharp surface tension force, because these systems are so small that they are really affected by spurious velocities, which arise from discretization errors and from the implementation of the surface tension force. To account for the wall interaction we use a dynamic contact angle model that includes the function of Kistler, which basically tells us that if we increase the velocity, the contact angle will also increase. We use a full three-dimensional hexahedral mesh with interface refinement around the 0.5 phase-fraction contour. Here I would like to show you some examples from four different simulations, from hydrophilic to hydrophobic membranes. We can divide the picture into five different regions: a leading front of the droplet, a leading contact line, a wetting region, a trailing contact line and the overall trailing side. We now concentrate only on the leading front. In the top-right figure we can see that the wavelength of the leading-front instability depends on the wettability. To explain this, we conducted a linear stability analysis following Lenz and Kumar — basically a model for confined liquid film layers which calculates the growth rate and the dominant wave number as a function of the layer height — because we observed that for the hydrophobic membranes the wall layer of continuous fluid was actually thicker than for the hydrophilic ones. The results, however, only show that the growth rate increases, not that the wave number increases. So the question remains: how does the wettability affect the unstable modes of the leading-front instability? One explanation: we saw that the first moment in which the droplet enters the pore and touches the pore wall is a crucial part. You might expect that when a droplet enters the membrane it attaches to the wall right at the entrance, but actually, at higher capillary numbers, it enters the pore, necks at the entrance, and the interface only advects to the wall later, deeper inside the pore. One can see that this necking also depends on the contact angle: the more hydrophobic the membrane — the better the oil wets it — the stronger the wetting. And we could see that at this point the interfacial advection to the wall showed a radial velocity that also increased with increasing wall wetting of the oil droplet. The droplet then travels further into the pore, and the continuous phase eventually gets trapped between contact lines. What happens then can be explained by an effect called a capillary venturi.
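The velocity dependence of the contact angle mentioned above can be sketched with the commonly quoted form of Kistler's correlation, which is built on the Hoffman function. The Python snippet below (including the bisection inverse) is a generic illustration, not the OpenFOAM implementation used in the simulations, and it only treats the advancing case with a non-negative capillary number.

import math

def hoffman(x):
    """Hoffman function: maps a (shifted) capillary number to a contact angle in radians."""
    return math.acos(1.0 - 2.0 * math.tanh(5.16 * (x / (1.0 + 1.31 * x**0.99))**0.706))

def hoffman_inverse(theta, lo=0.0, hi=1e3, tol=1e-12):
    """Invert the monotone Hoffman function by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if hoffman(mid) < theta:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def kistler_dynamic_angle(theta_eq_deg, Ca):
    """Advancing dynamic contact angle: theta_d = f_H(Ca + f_H^{-1}(theta_e))."""
    x_eq = hoffman_inverse(math.radians(theta_eq_deg))
    return math.degrees(hoffman(Ca + x_eq))

if __name__ == "__main__":
    for theta_eq in (60.0, 90.0, 120.0):           # hydrophilic ... hydrophobic walls
        for Ca in (1e-3, 1e-2, 1e-1, 1.0):         # contact-line capillary numbers
            print(f"theta_eq = {theta_eq:5.1f} deg, Ca = {Ca:6.0e} "
                  f"-> theta_dyn = {kistler_dynamic_angle(theta_eq, Ca):6.1f} deg")

Running it shows the behavior described later in the discussion: as the contact-line capillary number approaches one, the dynamic angle tends towards 180 degrees regardless of the equilibrium angle.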
The capillary venturi is basically a venturi effect inside a capillary, where capillary forces and hydrodynamic forces are in competition with each other. Because of this competition, the entrapped volume is drawn inside the capillary towards the pore center, and — as you can see on the right-hand side — due to this entrapped phase and a contact-line instability, a fingering instability forms in the lower-right figure. In simpler words: if we regard the enclosed, entrapped continuous phase as a droplet, we have a contact-line velocity directly at the pore wall and a mean pore velocity inside the pore, and this gradient leads to the water droplet travelling into the pore and forming water droplets in oil. Here I would just like to give you some regimes of such fingering instabilities. On the left-hand side is probably the most famous one, the displacement of a more viscous fluid by a less viscous fluid. In the middle is a fingering instability that exists because of a receding unstable interface, a contact-line instability. And on the right-hand side a droplet is dropped onto a surface with a high impact velocity, and fingers emerge from the rim surrounding the droplet. So much for the fluid dynamics; now I would like to continue to the atomic scale with molecular dynamics. Just to keep in mind: we look at the adsorption of the protein at the fluid-fluid interface, where it reduces the interfacial tension and thereby stabilizes the droplets, and on the other hand at adsorption to fluid-solid interfaces — the membrane clogging we observed in experiments. I would like to look at the immobilization of the lipase and relate it to the experimental literature. Lipase adsorbed on silica has shown an increased activity, and it has also shown specificity and enantioselectivity. Certain lid residues of the protein are held responsible for this modulation of activity and specificity. What do I mean by lid residues? The lipase is not only a protein, it is also an enzyme, so it has functional groups — an active site — and it has a lid which opens and closes access to those functional groups. Our numerical domain looks like this: we have a bulk of water, we have our protein, we have an interface — which is silica or oil, depending on the case — and water again, because we are dealing with periodic boundary conditions. We have to equilibrate the system, and there is some preliminary work we have to do, namely a rotational analysis. The lipase is quite a chunky protein and we simply cannot wait for it to be attracted by the surface on its own, so we have to do some preliminary work to deal with this issue. What we do is place the protein in proximity to the interface, run a static MD step — we just calculate one molecular dynamics step — and save the interaction forces between the protein and the interface; then we rotate the protein, keeping the distance to the interface constant, run the MD step again and save the interaction forces. What we end up with is basically a sampling of the surface and of the interaction potential of the protein relative to the interface, and from the most attractive position we can then run the dynamic simulations. Just a quick estimate: for 500 nanoseconds we need about 84 hours on 1000 CPU cores, so we cannot really reach the time scales we can reach in fluid dynamics — I will also address this shortly.
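The rotational pre-sampling just described can be illustrated with a toy script. Everything in it is a placeholder: the random point cloud stands in for the real protein structure, the 9-3 wall potential stands in for the actual force-field interaction energy, and the parameter values are invented; only the workflow (rotate rigidly at fixed height, score, keep the most attractive orientation) mirrors the idea from the talk.

import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(3)

# Placeholder "protein": a rigid random cloud of atom positions centered at the origin.
atoms = rng.normal(scale=1.5, size=(300, 3))
atoms -= atoms.mean(axis=0)

def wall_energy(coords, z_wall=0.0, eps=0.5, sigma=0.35):
    """Toy 9-3 Lennard-Jones wall, standing in for the protein/surface interaction."""
    z = np.clip(coords[:, 2] - z_wall, 0.05, None)   # avoid the singularity at contact
    return float(np.sum(eps * ((sigma / z)**9 - (sigma / z)**3)))

def best_orientation(com_height=2.5, n_orientations=500):
    """Scan random rigid-body orientations at a fixed center-of-mass height above the wall."""
    rots = Rotation.random(n_orientations, random_state=7)
    best_E, best_rot = np.inf, None
    for idx in range(len(rots)):
        placed = rots[idx].apply(atoms)
        placed[:, 2] += com_height                   # center of mass held at fixed height
        E = wall_energy(placed)
        if E < best_E:
            best_E, best_rot = E, rots[idx]
    return best_E, best_rot

if __name__ == "__main__":
    E, rot = best_orientation()
    print(f"most attractive orientation: E = {E:.2f} (arbitrary units)")
    print("rotation as Euler angles (deg):", rot.as_euler("zyz", degrees=True))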
I already talked about increased activation due to adsorption; here I want to clarify this further. On the left-hand side we see the passive, or inactive, conformation of the lipase; I highlighted the lid residues in purple. If you compare left and right, you can see that these lid residues are in a different position: on the right-hand side they expose the active center, which is basically the place where the catalytic reaction happens inside the enzyme. In the active conformation this part is exposed, which allows ligands to dock inside the enzyme so that it can catalyze the reaction. We put our protein on two different surfaces — at the oil interface and on silica — and what you can see from the snapshots is that we get different adsorption configurations at liquid-liquid and liquid-solid interfaces. At the liquid-liquid interface it adsorbs with the lid residues facing into the oil phase, whereas on silica it exposes the lid residues to the water phase. (I removed the water in the picture, because otherwise you wouldn't see anything — just in case you are wondering.) So on silica we have accessibility of the active center and of the lid residues for ligands. We now look further into the silica adsorption; we want to know what exactly happens. Plotting the distance for each individual lid residue, we can see in this graph that at around 250 nanoseconds the lid starts to move and to deviate from its equilibrium position, which can also be confirmed by simply looking at snapshots: highlighting the lid residues, we see the lid move further towards the active center. The question is what triggers this movement — remember that the lid residues are physically distant from the interface, so something close to the interface has to trigger it. For this we did a binding analysis: we took every residue of the protein, measured the distance between the interface and the residue, and whenever it was within a distance at which it could possibly form hydrogen bonds, we marked it as binding. What you end up with is this kind of graph: you can see constant binders, which are basically permanently attached to the surface, while frequent binders look more like a barcode structure. Interestingly, we can see that residue 522 — previously a constant binder — lifts off the surface at around 250 nanoseconds, and the result is the lid movement. We ran reference simulations for this: simulating the protein just in bulk water, we could not observe the lid movement. Furthermore, we did the same for the passive starting structure, and plotting the distances of the individual lid residues we could not see any lid movement whatsoever. So the question remains: is the protein now active or inactive? When do we speak of interfacial activation, and how can we determine protein activity numerically? First, accessibility of the active center has to be guaranteed for a ligand actually to dock inside the protein, and second, the ligand has to bind inside — the catalytic triad has to remain in a functional arrangement in which it could actually cut our ligand into pieces.
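The binding analysis behind the barcode plot boils down to a per-frame, per-residue distance criterion. A minimal sketch follows, with a synthetic array of residue-to-surface distances standing in for real trajectory output and a flat surface as the interface; the hydrogen-bond cutoff and the occupancy thresholds for "constant" versus "frequent" binders are illustrative assumptions, not the values used in the study.

import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for MD output: minimum distance (nm) of each residue to the
# surface, per frame.  Real data would come from a trajectory reader.
n_frames, n_residues = 2000, 50
z_min = rng.gamma(shape=2.0, scale=0.3, size=(n_frames, n_residues))

HBOND_CUTOFF = 0.35   # nm: roughly hydrogen-bond distance; counts as "bound"

bound = z_min < HBOND_CUTOFF                 # the barcode: frames x residues boolean matrix
occupancy = bound.mean(axis=0)               # fraction of frames each residue is bound

constant_binders = np.flatnonzero(occupancy > 0.90)          # illustrative thresholds
frequent_binders = np.flatnonzero((occupancy > 0.30) & (occupancy <= 0.90))
print("constant binders (residue indices):", constant_binders)
print("frequent binders (residue indices):", frequent_binders)

# Crude heuristic for a residue that "lifts off": bound in the first half of the
# trajectory, unbound in the second half.
half = n_frames // 2
for r in range(n_residues):
    col = bound[:, r]
    if col[:half].mean() > 0.9 and col[half:].mean() < 0.1:
        print(f"residue {r} detaches around mid-trajectory")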
To address the first issue — accessibility of the active center — we did tunnel calculations with a tool called CAVER: you start from the surface of the protein and find tunnels that connect the outer surface to the catalytic triad. On the left, for the starting configuration, we see open access to the active center — in the, let's say, natural case the ligand would just dock directly onto the protein without even entering fully. On the right-hand side we see that a complex tunnel network has developed. It is introduced by the bulk water — we confirmed this in bulk simulations — but we can say that the tunnels are stabilized by the energy minimization due to adsorption at the interface. Still: active or inactive? Here I would like to show the catalytic triad. The only thing you have to know is that it works as a team: it has to be a functional arrangement of an acid, a base and a nucleophile, and the nucleophile — in our case Ser209 — attacks our ligand in a nucleophilic attack in order to cut it. We simulated a medium-chain triglyceride to address the questions of whether the catalytic triad is still in a functional arrangement and how the ligand actually docks inside; this in turn gives us an indication of the activity of the protein. We did this for a C8 triglyceride. On the left-hand side you can see that it docks into the starting configuration of the active protein, and in the middle you can see that Ser209 — our nucleophile in the catalytic triad — stabilizes this part of the ligand through hydrophobic interactions, whereas for the adsorbed structure we could not observe these interactions any more, which already gives us an indication of the selectivity of this protein. We varied the docked ligand — C8, C6 and C10 chains — and for the starting configuration we could see that the ligand docks inside the protein but does not actually interact with the catalytic triad: it is merely stabilized in this arrangement, not in a position where the reaction could be catalyzed. For the adsorbed structure, on the other hand, we could see that the C10 chain is stabilized by our nucleophile. So in sum we have an indication of the selectivity — and of the enantioselectivity, I forgot: we also docked the enantiomer of this ligand inside the protein, and again we could not observe interactions between the adsorbed structure and the ligand. Still, this is work in progress; these are just a few examples that we have docked so far. We are running many more in order to really tell what the adsorbed structure is selective towards — for example, to tell the experimentalists which chains they have to make longer or shorter in order for the adsorbed structure to accept them or not. With this I would like to come to the conclusions and the outlook. The key findings from the computational fluid dynamics were that wettability affects the most unstable mode of the leading-front instability and that we get a fingering instability as a result of the entrapped continuous phase. From the molecular dynamics we saw, first of all, different adsorption configurations of the lipase at liquid-liquid and liquid-solid interfaces.
We have also seen that adsorption of the lipase to SiO2 interfaces leads to accessibility of the active center through a tunnel network — one could also imagine an improved removal of the catalytic product through that tunnel network — and our ligand-binding simulations indicate different conformations of the catalytic triad, or of the binding pockets. The key questions are: how can we combine the CFD and MD results? As I said, I am pretty limited in the time scales of the molecular dynamics, so how can we bring the two together? We can quantify the shear stress in the computational fluid dynamics and then apply this shear stress in the molecular dynamics: now that we have found stable configurations in which our proteins adsorb, and can determine whether they are active or inactive, we can actually try to destroy them, in order to draw the whole picture of what it means to force them with a pressure gradient through a membrane — maybe we destroy the adsorbed protein structures, maybe we don't. And the last question is how the protein adsorbs at liquid-liquid, liquid-solid and liquid-liquid-solid interfaces: as we have seen, it could adsorb to a solid-liquid interface and still have the part exposed with which it could adsorb to a fluid-fluid interface — so how does this influence properties like wettability in our use case? With this I would like to acknowledge a few people: for the molecular dynamics part we work with the Hybrid Materials and Interfaces group of Professor Colombi Ciacchi, and I thank HLRN for computational resources and all of you for your attention. I am open for questions. — Did you employ special approaches for your protein folding, or standard ones? And the second question: what about the force-field validation for the interaction of the protein with the surfaces — can you say how this was done in your force-field model? — You mean validation of the interaction? This is experimentally validated by the Hybrid Materials and Interfaces group; if you are interested, there is a publication about the experimental work and the interaction forces between proteins and interfaces. — In a similar direction: what kind of potentials do you use for the MD? You have different materials, proteins and inorganic materials in your simulation. — For the inorganic materials, as I said, the force fields were parameterized by the Hybrid Materials and Interfaces group based on their experiments, and for the oil we can use slightly adapted standard interaction potentials. — For the MD part, did you only look at structural properties, or did you consider something like charge densities as well? — Yes, we did, but I am just starting to look into this; first we were focusing on the structures. — Okay, and for the interaction part, did you consider something like QM/MM approaches as well? — Okay, thanks. — The contact angle in your model: is it a given parameter, or is it an unknown of the problem? — It is a parameter given by the material we use, but it is dynamic in terms of a velocity dependence: as the velocity increases — and this is basically what the Kistler model says — at a capillary number of one the contact angle tends towards 180 degrees. So the model takes the velocity at the contact line, calculates the capillary number there, and then changes the direction of the surface tension force applied there so as to impose the contact angle.
What is your boundary condition next to the contact point? — A no-slip boundary condition. Right now, yes, for these results it was no-slip, but we are currently implementing a slip condition in order to account for the non-integrable stress singularity arising at the contact line. — Okay, thank you again.
|
Emulsions are widely used in a variety of different industries and applications, with increasing importance. Two main objectives need to be addressed in emulsion formulation: the emulsion needs to be formed with a narrow and predefined size distribution, and it needs to be stabilized to prevent coalescence and therefore facilitate handling of the emulsion. So far, the breakup mechanism of droplets in membranes is still unknown. Furthermore, the role of proteins in the breakup mechanism, due to their interfacial adsorption and the induced change in wettability, needs to be addressed. In this work, Molecular Dynamics simulations were performed to give an insight into protein adsorption at oil/water and water/SiO2 interfaces, in order to understand which role proteins play in this context. Furthermore, Computational Fluid Dynamics simulations were conducted to clarify the breakup mechanism in capillary confinements under varying fluid properties as well as membrane wettability. The results show that the proteins adsorb not only to fluid/fluid but also to fluid/solid interfaces and therefore change the properties of the membrane. From the fluid-dynamic perspective, membrane wettability plays a major role in the droplet dispersion as well as in the emergence of fluid-dynamic instabilities that eventually lead to breakup.
|
10.5446/57505 (DOI)
|
Okay, hello. Welcome back to part two of today's CFD and GFD session. The first speaker in this second part is Derk Frerichs. He works at the Weierstraß Institute as well — one of my colleagues — and he will talk about DG methods and a special method to reduce spurious oscillations. — All right. Hello and welcome everybody. I'm Derk, as Ulrich said, and I want to present today some results of my PhD so far, which I conduct under the supervision of Professor Volker John — so I present not only my own work but also joint work with Volker. Before we start, let me briefly mention that even though I am in the third year of my PhD, this is my first on-site participation in a conference, and I am quite happy that this is possible; I especially want to thank all the organizers for making it possible. As Ulrich said, and as the title suggests, I am going to talk about a method for reducing spurious oscillations in discontinuous Galerkin methods for convection-diffusion equations. Without further ado, let's jump into the talk by explaining the terms in the title. Let's start with the steady-state convection-diffusion-reaction equation. This is a basic PDE with which one can model the spread of a scalar quantity inside a flowing medium: this can be, for instance, the charge carrier density in semiconductor devices, the temperature of moving air, or a chemical inside a fluid. As I said, it is a basic model that can be applied in many different fields — since most of you come from a more applied, physical or engineering point of view, you may know even better where it applies in your case. It models three different mechanisms; let me explain them using the picture on the right. The first mechanism is diffusion: the dye in the water spreads due to concentration differences — we have regions with high concentration and regions with low concentration, and because of these differences the dye spreads in the water. The second mechanism is convection: if I stir the water, the water itself is moving, and due to the movement of the water the dye gets transported through the domain; this movement of the underlying medium is called convection. Last but not least — this is more of a technical issue — we can also have a reaction term: if somewhere in the domain dye is created or destroyed, this can be added as well and also fits into the framework, although it is not of particular importance for this talk. So this is the steady-state convection-diffusion-reaction equation, and in most practical applications the convection is much, much stronger — orders of magnitude stronger — than the diffusion. This is what is needed in almost all practical applications. And now, to show you what happens with our numerical algorithms in this case, here is a standard benchmark problem that I also want to use later in this talk.
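For reference, the steady-state convection-diffusion-reaction problem discussed here is usually written as follows; this is the standard form with homogeneous notation and Dirichlet boundary data, and the symbols on the speaker's slide may differ slightly:

-\varepsilon \, \Delta u \;+\; \mathbf{b} \cdot \nabla u \;+\; c\, u \;=\; f \quad \text{in } \Omega, \qquad u \;=\; u_b \quad \text{on } \partial\Omega,

with diffusion coefficient \varepsilon > 0, convection field \mathbf{b}, reaction coefficient c \ge 0 and right-hand side f. The convection-dominated regime means \varepsilon \ll |\mathbf{b}|; in the benchmark below, \varepsilon goes down to 10^{-6} while |\mathbf{b}| = 1.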
This is the only problem that we consider, so forget about all the notation at the top — it is just there for the sake of completeness. Let me explain it using this picture. It is a 2D model problem: the domain of interest is this rectangular box, without the circle — the circle is not part of the domain, so there is basically a hole. To stick with dye and water: the water comes from the left and flows constantly to the right, so it enters the domain on the left-hand side and leaves it on the right-hand side. When it enters the domain it has a dye concentration of zero — there is no dye in the water at all when it enters. At the circle there is pure dye, so there the dye enters the domain and then gets transported by the convection to the right. Therefore most of the dye will be behind the circle, in the direction of the convection, but due to the diffusion some of the dye also spreads towards the upper and lower boundaries. This is how a solution looks for a rather moderate diffusion coefficient: the dye is added at the circle and transported in the direction of the convection, most of it ends up behind the circle, and a little bit spreads to the top and bottom edges due to diffusion. Here the diffusion is only one order of magnitude smaller than the convection. If I now decrease the diffusion, you see that the solution changes: first of all, the layer gets a little steeper, because with less diffusion there is less spread to the top and bottom. If I continue this down to 10^-6 — the diffusion now being six orders of magnitude smaller than the convection — we see the following: we still get a steep slope, which is in principle fine, but unfortunately we get overshoots, unphysical values. We get values of 1.2, which means a concentration of 120%, and that does not make sense at all, because we cannot have more than 100% concentration. You can also see negative concentrations of minus 38%, which likewise make no sense from a physical point of view. These over- and undershoots are often called spurious oscillations, and my talk is basically about a method to reduce them — that is what I am trying to explain to you, a method to reduce these over- and undershoots, these unphysical values. As stated in the title, I am using a discontinuous Galerkin method. In principle one could also use the standard continuous Galerkin method — I hope most of you know the standard P1 continuous Galerkin method with continuous basis functions — but then there are even more spurious oscillations. There are also other methods that may be non-oscillatory, but they are either computationally very expensive or only of lowest order. We are interested in a computationally cheap method that also works for higher order, and here discontinuous Galerkin comes into play.
Its advantage is its extreme flexibility, which comes from the design of the method. I don't want to go too much into detail here, but one important property that we need later in the talk can be explained with this picture. You see the basis functions, and you can see that the basis functions in this triangle and the basis functions in the neighboring triangle are independent of each other; you also see the discontinuity — the basis functions are discontinuous, as the name says. This can also be seen in the previous picture: the solution itself is discontinuous, it can have jumps along the edges of the mesh. This is in contrast to the continuous Galerkin method, where the basis functions, and hence the solution, are continuous across the edges of the triangles. The most important property for the method on the next slide is, let me emphasize it again, that the basis functions are discontinuous: the basis functions in one cell are completely independent of those in the neighboring cell. This can now be exploited. Our idea for reducing the oscillations is to use a post-processing technique which — let me also emphasize this — is computationally very cheap: a simple method applied after solving with the standard discontinuous Galerkin method. If you want to take away only one thing from this talk, this is the basic principle of what we are trying to do. It works as follows. Step zero is solving with the standard discontinuous Galerkin method, possibly obtaining spurious oscillations. Then the post-processing: in the first step we mark all the cells where the spurious oscillations occur — we identify those regions. After the cells are marked, we change the solution locally in these regions to a polynomial of lower degree, hopefully reducing — in the best case removing — the spurious oscillations. And this is where we need the discontinuity property: we can change the solution in one region without affecting the solution in the other regions, which would not be possible with a continuous Galerkin method; the approach is really designed for discontinuous Galerkin methods. Volker and I published two fairly recent papers in which we investigated different post-processing techniques — different slope limiters. I think it is not so important to go into detail with all the methods, but if you are interested, you can have a look at these papers; we investigated several methods, and you can look up the details there. So that you at least get an idea of how these methods look, I will roughly present two of them — and, as I said, if you are interested in the others, please look them up or ask me later.
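Schematically, the whole post-processing is just a loop over cells with a marking criterion and a local correction. The sketch below is a generic skeleton in Python, not the authors' implementation, which lives inside a finite-element code and operates on DG coefficient vectors.

def postprocess(dg_solution, cells, marker, corrector):
    """Generic slope-limiting post-processing: mark suspicious cells, then fix them locally.

    dg_solution : object giving access to the local polynomial on each cell
    marker      : function(cell, dg_solution) -> bool   (detects spurious oscillations)
    corrector   : function(cell, dg_solution) -> None   (replaces the local polynomial,
                                                         e.g. by a lower-degree one)
    Because DG basis functions are local to a cell, the correction never touches
    neighboring cells.
    """
    marked = [cell for cell in cells if marker(cell, dg_solution)]
    for cell in marked:
        corrector(cell, dg_solution)
    return dg_solution

In the two limiters described next, the marker would be the edge-midpoint test or the jump-norm test, and the corrector the affine reconstruction or the replacement by the cell average.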
To counteract publication bias, I want to show you one method that does not work so well and one that works quite well. Let's start with a method from the literature — a rather old method from the late 1990s by Cockburn and Shu, which we call linear reconstruction on triangles (LinTriaReco). Let me explain it with this picture. Assume this is your grid, consisting of these four triangles. In this triangle and in this one the solution is constant zero, in the triangle at the back the solution is constant one, and in the middle triangle we have this parabolic P2 function — I hope you get the idea of the picture; it is a little difficult to draw these things. LinTriaReco works as follows: suppose we are on this cell and want to check whether to mark it. For this we simply evaluate the discrete solution at the edge midpoint — a single point — and check whether this value is too large, whatever "too large" means; basically it is compared to integral values of the function on the cell and its neighbors, but those are technical details. So we evaluate the solution at the edge midpoint from within the cell and check whether the value is too large, because this hopefully indicates some overshoots. After the cell is marked, we change the solution to a linear — or rather affine — function; this is why the method is called linear reconstruction on triangles. The method actually computes three different candidates and then chooses one; what you see here is one option for how the solution is reconstructed, and as you can see it is an affine, P1 function. So that is roughly the idea of the first method. The next method we call the constant jump norm, and this is one of our own methods. The situation is similar: again we have the constant zero solution on these two triangles, the constant one solution on the triangle at the back, and in the middle triangle the solution is again a quadratic function. We now want to check whether we need to mark this cell — and as you can see, we certainly want to, because we have some overshoots here. As I said, the solution jumps between the triangles: whenever I am on an edge, I have two values of the solution, one from the left triangle and one from the right triangle. What we can of course compute is exactly the jump — the difference of the solution in this triangle and in the neighboring one — and then we basically integrate this jump along the edge.
So we compute the integral — basically the mean value of the squared jump — and compare it to a reference value. If this mean jump is too large, we may again have detected spurious oscillations. After such a cell is marked, its solution is simply replaced by a constant approximation, namely the integral mean, so as to preserve the mass. So the cell is replaced by a constant approximation, in contrast to the previous method, which used an affine approximation. These are now two methods, so let's see how they perform in our experiments. On the next slide I will show results again for the Hemker example, but first we want to evaluate what a good solution is — we want to quantify the spurious oscillations, the over- and undershoots: how far off are we, and how many unphysical values do we have? For this we have two measures, called osc_max and osc_mean. You do not have to understand the formulas in detail; let me just explain the idea. osc_max is called that because it tries to measure the maximal appearing oscillation: say this is the largest value that the discrete solution attains — we take this value and compare it to the maximum value of the exact solution. That is one contribution to osc_max; we do the same with the minimum, so osc_max measures the largest appearing over- and undershoots. But if you compare the left and the right solution, they have the same osc_max value, because it only takes a single point into account — yet the left solution clearly looks worse, while the right one has fewer oscillations. For this we have the second quantity, osc_mean, which measures the mean oscillation: it computes the mean of all oscillations occurring in the domain. So we have these two measures to capture the maximal and the mean oscillations. Now, as I said, we tried out the methods, rated them using our measures, and compared them to the standard DG method without any post-processing, which I call Galerkin. Just to remind you, we tried this on the Hemker example: again the rectangular domain without the circle, the water flowing from the left constantly to the right. This is how a reference solution would look — without over- and undershoots; it is only a sketch of the reference solution, because the exact solution of this example is not known, so it is just an approximation. Let's come to the results. This is a snapshot from the paper; as I said, we investigated several methods, but since I only talked about the first two, I will present only their results, compared to the standard Galerkin method. If you are interested in the others, please ask me later or look them up in the paper.
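To make the constant-jump-norm idea concrete, here is a toy one-dimensional analogue in Python. It assumes piecewise-linear DG data on a uniform 1D mesh with jumps only at the two cell interfaces; the actual method integrates squared jumps along triangle edges and builds its threshold from reference values of the solution, so everything below (data, threshold) is an illustrative simplification.

import numpy as np

def constant_jump_limiter(left_vals, right_vals, cell_means, threshold):
    """Toy 1D analogue: mark cell i if the mean squared jump of the DG solution at its
    interfaces exceeds `threshold`, then replace the local polynomial by the cell
    average, which keeps the cell-wise mass."""
    n = len(cell_means)
    marked = np.zeros(n, dtype=bool)
    for i in range(n):
        jumps = []
        if i > 0:
            jumps.append(left_vals[i] - right_vals[i - 1])   # jump at the left interface
        if i < n - 1:
            jumps.append(left_vals[i + 1] - right_vals[i])   # jump at the right interface
        if np.mean(np.square(jumps)) > threshold:
            marked[i] = True
    new_left, new_right = left_vals.copy(), right_vals.copy()
    new_left[marked] = cell_means[marked]     # constant reconstruction on marked cells
    new_right[marked] = cell_means[marked]
    return marked, new_left, new_right

if __name__ == "__main__":
    # A crude caricature of a layer with over/undershoots: cell means jump from 0 to 1,
    # and the interface traces overshoot in the two cells next to the layer.
    cell_means = np.array([0.0, 0.0, 0.02, 0.98, 1.0, 1.0])
    left_vals  = np.array([0.0, 0.0, -0.30, 0.80, 1.0, 1.0])   # value at each cell's left end
    right_vals = np.array([0.0, 0.0,  0.25, 1.30, 1.0, 1.0])   # value at each cell's right end
    marked, new_l, new_r = constant_jump_limiter(left_vals, right_vals, cell_means, threshold=0.05)
    print("cells replaced by their average:", np.flatnonzero(marked))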
What we see here are the two measures, osc_mean and osc_max, plotted against the number of degrees of freedom of the mesh. Let's start on the left-hand side with the mean oscillations. The blue line is the standard Galerkin method, i.e. the standard DG method without any post-processing. If we apply LinTriaReco, the linear reconstruction, we certainly reduce the mean oscillations — but not as much as with the constant jump norm. Even though it is quite a simple method, it reduces the mean oscillations significantly compared to Galerkin and also compared to LinTriaReco. If we look at osc_max, however, LinTriaReco even increases the maximal occurring oscillations: we reduce the mean oscillations, but at the price of introducing even larger maximal oscillations — we will also see this in a picture in a moment. So with respect to the maximal oscillations, LinTriaReco may not be the best choice. The constant jump norm, on the other hand, also improves the situation significantly here: from being about 30% off, we are now only about 10% off, which is much better on the one hand — although, on the other hand, we are still not able to remove the oscillations completely, unfortunately. This was for P1 — I forgot to mention, these are affine basis functions. We also have higher degrees, for instance P4, and the situation looks similar: in the mean, LinTriaReco and the constant jump norm reduce the oscillations significantly, with the constant jump norm clearly better than LinTriaReco; and for osc_max, LinTriaReco again introduces even larger maximal oscillations, while the constant jump norm decreases the oscillations significantly also in this maximum measure — here we are about 60% off, and there only about 4 to 5% off, which is much better. And so that you have an idea of how this looks: you see the standard DG method without post-processing, with its over- and undershoots, and here you see the LinTriaReco solution. I am not sure whether you can see it, but if we look at these overshoots, most of them are detected and also corrected; here, however, there are some small peaks, and this is exactly where we unfortunately introduce larger maximal oscillations — as the measures suggested, the mean oscillations are reduced at the price of even larger maximal ones. The second solution looks much better: as I said, we are only about 4% off. It detects the regions with spurious oscillations and changes the solution there to the constant value, the integral mean; this is how the solution looks, and compared to the standard Galerkin method it is, as I guess you can see, simply much better — as our measures also suggested. And that's basically it. Let me briefly summarize: we saw that the standard DG method may show large spurious oscillations for convection-dominated convection-diffusion equations, and we saw two slope-limiting techniques that are a cheap way to reduce these oscillations significantly — but even though we reduce them, none of the methods is able to remove them completely.
And in the future, we want to have a look at maybe a parameter studies because some of these limiters also depend on parameters. And yeah, there we can like have a look if we can further tune these methods. And we want to also investigate what happens if we combine several of these methods, if this brings us better results, maybe. So with this, I'd like to thank you all for your attention. And I'm happy to take questions. Thank you, we have time for a few questions. Anyone? Thank you. I have a short question. This this velocity field for your example. This is not constant because they had to go around this circle or something like that because we were not able to do it. How is this constructed? Like, like the the convection term. So this B is a constant value of just one zero. So it's basically what happens at the circle, by the way. So but then it's not divergent free to go somewhere around the circle. Yeah. Okay. So it's good. Yeah. But then it's, it goes there and then how it goes to be left to right only by diffusion. Yeah, to the to the top and and bottom goes only to the only due to the diffusion. Exactly. To the to the top and bottom edge, the concentrations, but only due to diffusion. Yeah, okay, maybe it's stationary. The flow goes through the obstacle. I'm dependent on it goes from left to right and then it's a different story. The main advantage of this continuous galactic now to go to higher order, it's easier than to as for other methods. And if you go to a higher order, what about the slope limiter in that case? What do you mean? I like the slope limiters work like for P one, but also for P two P five, no, even higher you go to six, seven order. So we investigated the slope limiter until four. So of order one, two, three and four, and the results are all similar. So constant norm reduces the the oscillations. Yeah, pretty much. And Lin Trierico also not that good. So it introduces larger oscillations, but all these methods work also for higher orders. Yeah. Okay, thank you. Does this answer your question? Okay. Okay. Anyone else? Maybe I have one. Sorry. Okay. Then you go first. So how does this post fix solution affect conservation properties of your scheme? And how well does one understand the origin of these oscillation? I mean, there's related to a spectral representation. It's a Gibbs phenomenon as a host. So you cannot represent it is continuous solution in these space functions. But like fixing it after the fact, is that the best approach or does it actually preserve the properties you would like to have like conservation of Mars and so on? This is this is this is preserved. Okay, guaranteed by this method. So by construction, the mass is conserved. Yes. So we just we just lose some like order of approximation in this discontinuous regions. So this is basically what we lose, of course, because we we just clip the polynomial degree in this in this regions, but the masses preserve. Okay, so if you have no more questions, then I think we're done with this one. Okay. Thank you, Derek. Once more. Thank you.
|
Approximating the solution of convection-diffusion equations in the convection-dominated regime by standard methods usually leads, on affordable grids, to unphysical values, so-called spurious oscillations. Standard discontinuous Galerkin methods are known on the one hand to produce sharp layers, but on the other hand they are also not able to prevent the pollution of the solution. This talk introduces a post-processing method that uses so-called slope limiters to automatically detect regions where the solution is polluted and to correct the solution in these regions. Several slope limiting techniques are presented and tested on two standard benchmark problems.
|
10.5446/57507 (DOI)
|
Okay, welcome back to the second part of the second session. Our next speaker is up next. Your slides? Yeah. I don't know, the slides. Okay. Okay, so, well, you see the title, and there you go. So, yesterday we already heard about mean field theory, which has been used in astrophysics at least since the 1960s, and not just for galactic magnetic fields, which is a more recent application; it was originally used in the context of the sun and stellar activity. If we look at the sun, then what we see in the upper left is the sun in visible light, which is what you see from the earth through a telescope. So you see a rather boring orange disk, but there are some dark spots on it, and those are the sunspots. They have been known since the 17th century at least, and have also been frequently observed in other spectral ranges. If you send a telescope up into orbit, then you can also observe the sun in the ultraviolet, and then you see it's not a boring disk at all anymore; there is all sorts of stuff going on, all caused by the solar magnetic field, which punches through the surface in some places, mostly at the sunspots, as we can see in the lower left picture, where the white and black specks basically denote the different polarities of the magnetic field, and we can see in particular that the magnetic field is strong where the sunspots appear. Now, why are the spots interesting? Because they have been observed for so long, so they give us a trace of how the solar magnetic field has evolved in time, and in particular in the last 140 years or so there have been very frequent and systematic observations. The lower graph here shows us that the area covered by sunspots, which is essentially a measure of the number of sunspots, varies with time in a cyclic way. There's roughly an 11-year rhythm: sometimes you've got a lot of sunspots and then you don't have any. The upper part of that graph also shows the position of the sunspots, and here we see a certain pattern, after which this is named the butterfly diagram, a very famous graph: the sunspots appear at higher latitudes at the beginning of the cycle and at low latitudes basically at the end of the cycle. Then we have a gap and the next cycle begins. What you can't see here is that the polarity of the magnetic field switches between two of these cycles, so in reality two of these butterflies constitute one total cycle of the magnetic field. Another peculiar thing about the sun is its differential rotation: the equator rotates with a significantly shorter period than the poles, so when the equator does four rotations, the polar caps do only three in the same time. That is a pattern you find in the outer 30% of the sun, which is the convection zone, the layer in the sun where energy transport is done by convection. So you have rather different conditions there, and these are of course thought to be the source of the magnetic field. For stars we don't have such detailed observations. This is an example of a technique called Doppler-Zeeman imaging: these are basically images that have been inferred from changes in the spectrum of the star, which is a fairly complicated and not at all straightforward method. 
And what we see here is a completely different pattern: we see very few large spots which are distributed all over the star. But this is of course also a different type of star than the sun. This is what you usually see when you get this type of image from stars, so things can be quite different there; it doesn't look axisymmetric at all, for instance. What can be done more easily, and has been done over a longer period of time, is to monitor stellar activity, because there's a proxy for it in the spectrum of the star: the emission cores in the calcium II H and K lines, as they are called. It's a feature in the spectrum of the star, and from the size of that feature you can essentially infer the strength of its magnetic field. And here we also see, okay, in the upper right the sun for comparison, and we see all sorts of types: some show cyclic activity and some don't. So that's basically what we know about the sun and stars. Now, for the sun, the first model that was made to explain the magnetic field of the sun and the butterfly diagram and that sort of thing is a so-called alpha-omega dynamo, which is an interplay between helicity in the convective gas motions and the large-scale differential rotation. The differential rotation basically winds up any poloidal magnetic field into a toroidal field around the equator, and that toroidal field can then again create a new poloidal field when there is helicity in the gas motions. There are problems with that when you apply it to the sun, which we will come back to later, but it's the most famous dynamo model, and it usually creates axisymmetric fields which also oscillate. So you get some kind of butterfly diagram in time, not necessarily the same as for the sun, but in principle you can explain a lot of the things that we see on the sun. The left one is a so-called alpha-squared dynamo; that's driven by helicity in the gas motions alone, so now both the toroidal and the poloidal field are created by the helicity of the gas, the so-called alpha effect. This dynamo tends to create a different type of field geometry: in this case we usually get non-axisymmetric fields which are stationary. These are the most famous types of dynamo that can be applied to stars. Unfortunately, they are kind of hard to compare to observations in the case of stars; we have a lot of detailed observations for the sun, but not for other stars, and that's not so easy. Then there's the so-called flux transport dynamo, where there's also some kind of alpha effect, the differential rotation still plays an important role, and there is basically one large advection cell per hemisphere in the convection zone of the sun, which then creates this tilt of the wings in the butterfly diagram. This is one I did a long time ago; more recent ones look much nicer, but they have also had much more fine tuning applied, which is the downside of that model. So these are some generic types of mean field dynamos. You can also apply mean field theory to angular momentum transport, which is a bit less well known, so I write down the equations for this. In that case we have a conservation equation for angular momentum; this is already the averaged equation, and we are axisymmetric here. The flux of angular momentum has two components: we have an advection term, and we have the Reynolds stress, that's the second term there, Q r phi and Q theta phi. 
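For reference, the mean-field induction equation behind the dynamo types just mentioned can be sketched as follows (my notation, not the speaker's slides):

\[
\partial_t \bar{B} \;=\; \nabla \times \left( \bar{u} \times \bar{B} \;+\; \alpha \bar{B} \;-\; \eta_T\, \nabla \times \bar{B} \right),
\]

where \(\bar{u}\) contains the differential rotation (the Omega effect) and the meridional flow, \(\alpha\) parametrizes the net effect of the helical convective motions, and \(\eta_T\) is the turbulent magnetic diffusivity. In an alpha-Omega dynamo the toroidal field is generated mainly by the shear in \(\bar{u}\) and the poloidal field by the alpha term; in an alpha-squared dynamo both parts are generated by the alpha term.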
This has been written out here at the bottom. This is a typical result from the analytical work that has been done on this kind of phenomenon in Potsdam, starting in the 1960s I think, and continuing through the early 2000s. So we have rather elaborate expressions for this, depending on certain stellar parameters and of course the rotation rate. What's important here is that if you look at Q r phi, for instance, the first two terms are just viscosity; that's the good old turbulent viscosity, a very old concept. But since we have stratification and rotation, we are in a rotating, stratified convection zone, the Coriolis force actually leads to the creation of a term that's proportional to the angular velocity Omega itself. That means you have a non-vanishing stress even for rigid rotation, so you can't have solid body rotation when that term is non-zero, and that will of course create differential rotation within the convection zone of the star or the sun. We have applied this to the sun, of course, and the result is here, and it looks pretty good: in this case mean field theory is really very good, there is good agreement between the theory and what's observed in the sun. The same model also gives us the meridional flow, which is what is supposed to do the advection in the dynamo. There we don't really have reliable observations; we know it on the surface, roughly 10 to 20 meters per second, which is very slow, because the convective velocities are so much higher there, so these flows are kind of hard to observe. And there's also still some controversy about the geometry: this is the one-cell geometry, which is what our model produces, but some observers also claim, ah no, there are two cells, or the return flow is in the middle of the convection zone, and so on. So the jury is still out on that one, but this is what we find. Yeah, and then of course one can do the same for stars. What's plotted here is basically, one has to explain that, the difference between the fastest and the slowest rotation on the stellar surface, in angular velocity again, as predicted by our model as a function of the stellar type. On the zero-age main sequence the effective temperature is essentially a measure of the mass or of the luminosity of the star, so to the right we have high temperatures, which also means very shallow convection zones, and on the left we have low-mass stars with deep convection zones, and we see this dependence on the effective temperature. There are of course other parameters: if you vary the rotation rate or the age of the star, then we will also see a bit of change there. So that's what we find for the models, and the green dots, that's data from the Kepler satellite, which has produced hundreds of thousands of light curves, so a very, very large number of stars, which was a huge step ahead, because data on the differential rotation of stars was very scarce 20 years ago. This has been derived from light curves under the assumption that the variation in luminosity is caused by spots; spots at different latitudes will then rotate with different periods, and from that you can derive a differential rotation. So the agreement is fairly good, although the scatter in the data is of course very large, one has to admit. 
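For reference, the structure of the Reynolds stress just described, turbulent viscosity plus the non-diffusive Lambda term proportional to Omega itself, can be sketched schematically (my own notation, not the exact expressions on the slide) together with the mean-field angular momentum balance as

\[
\frac{\partial}{\partial t}\!\left(\rho\, r^2 \sin^2\!\theta\, \Omega\right)
+ \nabla\cdot\!\left( r\sin\theta \left[ \rho\, r\sin\theta\, \Omega\, \bar{u}^{\mathrm m} + \rho\, \mathbf{Q} \right] \right) = 0,
\]

\[
Q_{r\phi} = -\nu_T\, r\sin\theta\, \frac{\partial \Omega}{\partial r} + \Lambda_V\, \nu_T\, \Omega \sin\theta, \qquad
Q_{\theta\phi} = -\nu_T\, \sin\theta\, \frac{\partial \Omega}{\partial \theta} + \Lambda_H\, \nu_T\, \Omega \cos\theta .
\]

The Lambda terms do not vanish for rigid rotation, which is exactly why they drive differential rotation.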
But at least it doesn't contradict the model. So also on that point, mean field theory seems to be doing very well, even with these relatively simple analytical models which are based on rather strong approximations. There is some controversy about so-called anti-solar differential rotation, so stars where the polar caps rotate faster than the equator. To my knowledge there are no really confirmed detections, but there are some candidates. Our mean field models have never produced that, so if that exists, then apparently it's something the model doesn't do well. There have been direct numerical simulations which find anti-solar differential rotation for slow rotation; that's a trend in these simulations, I'll come back to it later, and it is also an issue with those simulations. Yeah, so what we did to investigate this further was: okay, let's start and do some box simulations, and then investigate whether maybe the mean field expressions for the Reynolds stress are wrong for slow rotation, and we can confirm that. So we use a Cartesian box. The code is a Godunov-type finite volume code developed by Udo Ziegler in Potsdam in our group; it solves the equations of compressible MHD with adaptive mesh refinement. The code is more aimed at interstellar gas and that sort of thing, but it works quite well in this context of a convection zone. We use a piecewise polytropic setup and so on, and we've got gravity basically along what's drawn as the long side here, so the vertical direction in the picture. Okay, this is the mathematical formulation; this is how density, temperature and pressure look as functions of height, and the Rayleigh number is an important input parameter, which is usually 10 to the seventh. Ideal gas and so on, constant gravity; that's the setup for the simulations. What we then get, if we run that for a while, is a flow pattern that looks like this. This will of course look familiar to many people, it is essentially Bénard convection, but you see some extra structure at the borders of the convection cells that's caused by rotation. Our box rotates, and what you see here essentially is that the light colored parts are upwellings: you've got slow upwelling flow in the middle of the convection cell and fast downward flow at the borders of the convection cells, and the flow there also converges. That is of course important: we have a strong asymmetry between upward and downward motions, and that's why you can have net transport effects. If you average, then of course the mass transport must be zero, but the velocity doesn't have to be, for instance; and the alpha effect for the dynamo, for instance, also lives on this asymmetry between up and down flows. Yeah, so this is what the velocities look like in this setup. We have an anisotropy between the radial or x direction, which is where the stratification is and where gravity points, and in this case also the rotation axis, and the horizontal velocity components, which have just half the amplitude or something like that. And then you can do a parameter study, do a lot of runs, and compute the Reynolds stress from that. The left shows the radial transport of angular momentum, which is always negative, as you see here. 
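For a parameter study like this, extracting the Reynolds stress from the simulation data essentially amounts to correlating velocity fluctuations. The following minimal Python sketch (my own illustration on plain NumPy arrays, not tied to the actual NIRVANA output format or post-processing) computes the stress components as a function of depth from a set of snapshots.

import numpy as np

def reynolds_stress(u):
    # u: velocity field of shape (3, nt, nx, ny, nz), with x the direction
    # of gravity/stratification; y and z are the horizontal directions.
    # Returns Q[i, j, x] = <u_i' u_j'>, averaged over time and horizontally.
    mean = u.mean(axis=(1, 3, 4), keepdims=True)   # mean flow as a function of depth
    fluct = u - mean                               # convective fluctuations
    n_samples = u.shape[1] * u.shape[3] * u.shape[4]
    # correlate all component pairs; the result has shape (3, 3, nx)
    return np.einsum('itxyz,jtxyz->ijx', fluct, fluct) / n_samples

# tiny synthetic example: random numbers standing in for simulation snapshots
rng = np.random.default_rng(0)
u = rng.normal(size=(3, 4, 16, 8, 8))
q = reynolds_stress(u)
print(q.shape)          # (3, 3, 16)
print(q[0, 2].mean())   # e.g. the vertical-horizontal correlation

In spherical geometry the corresponding components Q r phi and Q theta phi are the ones that enter the angular momentum transport discussed above.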
But on the right, where we see the horizontal Reynolds stress, we actually have a case, Omega equal to three here, which is negative, and that really could create an anti-solar rotation. That has been illustrated in the next graph. On the left you basically see the case where this H1 coefficient is positive, and we have something that looks roughly like the rotation of the sun. But in the center and on the right, where we have negative or zero H1, we actually have a fast rotating pole. So if we really have a Reynolds stress with this kind of shape, then we will indeed have a fast rotating polar cap and a slow equator. That's what we find with the box simulations. The next step is to do the same in spherical geometry. Box simulations work nicely, but they have of course one downside, and that is the rather artificial boundaries in the horizontal directions, because they are supposed to represent some piece of the spherical convection zone, and then of course the boundaries are somewhat awkward. If you do it in spherical coordinates, then the boundaries are more in line with the global geometry. The code we use for that is also NIRVANA, which can be run in spherical and cylindrical polar coordinates as well. The trick is, and that's why it's called a wedge, that you don't do the full sphere, but only some range in latitude and in longitude. In principle you could do the whole circle in longitude, but there are of course difficulties close to the rotation axis, and that's why one usually prefers to exclude that region. This is work in progress; I haven't done extensive analysis with it yet, but one thing one can illustrate quite nicely is how the convection pattern changes with the rotation rate. The upper left is no rotation, or extremely slow rotation, and then you have somewhat faster rotation in the upper right, which still looks fairly similar, although there's already some elongation of the convection cells. In the bottom left you then see cylindrical convection rolls at low latitudes and a completely changed pattern in the polar caps, and that is even stronger in the lower right, which is even faster rotation. So this nicely shows how global rotation affects convection in these stellar convection zones. Alternatively, one can try global simulations. That needs a different approach numerically, usually some pseudo-spectral code, so I tried the Rayleigh code, which is freely available: a pseudo-spectral code in spherical shell geometry that uses the so-called anelastic approximation, which is somewhat tricky. It allows for stratification, but not for sound waves; it's a very special approximation which is popular in the theory of stellar convection zones. And okay, if you take a solar model, put it into this code and run it, then you find something like this, the closest I could get to the real sun as far as the parameters go, with the same rotation rate as the sun. You again get a nice velocity pattern there, and a very complicated magnetic field, which is however somewhat dipolar: you see that in the northern hemisphere it's predominantly red and in the southern hemisphere predominantly blue, so there is at least a dipole component in there. This also shows up in the azimuthal averages on the right: the magnetic field at the bottom looks fairly dipolar, with two toroidal field belts. This looks actually quite nice. 
But the differential rotation is of course not correct: we indeed have anti-solar differential rotation, but for the sun, which is a bit weird. It's a very nice meridional flow, though, one has to say. And there is a butterfly diagram that isn't really a butterfly diagram either, if you look at the magnetic field. One can fix the differential rotation by just letting it rotate faster; then, as you see here, we indeed have a fast rotating equator. But the nice meridional flow is gone, and the magnetic field also looks a bit weird; it's not a nice dipole anymore. So there's a bit of a conundrum there. This is of course known in the community, I'm not the first to discover it, but there is something weird with these global simulations so far, and people are still wondering why; probably it's a lack of resolution. The first issue with these models of simulating stellar convection is that the Rayleigh numbers are way too small compared to real stellar convection zones: real stellar convection zones have something like 10 to the 20th, and in simulations you can do maybe 10 to the seventh, which is a huge gap, so one is probably lacking resolution. Another thing is that there is a layer at the top of the solar convection zone, the so-called subsurface shear layer, which has strong differential rotation and is somewhat different from the rest of the convection zone: the stratification is very strong there, and the Mach number of the gas flows isn't small anymore, so the anelastic approximation breaks down and you can't really do that layer with this kind of code. So maybe that's an issue as well, one doesn't know. This is an ongoing problem. Oh, and also for this faster rotation you get something like a butterfly diagram. It's not the solar butterfly diagram, but at least you do get an oscillating field; you could argue it's closer to what the sun really does. But again, there is still a long way to go before we really get what we see on the sun. And with that I can conclude, with just one remark: maybe these wedge-shaped setups are the way forward. The problem with the spectral codes is that you don't really reach very high resolutions, because the computing time goes up and they don't scale very well anymore on really massive clusters. So we will have to see what the future is there, but that's where we are right now with modeling stellar convection zones and with attempts to go beyond mean field theory. Thanks. Thank you very much. Questions? Anyone? Thank you. Did you try to do some comparisons between the NIRVANA simulations and the Rayleigh simulations, for example, to see the effect of compressibility compared to the anelastic approximation? I haven't got around to doing that; this is fairly new work in progress. Okay, anyone else? If not, okay, then thank you once more.
|
Mean field magnetohydrodynamics is a theoretical framework that uses averaged versions of the induction equation and the equation of motion to model large-scale gas flows and the generation of large-scale magnetic fields in astrophysical bodies. This method is computationally cheap and has been used with some success in astrophysics but requires a theory of the effect of the small scale gas motions on the large scale motions and magnetic field. More recently, advances in high performance computing have made direct numerical simulations feasible. We show results from both approaches.
|
10.5446/57509 (DOI)
|
So, yeah, thanks for the introduction. So now we're diving a bit deeper into the power grid topic, but don't worry, for those of you who are not familiar with it I tried to make it as understandable as possible. So: a normal form for grid-forming actors. In the first part of the talk I will introduce what we're actually talking about: what do we mean by grid-forming control, and why do we need it. In the classical power grid we have the situation that there are these large synchronous generators that synchronize themselves, and the dynamics is given by the electromechanics of these machines and their large rotating masses. When we introduce more and more renewable energy sources into the grid, the situation is a bit different, because these renewable energy sources are coupled to the grid via inverters, here for example a wind park. The wind turbine produces AC power, but this is not directly synchronized with the grid as with conventional synchronous machines; rather, it is converted to DC power and then from DC back to AC, with the correct voltage amplitude and the correct frequency. And how this AC oscillation is generated is determined by how the inverter is operated. The inverter is just a power electronic device, so there is no mechanical physics as in the synchronous machine, and any dynamics that happens there can in principle be programmed into the control scheme of that inverter. So we have a certain freedom here in what dynamics we want to impose, and for now most of the renewable energy sources that are connected to the grid are in a so-called grid-following control mode. So here's our grid, and what the grid-following control basically does is this: we typically have a so-called phase-locked loop algorithm that just measures the phase and the frequency of the grid, and the control then just pushes the active power we produce with our wind park into the grid with this specific phase that we measure in the grid. This is how most renewable energy connections work today. As long as we have synchronous machines, this is totally fine, because the grid-following inverters can follow the signals from the large generators that are still in the grid. But as we remove more and more of these conventional machines and close down coal power plants and so on, we have the problem that a grid with only grid-following inverters is not stable by itself, because what signal should they be following? And this is why we need grid-forming control schemes, and this is also what we see in practice. Here is the open letter of the four big transmission grid operators in Germany, and they wrote this statement, which is entitled "Need for grid-forming control concepts". They say, I quote: a stable system operation is only possible up to a certain ratio of today's grid-following converters to synchronous machines; with the aid of grid-forming control concepts it was demonstrated that operation with up to 100% power-electronics-based generation is possible; through a successive introduction of grid-forming control concepts, the stability issues mentioned above can be reduced accordingly. 
And for these reasons the German transmission system operators see a compelling necessity that all new converters connected to the transmission system are equipped with grid-forming control concepts. So the people who have to handle the stability issues in practice every day are now really saying: okay, we need this, and we need it as soon as possible. So yes, as I said, in principle you're free to design any control scheme you want with these power electronic devices, and there are many proposed control schemes in the literature. For example, droop control schemes that impose a linear dependence between active power and frequency and between reactive power and voltage amplitude. There is the concept of so-called virtual synchronous machines, which try to emulate the dynamics of a synchronous machine in every detail and model the magnetic fluxes with really high order dynamic models, just to mimic the behavior of a conventional generator. And there's also the concept of virtual oscillator control, where you take your knowledge of oscillator networks and of their self-synchronizing behavior and use, for example, a Stuart-Landau oscillator as the node dynamics. What I want to stress with this is that these grid-forming control schemes can be quite different; there is a large variety of things you could do. The only goal they all have in common is that they want to regulate the voltage amplitude to a constant level in the operating state and to have a phase angle frequency rotating at 50 Hertz. So the question now is: if we put a lot of these grid-forming control schemes, very distinct types of them, together in a grid, and we may not even have knowledge of the detailed models or parameters of the individual power electronic devices, how can we study such systems, how can we understand their collective behavior, given that they are interacting on the grid? Right. So this is the reason why we came up with a normal form. This is now the second part, and it's getting a bit more mathematical, but I will try to walk you through it slowly. The goal of the normal form is to derive a model that is able to represent, or approximate, the dynamical behavior of any type of grid-forming actor. So we really try to think from first principles: what are the things that these devices all have in common, without caring too much about the details of the individual control schemes, just thinking about the general properties. So, okay, what are the fundamental variables? The fundamental variables in a power grid are always voltage and current, and we model them as complex variables. I mentioned that in real power grids we usually have three-phase grids with three phases that are shifted by 120 degrees. It's only possible to represent such a three-phase grid by a single complex value if the phases are balanced, which is usually the case; so as long as we don't break the symmetry of this balance, by a phase-to-phase short circuit for example, we can map this three-phase dynamics to a complex variable. So this is the variable we use, and then a power grid is just a network, and the coupling on the network happens via the currents and the power flow. 
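To give one concrete example of the control families mentioned above, the droop idea can be written as a small dynamical system. The following Python sketch is a toy single-node model with made-up per-unit parameters (the gains, filter time constant, set points and the power step are all my assumptions, not any real inverter's settings): the frequency is lowered when the filtered active power exceeds its set point, and the voltage amplitude reacts to the reactive power in the same way.

import numpy as np
from scipy.integrate import solve_ivp

OMEGA0 = 2 * np.pi * 50.0   # nominal grid frequency in rad/s
K_P, K_Q = 0.05, 0.05       # droop gains (frequency vs. active power, voltage vs. reactive power)
TAU = 0.5                   # low-pass filter time constant for the power measurement in s
P_SET, Q_SET, V_SET = 1.0, 0.0, 1.0   # per-unit set points

def measured_power(t):
    # Stand-in for the power the inverter sees on the grid: a step at t = 1 s.
    return (1.2 if t > 1.0 else 1.0), 0.1

def droop_rhs(t, state):
    # State: phase angle theta, filtered active power p, filtered reactive power q.
    theta, p, q = state
    p_meas, q_meas = measured_power(t)
    dp = (p_meas - p) / TAU                 # filtered power measurements
    dq = (q_meas - q) / TAU
    dtheta = OMEGA0 - K_P * (p - P_SET)     # frequency droop
    return [dtheta, dp, dq]

sol = solve_ivp(droop_rhs, (0.0, 5.0), [0.0, P_SET, Q_SET], max_step=0.01)
theta, p, q = sol.y
voltage = V_SET - K_Q * (q - Q_SET)         # voltage droop, algebraic in this toy model
print(theta[-1], voltage[-1])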
And here, just to keep it simple, we assume that these flows are static. The current at a node k is just the sum over all incoming lines and is basically given by the voltage differences on these lines, so we define a nodal admittance matrix; that's fairly standard. This matrix captures the graph structure of the power grid, and the flow is given by the voltage differences on the lines. In principle we could also allow for electromagnetic transients on the lines; then we would have differential equations for the currents, but to keep it simple we assume that everything is static here as well. The assumption we now make is that this physical connection via the transmission system is the only connection that the actors in the grid have, so there is no additional communication layer that couples the dynamics at different nodes; this is one of our assumptions. Then, if the dynamics is smooth enough, we can write it down as an ordinary differential equation: here u is a complex voltage, so it's a differential equation in complex space. It may depend on some internal states, so x is a vector of scalar variables, on u and its complex conjugate, as well as on the currents that come in via the lines and their complex conjugates. This is the most general form you can write down if we assume that there is only coupling via the currents and that the dynamics is smooth. Now let's look at the operating state, because this is what the control system is trying to attain. In the operating state we want the voltage amplitude to be constant, which is why the derivative of V has to be zero, and we want the phase angle derivative to be equal to the nominal grid frequency, so 50 Hertz. That's the condition for our operating state, and the internal variables should have no dynamics in the operating state. What we see here is that this operating state possesses a certain symmetry, a U(1) symmetry with respect to the phase. We want to use this in the following, and the fundamental idea for the normal form is that if we are close enough to this operating state, and if our control system is working well enough to keep us in this vicinity of the limit cycle, we can assume that the dynamics also possesses this symmetry; and for the design of a control system it wouldn't make too much sense anyway to design a control that is not homogeneous in the phase, so we assume that the dynamics itself also possesses the symmetry. Now we choose invariant coordinates, so coordinates that are invariant under the symmetry, and we choose variables that are physically meaningful, which are the voltage amplitude, the active power and the reactive power. Just as a remark, it would also be possible to pick other combinations of coordinates here; this is something we still have to figure out, what else we could choose and what we could do with it. So now, going back to the symmetry condition: we assumed we know what the dependence on the complex voltage is, and if we impose this symmetry, then the only way the voltage dynamics can depend on u is u times some function that does not depend on the complex voltage itself. 
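In formulas, the argument just made can be sketched as follows (my own notation, since the slides are not reproduced here). The most general smooth node dynamics with purely electrical coupling is

\[
\dot u = f(u, \bar u, x, i, \bar i), \qquad \dot x = g(u, \bar u, x, i, \bar i),
\]

and the operating state is characterized by \(|u| = \text{const}\), \(\tfrac{d}{dt}\arg u = \omega_0\) and \(\dot x = 0\). If the dynamics is invariant under a global phase shift \(u \mapsto e^{i\varphi}u\), \(i \mapsto e^{i\varphi} i\), then the voltage equation must take the form

\[
\dot u = u\, h(\nu, p, q, x),
\]

with phase-invariant quantities such as \(\nu = |u|^2\), the active power \(p = \operatorname{Re}(u\bar i)\) and the reactive power \(q = \operatorname{Im}(u\bar i)\).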
So now we are at the point where we want to do approximations. We assume that we are in the vicinity of the limit cycle of the operating state, and we do a Taylor expansion: we have a vector of invariant variables, and we expand in these variables up to first order. The basis of this assumption is that the control system keeps the system close enough to the operating state; then this is valid. In the end we arrive at what we call the normal form. The specific thing here is the dependence on the complex voltage, which we can write as u-dot over u, and the right-hand side just depends on these invariant coordinates, which are defined up here. This is the most general form, and the A, B, C, G, H are parameters and parameter matrices. So in principle there is a large parameter space, and we have already started to investigate it and to find conditions under which the stability of the limit cycle is guaranteed and things like that, and we are also starting to do numerical experiments to study the stability conditions. In any case, we now have a model which is fairly general, and we can start to investigate this parameter space and analyze what's actually going on there; this is work in progress. That was all for the equations, and now we come back to nice plots, because the last part is on data-driven modeling. It turns out that this fundamental structure of the normal form, which captures the underlying symmetry that we have in all these grid-forming control schemes, can also be used for a data-driven modeling approach. This is a bit different from what I presented in the morning, because that was a pure black-box approach that didn't assume any model; now we have this normal form, which gives us some information, namely that because of the symmetry we have a certain structure, but we are still trying to learn the parameters. So the goal is to build a grey-box model by fitting the normal form I just showed you to lab measurements of a grid-forming inverter. The microgrid setup is as follows: we have an AC source at bus one, then there's a line connection, which in reality is emulated by a series of resistances and inductances, and the grid-forming voltage source inverter at the bottom right. It is connected by a transformer to the grid, and there's also a load. I also put in a photo of the lab; you see it's actually not that interesting, because everything is just hidden in these cabinets, and the grid-forming inverter is actually in this black box standing there, but it's not too interesting because you don't see that much.
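For reference in what follows, the normal form that will be fitted to these measurements has, schematically and in my own symbols rather than the slide's, the structure

\[
\frac{\dot u}{u} = A^u + B^u x + C^u\,\delta\nu + G^u\,\delta p + H^u\,\delta q, \qquad
\dot x = A^x + B^x x + C^x\,\delta\nu + G^x\,\delta p + H^x\,\delta q,
\]

where \(\delta\nu\), \(\delta p\), \(\delta q\) are the deviations of the (squared) voltage amplitude, the active power and the reactive power from their operating-state values, \(x\) collects internal control states, and the \(A, B, C, G, H\) are the complex or matrix-valued parameters that are analyzed or fitted.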
And in the beginning, all the signals are constant because we're transformed the measurements in the in the co rotating frame of the synchronous and everything is constant. But then when we change the frequency at the AC source. The system is oscillating faster than than our reference frame and this is why we see these oscillations, and then we put these frequencies down again and do this a couple of times with a different slope and also we see here that there's actually happening some transient dynamics when when doing this. And this is what we also want to ideally capture when we fit our model. And this is shown in the last plot. Here you have with orange is is the normal form with the fitted parameters. So, if you're interested in how we do this we can do that and how we did the fitting we can speak about it in the question around. It's a bit more complicated, but we fitted this model to the data and orange is the model of lose the data and see that we actually have a quite nice fit here and but but we see also that in the frequency. So now we have some some overshoot transence here in the model that are not there in the data. And this is due to the fact that we just chose one internal state of the normal form, which mimics the frequency dynamics, but in reality there were many filters implemented that are dampening these oscillations. So, if we would increase the complexity of the normal form model by adding more internal variables we would probably also be able to capture these dynamics more precisely, but what is nice that in the in the in the voltage amplitude dynamics really see here that this transient oscillating dynamic behavior we're actually able to fit with the model. So this is pretty nice. And this is already the end of the talk. All that I've presented is actually in can be found in the paper, which is available on the archive and the moment in review and here is energy. Thank you for listening and I'm over questions. I would like to understand the properties of the non-formatization. Is it linear or non-linear? But when you show the linearization, it looks like linear. I should have mentioned this, that's kind of the clue about the sole thing. So we're choosing smart invariant variables to do this linearization. So it's linear, the right hand side is linear in these invariant coordinates. But yes, it's obviously non-linear in the voltage and current. So this is definitely a non-linear model and this is basically capturing the most important non-linearities around the limit cycle of the 50 hertz oscillation that still have this symmetry. So that's what it basically is. So I have many questions. I'm going to go over grid forming controls that you mentioned, they all would satisfy symmetries of course. Yeah. But not mostly in these particular normal forms also. So we, it's not, I mean, this normal form is just an approximation, right? Exactly. So yeah. I mean, in principle, we could also include higher order terms. But we're, I mean, I didn't show that but we also did this approximation for specific models for voltage oscillator controls and for drew controls and calculated this kind of approximation and the parameters. And if you compare the dynamics, it's fairly similar. So in, in, if the perturbations are not too large, this captures the dynamics very well. And none of this has to restrict the grid forming controls, right? We could have liked to, probably any device coming through a grid that should probably respect the grid. 
And that was kind of the, kind of the motivation to do it. Yeah, but that's something we want to find out also playing around with maybe finding other invariant variables that capture other components in the grid better and try to find out if we can extend it. And I was interesting, you know, a few decades ago, there was a literature study so that's using groups. And if you do, you know, you might have a, you know, yeah, the derivative of log of you. Yeah, just sort of raising all this should explain to you, which is just no one. Maybe there's some cool connections to be explored there. Yeah, exactly. I mean, if we, if we get rid of the, of the coupling terms, the P and the Q which are quite specific to power grids, it's also, you can also show that is basically a steward and oscillator dynamics. So this is because it stems from the same symmetry. So it's, yeah, it's fairly, fairly similar. So generally, generally, for a given kind of hardware converter, identify these parameters here, the normal form, then use the normal form for general theory and kind of study by applications in terms of the different parameters that show. And it's, it's a technology neutral way to capture all these things, right, I don't have to care about how the control goes in every detail I just can study the properties of the parameter space and say okay, in this regime, I'm stable and this I'm not, and this can be applied to any control that can be described this form. So that's the idea to have a more general approach to this thing. And now you don't have inside, say, which of your parameters, for example, are responsible related to these over shoots, which are probably the problems of these cascading problems that he has seen in the previous four and which kind of controller could avoid them in addition to. We have some, we have some idea if we do this normal form approximation of a detailed model of a certain inverter. In reality, most of the parameters are always zero because they just not all variables are connected dynamically to all. So in practice, the frequency dynamics is usually not coupled to the reactive power for example and certain parameters are always zero. And then you can also say, okay, this parameter has to be negative that it's stable. And yeah, and then then you can start thinking about whether it should have a certain range to, to avoid oscillating behavior or something, but this is something we're just starting to work on. There's no, there's some results on the paper but yeah, I also said kind of more say, or, or, the very beginning is that you want to build your controller on your local. Yeah. And before you show how the difficulties with the many consumers and produces in the media, networks need to much more like connected network time and the power of the network. So is this really what we will solve our problems is so the controllers will be actually need a network control, that information based control as I think most of the models. Yeah, so you can, you can argue about that. So, you always have to think about where we're coming from you're coming from a system that consists of synchronous machines that are kind of self synchronizing themselves on on low time scales there's also control schemes that regulate this on longer time scales. The thing is, do we really want to, if we have the chance to design a control structure that is the central and keeps this property that we already have. Shouldn't we do it. 
We instead had have to build a communication infrastructure to to link all these devices and that. So that's, which would be expensive and then you have to think about cybersecurity stuff and you have a yes so that's, that's the argument but there's also in micro groups, people are yeah the poster by Anna, who showed us showed these things about this second layer where you have a second control layer, but this is actually about a longer time scale so this is secondary control steam. So here we are on a very short timeframe of seconds, and to have this communication very efficient is a hard task so you actually want to have this teacher would argue. I mean, just to put a number on that number from at a conference from from somebody on the engineer side was that this is sort of reaction in the most extreme events that happened these reactions need to be fully bearing in the order of 10s or milliseconds so you don't have a couple of seconds to establish a consensus calculated nice response and then distributed out again. This, this is has to be this fast. There's other problems with communication which is one of the classic promises one if the pain that fails, you need to switch it on again. So which do you switch on first and grid or the communication infrastructure. It's a bit like you shouldn't maybe at some point it shouldn't rely on there should be a high value there. But I mean, it's all of these are, I think also in for debate and active debate. I know the UK implemented a very, very, very fast sort of hundreds of milliseconds fast regional fast frequency response control scheme where they really do table him. So, I think that's a sort of collective measurement from an area and try to react very fast. But haven't. I mean, I was that there was like three years ago that I, the last talk to people about it and then catch up during the pandemic on what that, how there is operating. I don't know, probably I know small about it. Okay, okay, so other questions. Yeah, seems to be here then.
|
Future power grids will be operating a large number of heterogeneous dynamical actors. Many of these will contribute to the fundamental dynamical stability of the system and play a central role in establishing the self-organized synchronous state that underlies energy transport through the grid. We derive a normal form for grid-forming components in power grids that allows analyzing the grid's systemic properties in a technology-neutral manner, without detailed component models. We provide a first experimental validation that this normal form can capture the behavior of complex grid-forming inverters without any knowledge of the underlying technology, and show that it can be used to make technology-independent statements on the stability of future grids.
|
10.5446/57510 (DOI)
|
Yes, so finally I have to introduce myself as a speaker. My name is Matthias Wolfrum, I'm working at the Weierstrass Institute, and I'm giving this final talk in the network session. After we have seen so many really applied problems, I decided, as a mathematician from an institute for applied mathematics, to go a little bit back to the basics, and what I'm going to show you in the next half an hour is a collection of some interesting results that we obtained during the last years, where the intuition that you might have from synchronization theory can fail under some specific circumstances. This is joint work with my colleague Oleg Omelchenko, who is now at Potsdam University, Svetlana Gurevich from Münster, Carlo Laing from New Zealand, and Jan Schrupp, who is actually in the audience. All of it is based on very simple models, which are not really related to real-life systems, but which I think can showcase interesting nonlinear phenomena that one should be aware of when dealing with coupled oscillators and synchronization problems in general. Okay, so to be very basic, I will give you at the beginning a brief introduction to the classical Kuramoto model; probably most of you know it very well, but anyhow, let's have a look. So this is Yoshiki Kuramoto, and this is his model that he wrote down in 1975. These are n oscillators which all have their own frequency omega sub k, and they are coupled via this sinusoidal interaction function with global coupling, so I sum over all oscillators in the coupling term, and I do that without any coupling weights at the moment. Since there is a minus sign in the coupling, this interaction acts attractively between the phases: all phases rotate with slightly different velocities, but the velocities attract each other, and so this attractive all-to-all coupling finally overcomes, so to say, the inhomogeneity of the frequencies, and at a critical coupling strength, as we can see, synchronization sets in. This picture is from the thermodynamic limit, which I will introduce later, and you see that the zero state here, which is complete incoherence, becomes unstable and we get a branch of partially synchronized states. This is the universal scenario for the onset of synchronization; people from statistical mechanics also call it a second order phase transition. Here I show you, just in the first line, that this model can actually be rewritten if you go to complex notation. Here I switch back from the electrical engineering notation to the mathematical notation, so I use again i for the imaginary unit, and j is an integer index and not, as in the previous talk, the imaginary unit. One can see that this global coupling, so the principle of the network where all nodes are connected with the same strength, can be replaced by a coupling to a mean field: instead of summing all these coupling terms, I first sum the exponentials of the phases to extract the complex order parameter, and then I couple my system just to the complex order parameter. This trick by Kuramoto is actually the starting point for a lot of nice analytical methods that you can use to treat this model, which you lose immediately when you go to more general interaction functions. 
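Since the slide formula itself is not reproduced in the transcript, the rewrite referred to here is, in standard notation,

\[
\dot\theta_k = \omega_k - \frac{K}{N}\sum_{j=1}^{N}\sin(\theta_k-\theta_j)
            = \omega_k + K\,\operatorname{Im}\!\left(Z\,e^{-i\theta_k}\right),
\qquad
Z = r\,e^{i\psi} = \frac{1}{N}\sum_{j=1}^{N} e^{i\theta_j},
\]

so each oscillator is coupled only to the complex order parameter \(Z\), whose modulus \(r\) measures the degree of synchrony.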
If you take a general interaction function instead of Kuramoto's sinusoidal coupling, which is so to say only the first Fourier component of such a function, then there is actually theoretical background telling you that any system of coupled oscillators can be written in this way in the limit of weak coupling; this is a kind of result from phase reduction theory for limit cycle oscillators. And if you moreover assume that the frequencies are fast compared to the terms that come from the interaction, then averaging theory tells you that you can write this interaction function as a function of the phase differences and not of the full phases. So now let us come to the non-universal transitions to synchrony in the Kuramoto-Sakaguchi model. Some years after this fundamental paper, Kuramoto, together with Sakaguchi, introduced this alpha parameter, which is a phase lag in the interaction function, and it turns out that all the nice machinery, all the analytical methods, work equally well; however, much more interesting dynamics can be observed. When the phase lag parameter is in modulus smaller than pi over 2, the coupling is still attractive, so you expect synchronization for large enough coupling, and the general assumption we always make is that the distribution g of omega, from which we draw these inhomogeneous natural frequencies, is just a unimodal distribution with a single distinct maximum, around which we expect the synchronization. And then it turns out, and this was surprising not only for us, that there are certain distributions g, which I will specify later, that are unimodal and for which this classical synchronization scenario, shown again in the top row of the figures, changes, and we see unexpected behavior in the sense that, for example, there are situations where, if you increase the attractive coupling, the synchrony may nevertheless decrease. Here the order parameter, for example, goes down even to zero, such that in some interval of K above the critical coupling, incoherence regains stability and is the only stable solution. But there are also interesting cases of coexistence: in the middle figure you see a case where stable incoherence coexists with a stable partially synchronized state, and in the lower row you see a case, which you will also see in an example later on, where you have the coexistence of two stable partially synchronized states. 
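These scenarios are easy to explore numerically. The following minimal Python sketch (my own illustration; the truncated Gaussian stand-in for g, the phase lag value and all numerical parameters are arbitrary choices, not the distributions used in the talk) simulates the Kuramoto-Sakaguchi model through its complex mean field and reports the time-averaged order parameter for a sweep of the coupling strength.

import numpy as np

def order_parameter(omega, K, alpha, t_end=200.0, dt=0.05, seed=1):
    # Euler integration of theta_k' = omega_k + K * Im(Z * exp(-i(theta_k + alpha))),
    # with Z the complex mean field; returns the time-averaged |Z| after a transient.
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=omega.size)
    n_steps = int(t_end / dt)
    r_values = []
    for step in range(n_steps):
        z = np.exp(1j * theta).mean()
        theta += dt * (omega + K * np.imag(z * np.exp(-1j * (theta + alpha))))
        if step > n_steps // 2:
            r_values.append(abs(z))
    return float(np.mean(r_values))

# a unimodal frequency distribution: truncated Gaussian as a simple stand-in
rng = np.random.default_rng(0)
omega = rng.normal(0.0, 1.0, size=2000)
omega = omega[np.abs(omega) < 2.5][:1500]

alpha = 1.3    # phase lag, in modulus below pi/2, so the coupling is still attractive
for K in (0.5, 1.0, 1.5, 2.0, 3.0):
    print(K, order_parameter(omega, K, alpha))

For the non-universal transitions, one would replace the stand-in distribution by, for example, a superposition of two Gaussians with different widths and sweep K up and down to look for hysteresis.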
So how can we understand that? Let's briefly walk through the theory. The first thing is that you think about a large number of oscillators, so we take the limit n to infinity. In this case you can consider a probability density, which tells you, at a given time moment t, how big the probability is to find an oscillator with natural frequency omega at the position theta, at the angle theta. For this probability density you can write down a continuity equation, and in this continuity equation you of course have to plug in a velocity; for this velocity I just take the right-hand side of the Kuramoto equation that I showed before. The only difference is that for the mean field I now have to plug in the continuum version, so what was formerly a summation over all the n oscillators is now just an integral over this density, and of course there is an obvious normalization such that I get out g if I integrate over theta. As I told you, there are nice analytical methods available for systems of Kuramoto type, and one of the most powerful tools is the Ott-Antonsen reduction, which was proposed by Ott and Antonsen in 2008 and 2009 in two papers in the journal Chaos. This tells you essentially that this unknown density f, for which I formulated the continuity equation, can be substantially simplified: namely, one can drop the dependence on the angle theta and represent this density profile with respect to theta just by a single complex number. Here is the formula for how that works, but I recommend you not to look at the formula but rather at the picture that I show here. The red graphs show these distributions with respect to the angle theta, so the horizontal axis is just the angle theta and this is f of theta. If z equals zero, then we just have the uniform distribution, or complete incoherence of the local ensemble of oscillators. If the modulus of z is one, then I get a delta function for the distribution, with the angular position of that delta function given by the argument of z. And if the modulus of z is somewhere between zero and one, then I get such a distribution which is something between the homogeneous distribution and the delta distribution. For the probabilists, there is recently also a nice theory by Denis Goldobin from Perm, who showed that this can be seen as a general principle of what he called circular cumulants, generalizing the concept of cumulants to distributions on the circle. Very nice. Okay, so here is the equation for the local order parameter, a complex quantity; the angle theta has disappeared, and I will not bother you with the details of this nice equation. Just let me draw your attention to the fact that the partially coherent states which we are interested in are actually given by rotating solutions with a uniform rotation frequency capital Omega and a fixed profile with respect to the natural frequency omega, so a profile a of omega. If we want to find a solution of this type, we have to find this collective frequency Omega and this profile a, which depends on the natural frequency omega. And it turns out this profile a has a universal form, which is given by that formula; it comes from just solving this problem, which is not very difficult. The main point is that it contains, so to say, two parameters: one parameter is 
this collective frequency capital Omega, and the other parameter is p, which tells you, in a sense, how much synchrony there is in the solution. Okay, so you can plug all that in, and you get a version of Kuramoto's self-consistency equation, in a somewhat more advanced form, in this first formula. This is an improper integral, but never mind; just take it as a bifurcation equation which relates the solution parameters p and Omega to the system parameters K, the coupling strength, and alpha, the phase lag. And now use that as a starting point for doing bifurcation analysis. There are two main points that you can immediately extract from it. The first is: if this p goes to zero, that means you go along your solution branch towards complete incoherence. In this formula there is still a factor of p hidden, but morally the limit of p tending to zero gives you the critical values for the onset of partial coherence, and this determinant condition, using the partial derivatives of h, can give you the folds of the bifurcating branch of partially coherent states. And stability should not be taken for granted without a theorem. In the first line I just wrote down the linearization of the Ott-Antonsen equation. This is an evolution equation, so you just have to linearize the right-hand side, and it of course contains these integral operators, so the linearization is not just a matrix, as it would be for ordinary differential equations; these are operators on Banach spaces. One finds that it comes in a very specific form: from the local dynamics you get a multiplication operator, and multiplication operators generate continuous spectrum, essential spectrum; and there is a compact integral operator, which has a very nice behavior, and it comes with the coupling term. The main message is that linearizations of this type can have two types of spectrum: point spectrum, which is similar to matrix spectrum, and continuous spectrum, which is rather not similar to matrix spectrum but maybe similar to things that people with a physics background know. There is a way to directly calculate the point spectrum; I will not go into the details of this, and instead I show you an example. Maybe look first at the graph on the right-hand side. This is now a kind of alternative version of the classical onset-of-synchronization scenario, with a subcritical instability. You see the black line at zero level, this is the state with complete incoherence; then you see, at roughly K equal to one, the so to say Kuramoto threshold, where complete incoherence loses stability and in principle there is an onset of partial coherence, but at that point this branch of partially coherent states bifurcates subcritically, and that means there is a range of coexistence of stable complete incoherence and a stable partially synchronized state. The purple inset graphs show the spectrum of the corresponding solutions: there are these lines, which lie on the axis or are T-shaped in some cases, this is the continuous spectrum, and the black dots, this is the point spectrum. On the left-hand side you see a two-parameter bifurcation diagram; the right-hand side diagram moves, so to say, along the K axis with alpha fixed at the value indicated here by the dashed line. The left diagram now gives you the same bifurcations in the two parameters: the red curve just shows where the zero solution, complete incoherence, changes stability, and the blue curve shows you the fold where 
the branch of partial coherence solution folds over for being same and that example was obtained just for a kind of the most starting from the most simple thing which is the Gaussian frequency this would be but just truncating the the tense what you could do if you do the calculations okay so that means already in this simple example of a truncated Gaussian you see non-standard synchronization of this subcritical time but only if you choose the alpha sufficiently large if you are with the alpha of those zero images so here comes a last example which is particularly interesting and this is just the superposition of two Gaussians with different widths so think about a mixed population one part of the population has a rather a frequency with a rather strong inhomogeneity which is the the the widths semi-frozen and another part of the population is actually has a rather small inhomogeneity of the frequencies and they are just coupled together in a global fashion and there you see now all these different scenarios that are already depicted on one of the first slides here you see the two parameter application diagram again with alpha and k while this is just the onset of synchrony or increase in k for different choices now of alpha so here you see again the this this reddish region with same incoherence and whenever a horizontal vertical line intersects twice then you have these in fact of incoherence we gain stability for increasing cognitive stress and then you have this new line of fold by application this is whenever the branch of partially coherent solutions forms whenever such a dash line intersects a blue line then by an et cetera full line here you see some qualitative explanations how one can understand this counterintuitive behavior that increasing coupling strength leads to less coherence and so in the uppermost figure you see kind of the normal scenario and then I don't know whether this is well visible but there's not only the black line which is the global order parameter but there is also uh single lines which indicate that all the parameters of the subpopulation and then you see that this is the single line at the current threshold which is relatively low k because we have this subpopulation with only the small energy needed that they already synchronize and stay synchronized all over and then somewhere in the region where you uh expect the synchronization threshold for this subpopulation also the other order parameters start to grow and ever see this much this is for small however for bigger alpha you see the following you see that whenever the blue population has already reached a nice level of synchrony and then the purple population kind of starts to increase in synchronization then it sort of pushes down the synchrony of the blue population and this is because due to the alpha parameter the frequency of the of the synchrony shifts with respect to the central frequency of the distribution so here you see these windows of synchronized frequencies so all oscillators these natural frequencies in the blue window are synchronized and they are getting more but at the same time the window gets shifted such that the peak of the blue population falls out of the window of synchrony and in this way kind of the synchronization of the purple guys suppresses can effectively suppress the synchronization of the blue guys and in this way the synchronization may break down again with increase I don't know how much time I do have but I think I will briefly also tell you something about spatial 
system this is now the same as the sacrocoge system that I showed before but now not this global coupling but this coupling weights GKJ which are just what some people call non-local coupling so we think about the oscillators being in the one array and we have a distance dependent coupling so next and second next neighbors are coupled to some oscillators but only with this time and distance and the coupling strength somehow the case resists and we have again inhomogeneous frequencies omega-j and what you see again is if you just look at the global order parameter that for small alpha you have seen something that reminds you to discuss particular relaxation scenario what if your feeling well is of alpha some strange things happen here you first see that somehow the maximum of you and the minimum of you with respect to space does not coincide anymore but only very slightly and here you see that there are big discrepancies but then you come back to a state which again looks more on this as it is for the classical so let me again skip the details and just tell you what happens there is of course again a thermo-structural so the zero-solution of stability but since this is now a spatial expansion system this is not just the usual also of synchronization but this is a kind of a purely light instability which is central wave number zero you get the egghouse-like scenario where solutions with different wave numbers emerge they are depicted here so this is just space this is the kind of the spatial oscillates and for the wave number zero you have just this kind of homogeneously certain partially synchronized state and for the other wave numbers you get these states which some people call twisted states and the thing that is an important influence of the alpha parameter namely instead of the classical egghouse scenario that you just get when a spatially homogeneous solution bifurcates you get here a scenario where an intermediate regime of chaos appears so in these two bifurcation diagrams you see again alpha versus k so similar as we had it before and you see that the blue region is where you have the same trivial solution but the light blue region is where you have to stay with wave solutions or twisted solutions but for large alpha there is this in-between region and in this in-between region you can see kind of a non-trivial collective phenomena which can be either amplitude chaos which you see here so this is extensive chaos where not only the faces fluctuate but also the amplitude so you see amplitude while in the other two pictures you just face chaos where the amplitude is on the other side so let me conclude with some general remarks of course in foundations fundamental collective behavior in a couple of oscillators systems and there can be many interesting phenomena that are observed already in these very simple programmable face oscillators systems which are valid in the recut in the machine. 
General oscillators state if you look at the discrepancy of this in my last ensemble this non-buckle coupling can generate qualitative new and interesting collective dynamics I have not talked today about chimeras states which are nothing something of both the new collective dynamics this will be and the properties of these face response function which in Kuramoto's case was just the sine and the Kuramoto-Zakabuchi case was the sine with the alpha inside are very important and in particular this alpha parameter that can be actually crucial for the dynamics and already some of this is kind of the main ingredient to see this on the film. Thank you for your attention. Yeah I'm ready to take your questions. It's a technical question so when we introduced the alpha parameter you used the gradient structure of the coupling so it's surprising that the eigenvalues stay real. Any any ideas? Yes I've seen the same in the final and the non-PD version of the Kuramoto model I can never explain that to myself. Yeah well they stay real until we reach the answer of some kind then we get these t-shaped spectra and the whole spectrum on the imaginary axis. Yes you wouldn't expect it right that we throw them also in terms of the breaks the gradient structure the eigenvalues stay real. Yeah yeah. Other questions? If you have alpha you observe the coexistence of two different actionless synchronized states that you have some back intuitive explanation for this what kind of states we have or difficult? Yes I mean the kind of explanation that I do have is just coming back to these windows of synchronization within the frequency distribution and that you can so you either have a small window which is not so far shifted or a larger window which is further shifted due to the alpha and those could coexist. But again I mean I see for me the main intuition is this suppression mechanism where kind of the way you shift and the peak outside of your synchronization window and this actually destabilizes the domain problems and gives rise to second. So the frequency on which the system synchronizes different problems that mean to bring it. Right. Have you ever looked at the noise also in your system? So here we just used the quench this order in the natural frequency and did not include time dependent noise. Some of the states can actually be done also for noise assistance actually the what Anderson gets more difficult but the phenomena are I would say mostly similar. Okay you. And there are more numerical based studies that you also have both like non-uniform frequencies. A question about Anderson so it somewhat parameterizes solutions with low complexity. Are you sure that you capture all solutions without ansatz or could it be that you missing some? Yes so the solution that's there are some equation does not describe the full set of solutions to the continuity equation but it somehow so this is a very meaningful so when we start we are missing many for this state there and in most cases it is at least locally stable that means it just starts in the label of this all out and so many for that sooner or later the solution will come to this. So this is this that means it should capture. So if there was a question, let me come back to my role as chairman again and close the session and take your coming in.
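As a concrete complement to the model discussed in this talk, the following minimal Python sketch integrates a finite ensemble of Kuramoto-Sakaguchi phase oscillators with Gaussian natural frequencies and records the time-averaged order parameter for a sweep of coupling strengths. All parameter values (ensemble size, phase lag alpha, frequency width, time step) are illustrative choices; reproducing the specific subcritical or reentrant scenarios of the talk would require the particular frequency distributions and phase lags used there.

# Finite-N Kuramoto-Sakaguchi sweep: time-averaged order parameter <R> vs coupling K.
# The mean-field form used below is equivalent to (K/N) * sum_j sin(theta_j - theta_i - alpha).
import numpy as np

def order_parameter(theta):
    """Global complex order parameter Z = R * exp(i*Phi)."""
    return np.mean(np.exp(1j * theta))

def simulate(K, alpha, omega, t_end=200.0, dt=0.05, seed=0):
    """Euler-integrate dtheta_i/dt = omega_i + K*R*sin(Phi - theta_i - alpha)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, omega.size)
    n_steps = int(t_end / dt)
    R_trace = []
    for step in range(n_steps):
        Z = order_parameter(theta)
        R, Phi = np.abs(Z), np.angle(Z)
        theta += dt * (omega + K * R * np.sin(Phi - theta - alpha))
        if step > n_steps // 2:          # discard transient
            R_trace.append(R)
    return np.mean(R_trace)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    omega = rng.normal(0.0, 1.0, 2000)   # Gaussian natural frequencies (illustrative)
    alpha = 1.0                          # phase lag (Sakaguchi parameter), illustrative
    for K in np.linspace(0.5, 4.0, 8):
        print(f"K = {K:.2f}  ->  <R> = {simulate(K, alpha, omega):.3f}")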
|
We investigate the synchronization transitions in systems of coupled Kuramoto-Sakaguchi phase oscillators. We show that in globally coupled systems with certain unimodal frequency distributions, there can appear unusual types of synchrony transitions, where synchrony can decay with increasing coupling, incoherence can regain stability for increasing coupling, or multistability between partially synchronized states and/or the incoherent state can appear. In one-dimensional arrays of oscillators with non-local coupling one can observe at the onset of synchrony the emergence of collective macroscopic chaos as an intermediate stage between complete incoherence and stable partially coherent plane waves. In both cases, the phase lag in the interaction function plays an important role for the observed phenomena.
|
10.5446/57511 (DOI)
|
Okay, yeah, thank you very much for the introduction. As I had almost no idea who would be in the audience I tried to keep this as kind of a general overview talk. I may, if we jump into details at some point I may decide to dump some of the examples, but we'll see just let me know once I run into critical time issues. So, talking about semiconductor nanostructures, we have a whole field of different nanostructures that are of technological interest that can be thin films that are used for efficient light emission and detection, ranging from infrared to ultraviolet spectrum. People have been using quantum dots as artificial atoms due to their well defined energy spectra, potentially suited for single photon emission with application in light emitters quantum computing or cryptography. And people are also interested in nano wires as you can see here on this figure, due to their free site facets that facilitate elastic relaxation have a large surface to volume ratio and show some very unique material properties that you would not observe in a planar or a bulk system. And that said there is a number of very different systems of interest and correspondingly, we need simulation tools that are both reliable and efficient. They come with a high accuracy should be of course, once we are doing numerics computationally efficient robust, easy to use always something of importance. And if I can close that my own window down here I have no idea about that because it's part of the screen missing now. And they should come with a manageable number of parameters and output data to make sure we don't run into some stuff that is a mirror parameter fitting just to to experiment. So, basically we can distinguish these models into two fundamental approaches one is a stick modeling like empirical tight finding the empirical pseudo potential method or density functional theory. And the other is the same as the other approaches or atomic approaches come with single atom precision they are extremely highly flexible of course for any kind of mono structure that you may be interested in. But on the other hand, it can be very difficult to set up these simulations, and it is at hand that these these models become computationally more and more demanding once we go for larger structures. And so, these are the most continuous models, and the more generalized version of K dot P models that I will be talking about in a minute. These models neglect the underlying atomistic system they are also highly flexible. And the clear plus is that the computational effort is independent of the number of atoms in the system simply because there are no atoms it's basically the bulk materials that we have and you can already estimate that we are talking about a few tens of millions of atoms in such a system and that's clearly. It's not totally beyond the capabilities of atomistic approaches but it's something that you don't want to do on everyday basis. So, the basic idea of effective mass and K dot P models is what you want to know is the energy and the wave function of electrons and holds in some crystal or a structure. We basically start with the Schrodinger equation here and we take we as a periodic potential with the periodicity of the crystal letters. And then we can basically split the wave function using blocks, theorem into some highly oscillating plane wave and periodic function. And that basically allows us to rephrase the a perturbative Hamiltonian, which is this. 
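For reference, the standard textbook form of this Bloch ansatz and the resulting k.p equation for the lattice-periodic part is reproduced below (spin-orbit coupling omitted for brevity; the Hamiltonian on the slide may contain further terms):

\[
  \psi_{n\mathbf{k}}(\mathbf{r}) = e^{\,i\mathbf{k}\cdot\mathbf{r}}\,u_{n\mathbf{k}}(\mathbf{r}),
  \qquad
  \left[\frac{\mathbf{p}^{2}}{2m_{0}} + V(\mathbf{r})
        + \frac{\hbar}{m_{0}}\,\mathbf{k}\cdot\mathbf{p}
        + \frac{\hbar^{2}k^{2}}{2m_{0}}\right]u_{n\mathbf{k}}
  = E_{n}(\mathbf{k})\,u_{n\mathbf{k}} ,
\]

where the k.p and free-electron terms are the ones treated perturbatively around the Gamma point.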
And then we will just basically the perturbation theory to this method that we are using here. And then there's a second perturbation coming in namely the fact that we limit ourselves to a number of bands that are of interest, for instance here for example of a guy awesome I bought bands that one could take only the bottom conduction bands and the top three way lens bands because they're basically close to each other, which is the idea of the eight band K dot P models so basically we take these four bands with this been up and spin down component, which gives us eight bands in total. And then the second approach is that we say okay, we have a perturbation in case so a perturbation means this approach is valid only for the vicinity around the gamma point or technically speaking also around every other high symmetry point that is chosen for the development model but typically limit themselves to gamma point. And let me note that this stage that is well possible to extend the region where the model as well as to larger and larger regions of the region zone by taking more bands into account. This is what people do. But basically what you will see mostly is this so called eight band K dot the model, which is limited to the bottom conduction band and the top three way lens bands. So it's such a typical K dot P Hamiltonian looks like. Basically, we have the interaction here from the terms from.학 omega maten, the terms for the action that has like terms and then that is the new terms for Wayland's work relates treaties, or terms copy. So basically our Hamiltonian that that one can use for the description of such a band structure and if we want to do this now not for bulk materials but for heterostructures we want to know where for instance the electron in a heterostructure sitting at what energy it has a hole. So this can be combined with an envelope function approach so we make the effective masses potentials, the coupling parameters which we hear points. You can make one of the station. And what else can enter our contributions like strain that can be easily computed via continuum elasticity theory but also why as an optimistic model if you wish it can always be easily integrated. And also appears up here so electric potentials there may be external potentials or maybe charges that jump in and out and what is at some point, and also exotonic effects can be contained via self consistency. So this eight band K dot P model to date still represents the backbone of modern device, similar to the device modeling. It is computationally inexpensive as a small number of material parameters, most of which are well known. In most cases, not at all. This surprisingly reliable even for small structures despite being limited to small K where you use with correspond to larger structures. So it's straightforward combined with linear elasticity theory to incorporate strain and build an electrostatic potentials. And it's also quite straightforward to incorporate turn or ordinary or alternately alloys or alterations of crystal phases by parameter interpolation. And there are these disadvantages so it neglects the optimistic setup of the head of structure, certain symmetries can be simplified in an eight band K dot P model that would be resolved correctly in for instance type binding model. The limitation to one conduction and three rail and spends can be critical at some point as we will see. 
And it's inaccurate for K where you send the auto brilliant zone for instance, if we would go for a silicon germanium eight band K dot P as it is might not be the best choice because we basically have an indirect and get material where other well is become important. There are a number of software packages existing, for instance, a type of cat and next nano, which can be used. And this formalism has been applied to a really really wide field of very different materials and heterostructures quantum dots, nano wires nano plate lets you see these publications date back to the late 90s so it's something that has been in use for more than two decades now and still is news. What we are using is a plane wave based implementation. So we have linear electricity and multi band K dot P formalism implemented within the plane wave framework of the existing swing software package. And the neat thing about our approach is that we have basically a wire fast Fourier transformation we have all the real space properties of a heterostructure contained so we don't have to do any assumptions on symmetry and so on. And we can switch from reciprocate to real space and back that allows us to make use of very simple and efficient gradient representations in reciprocate space. And we could make use of highly optimized minimization routines that were already available in the original DFT package swings and the kinetic energy is a very very direct convergence criteria and so we can basically look at energies and our particles at the site whether they are converged or not. And one very big disadvantage we are using a plane wave code and that said we have periodic boundary conditions intrinsic. That's no problem for electronic states but it can be quite severe for this long range of directions like frame and built and potentials and correspondingly at some stage we might need larger super cells for our computations. And what what makes our code I think very unique is that we have generalized it to a wide extent. So we are not limited to any materials or compositions or shapes of the heterostructures or whatsoever, because all this is basically saved in a three dimensional map. We can apply also in the linear elasticity module arbitrary elastic tensors and polarizations which come can come in for instance for certain crystal orientations that are non standard. And that allow you to apply a symmetry suited to the system and not to set up a cell because you somehow have to describe the symmetry in a huge unit cell. And in particular we have a generalized Hamiltonian which allows us to control the level of sophistication so we can basically we can do a single band effective mass models we can do six and eight band K dot D we can go for more bands we have done for his own models for Silicon Germanium for instance, or this can be done. The Hamiltonian is basically something which the user can bring on his own. And we have an open source and we have no investor sometime to make it installable per package manager on most common Linux systems. It seems to work pretty well so far there are from from the users we have there's not too much complaints yet and if there are then we are happy to take care of this. And let me know that we have recently given a software tutorial on the usage of this code in March, April this year. So you can basically imagine the workflow so we start with setting up some three dimensional material map that contains basically the shape of the system and other compositions in the system. 
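Returning for a moment to the plane-wave machinery described above: the sketch below shows, in one dimension and for a single-band effective-mass toy model, how the kinetic term can be applied in reciprocal space and the confinement potential in real space, with an imaginary-time relaxation standing in for the optimized minimization routines mentioned in the talk. Periodic boundary conditions are implicit in the plane-wave basis. This is not the SPHInX implementation, and all numbers are illustrative.

# 1D single-band effective-mass ground state via FFT-based split-operator imaginary time.
import numpy as np

hbar = 0.658212   # eV*fs
m0   = 5.68563    # eV*fs^2/nm^2 (free-electron mass in these units)

L, n = 40.0, 512                                   # box length (nm), grid points
x = np.linspace(0.0, L, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)       # angular wavenumbers
meff = 0.067 * m0                                  # GaAs-like effective mass (assumed)
V = np.where(np.abs(x - L / 2) < 5.0, 0.0, 0.3)    # 10 nm well, 0.3 eV barriers (assumed)
T_k = hbar**2 * k**2 / (2.0 * meff)                # kinetic energy, diagonal in k-space

psi = np.exp(-((x - L / 2) ** 2) / 20.0)           # initial guess
dtau = 0.1                                         # imaginary-time step (fs)
for _ in range(4000):
    psi = np.fft.ifft(np.exp(-T_k * dtau / hbar) * np.fft.fft(psi))   # kinetic step in k-space
    psi = np.exp(-V * dtau / hbar) * psi                              # potential step in real space
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n))                # renormalize

H_psi = np.fft.ifft(T_k * np.fft.fft(psi)) + V * psi
E0 = np.real(np.sum(np.conj(psi) * H_psi) * (L / n))
print(f"ground-state energy ~ {E0:.4f} eV")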
And also we need material parameters that contain the primary parameters. So if you're not concerned then we just compute bye a building on the inside in theory, the strain and to the strain. And search on the possess goal what cook through the like best potential. The set of systems the moreReally the set consistency moves of air space in some, some between continuing the business even the business is not contained in the core software. It's one of the most but it's not really on a permanent decision I have to say. And once we know it's mainly going to this moving back area, we're going to take people around one, two, around this next class, then offsets, probably around this. And we had also 50 sprouting, there's two bands and then you can see that the property can be used in set consistency or in the next one properties or some other direction you can also use the larger enemies if you then the optical spectrum of the system. So the results of our simulations are then strain distributions developed in potentials, single particle energies and wave functions, charge densities, possibly excitonic contributions and hopefully in the end some optical spectrum. There's a list of example studies that we have done with our code. I will limit myself to a few of them within the next minutes to just give you a flavor of what can be done with our software and what we are also very open for for anyone who's interested for some other information. So first example would be this quaternary alloy indium arsenide antimony phosphate, graded composition quantum dots grown by a colleague from from Yerevan using liquid trace ecotexy so that's a very, very ancient method but surprisingly they see some in fact 3D hetero structures applications would be in particular in the infrared for energy harvesting and gas sensing and so on. And what is very interesting in this LPE grown systems that they exhibit such a composition profile so basically you will see the amount of content decreasing and the amount of D increasing over such a what would be looking like a cone. And that's something very interesting which does typically not occur in the typical MBE or MOCVD systems. We have some information also thanks to collaboration with Institute for Crystal Girls and the Institute Berlin. We have some information on the height and diameter distribution of our systems. And our question was now can we provide a reliable model for such nano structures for theory guided design process so can we tell our experimentalists to make a system smaller, larger, do something to them, to get them into some certain range that is more of interest for you. And basically what we started up with was a single particle calculation for this relevant range of heights and diameters that they have seen. So we have basically set up such a conical shaped system. Used an 8-band K.p model for simple and semiconductor. It's pretty much the Hamiltonian that you have seen before with some simpler symmetries. In fact, so the inside here shows you basically how the system looks like. The composition range in the US, the whole state, it's a type 2 system so we only expect the whole state to be localized in this system, which would be something suited for some detectors for instance. So there are no exotonic properties here because the electron is kind of far away. We also applied some gap corrections to our transition energies that we have computed here for different heights of the system and for different diameters of these environments. 
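A sketch of how the three-dimensional material map mentioned in the workflow could be assembled for such a cone-shaped, composition-graded dot is given below: each voxel stores the local alloy fractions, which the elasticity and k.p solvers would then translate into position-dependent parameters. The grid, the cone geometry, and the linear grading are illustrative assumptions, not the experimental profiles.

import numpy as np

nx = ny = nz = 64
dx = 0.5                                            # nm per voxel (assumed)
x = (np.arange(nx) - nx / 2) * dx
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

base_radius, height, z0 = 8.0, 6.0, -3.0            # cone geometry in nm (assumed)
inside = (Z >= z0) & (Z <= z0 + height) & (
    np.sqrt(X**2 + Y**2) <= base_radius * (1.0 - (Z - z0) / height))

# Quaternary alloy fractions, linearly graded from base to apex (hypothetical profile).
frac = np.clip((Z - z0) / height, 0.0, 1.0)
sb_content = np.where(inside, 0.20 - 0.15 * frac, 0.0)   # Sb fraction
p_content  = np.where(inside, 0.05 + 0.15 * frac, 0.0)   # P fraction

print("dot voxels:", int(inside.sum()))
print("Sb fraction range inside dot:",
      float(sb_content[inside].min()), "->", float(sb_content[inside].max()))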
This is basically what experiments have seen. And then we have simulated an ensemble spectrum by basically taking the diameter distribution from experiment and plugging this onto our energies for the correspondingly computed systems. And we have seen now this is really kind of disappointing that this picture is here because the emission peak that we have simulated was at 3.829 microns, whereas the one on experiment was at 3.83 so I would consider that quite good, surprisingly good agreement at that point. Really you have to believe it's really 3.829. So it's really close. So next example would be on the influence of random dopant fluctuations in three nitride nanowires. So the advantages of nitride nanowires is that in particular of indium-gallium nitrides is that they would potentially allow to access the whole visible spectrum by only adjusting the indium content of these insertions here but theoretically and another advantage is that they have a very large surface volume ratio which is perfect for light emission. The problem is, as I said, theoretically because what happens if you plug in more and more indium is that you increase the strain, you increase built-in fields and basically it's quite difficult to get light emission from indium-gallium nitride heterostructures and gallium nitride with high indium contents. The basic charge confining mechanism in such a system, of course, the bulk bend offsets, polarization potential that occur and there's a very, very large potential barrier at the site facets of such a wire. So order of a few electron volts, the work function and the activation energy plus band cap for the pulse rate. There is significant elastic relaxation that's very unique in these wires because they can relax their space for relaxation here. And what also happens is that you are always close to some site fesset to some surface and that means there are surface potentials emerging from ionized dopants that are of interest for us. What else happens is that you observe sometimes alterations of the crystal phase and other effects for very thin wires that come into account might be dielectric confinement because you have a very, very steep change of the dielectric of the material and the outside which would typically be a vacuum. So what we had a look at and we'll have a look at now is the effect of surface potentials that emerge from ionized dopants. So what our colleagues in the experiment have observed is that with higher indium content in the active layer of such a nano wire you increase the photo luminescence intensity. That's pretty much the opposite of what would happen in a planar system where more indium content means higher defect density, means stronger potentials that pull electrons and holds away from each other. But what they have observed is higher indium content and also larger thickness of this disc and they see more intensity but something quite unusual. We could basically answer this in a first approach looking at surface potentials that we assume okay, there is a number of dopants in the system, ionized dopants we have assumed first that they are homogeneously distributed and that just rise to a surface potential. The surface potential is attractive for the whole states, for the blue ones here at the side. 
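Coming back briefly to the ensemble spectrum mentioned at the start of this passage: the sketch below folds single-dot transition energies, computed for a range of diameters, with a size distribution to produce an ensemble emission line. The energy-versus-diameter relation, the normal diameter distribution, the per-dot linewidth and all numbers are placeholders, not the data of the study.

import numpy as np

def transition_energy(diameter_nm):
    """Placeholder size dependence of the transition energy (eV) - illustrative only."""
    return 0.30 + 45.0 / diameter_nm**2

rng = np.random.default_rng(0)
diameters = rng.normal(35.0, 5.0, 5000)            # assumed diameter distribution (nm)
diameters = diameters[diameters > 5.0]
energies = transition_energy(diameters)

# Broaden each dot's line with a Gaussian and accumulate the ensemble spectrum.
E_axis = np.linspace(0.30, 0.42, 600)
sigma = 0.002                                       # per-dot linewidth (eV), assumed
spectrum = np.exp(-(E_axis[:, None] - energies[None, :])**2 /
                  (2.0 * sigma**2)).sum(axis=1)
spectrum /= spectrum.max()

peak_E = E_axis[np.argmax(spectrum)]
print(f"ensemble peak at {peak_E:.4f} eV (~{1.2398 / peak_E:.2f} micron)")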
So what happens is we basically get an interplay between the surface potential and the polarization potential which would increase the indium content and puts the whole closer and closer back to the center of the axis and therefore closer and closer into the electron keeping in mind that these things are to scale so the diameter is about 80 nano and this is about 1 to 10 nano meters that we have taken into account here. And this interplay between polarization and surface potential has in fact been useful to explain the reduction of the photo luminescence intensity that my colleagues have observed here for smaller indium contents or layer thicknesses. Now the problem is if you assume okay, we have nano wire of 80 nano meters, a dope intensity of maybe 10 to the 17 cubic centimeters and take a segment of 20 nano meters length. We have an average of 8 dopants in the system and that's quite far away from the assumption of a homogeneous doping charge density here. So what one would have to do instead is to consider individual randomly distributed donors in nano wire. Problem is doping is an atomistic feature as we are using a continuum approach here and I told you that there are no atomistic effects in a continuum approach. But on the other hand, so we decided we stick to our continuum model because we have a system that is not available. So we have a system of under consideration has about 7 million atoms. The typical donors that we would observe the unintentional doping that's always basically there are silicon and oxygen and both of them represent shallow donors. shallow donor means that they are well described via simply their coolant potential and what we have here is that the donor means that they basically provide additional potential to the wires. And then we have basically started rolling dice so this is what it looks like. The donor here is very much very much very much of these potential keys here and you see the electron and all are kind of another regime of these donors and then we have started to do some statistics and discuss the ensemble. So we have picked a handful of typical model configurations with some fixed indium contents or some fixed thicknesses of this insertion we have always had the diameter of the wire of 8 nanomers and around too many parameters at the same time. And what we have observed is that the variation of the emission wavelength is surprisingly more or less unaffected by the choice of indium content and thickness of this layer. What's also a side issue is that the energies that we observe are always smaller than what we assume are homogeneous. It's not too surprising that both these distributed donors give us deeper potentials and the transition energy buildup and of course they are also smaller than the dashed line here what we assume that will be in the world. And we have in fact been able to start the line of the randomly distributed donors of approximately 150 milli-electron volts. This is very very bad news for all those people thinking that you could use these systems for producing reproducible light emission because taking exactly the same indium content in yourself taking exactly the same wire diameter with the same layer thickness you still have 150 milli-electron volts of line width due to the fact that you have no idea where your dopant is going. So what else we had a look at the ensemble charge densities here. 
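Before turning to those ensemble charge densities, here is a sketch of the "rolling dice" step just described: a handful of ionized donors is placed at random positions in a nanowire segment and their Coulomb potentials are accumulated on a grid. Wire dimensions follow the numbers quoted in the talk, while the dielectric constant, regularization and grid are illustrative; the actual study of course also solves the k.p problem in this potential.

import numpy as np

e2_over_4pieps0 = 1.44                 # eV*nm
eps_r = 10.0                           # assumed static dielectric constant
radius, length = 40.0, 20.0            # nm: ~80 nm diameter wire, 20 nm segment
n_donors = 8                           # average donor number quoted in the talk

rng = np.random.default_rng(3)
# Sample donor positions uniformly inside the cylindrical segment.
r = radius * np.sqrt(rng.uniform(0, 1, n_donors))
phi = rng.uniform(0, 2 * np.pi, n_donors)
donors = np.stack([r * np.cos(phi), r * np.sin(phi),
                   rng.uniform(0, length, n_donors)], axis=1)

# Evaluate the summed donor potential on a coarse grid (regularized near each donor).
x = np.linspace(-radius, radius, 81)
z = np.linspace(0, length, 21)
X, Y, Z = np.meshgrid(x, x, z, indexing="ij")
V = np.zeros_like(X)
for d in donors:
    dist = np.sqrt((X - d[0])**2 + (Y - d[1])**2 + (Z - d[2])**2 + 0.25)
    V += -e2_over_4pieps0 / (eps_r * dist)          # attractive for electrons

print(f"potential range: {V.min():.3f} eV to {V.max():.3f} eV")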
So the red is the electron again, the blue is the hole, the average hole looks in very good agreement with the assumption of a homogeneous charge but this is not as it looks like so the typical hole you will find will be on these corners of course sitting or six of them but the typical hole is sitting in one of these corners distributed like in the case of the homogeneous doping and in particular the electron localization is governed strongly by the dopant and you see there are very very strong variations we have done here 500 simulations and you see it's not even near converging so that's really the electron is all over the place determined by the electron. So third example that I will talk about is polytypism. Polytypism is something that occurs regularly in wires so you see here this is a gallium arsenide wire you see word side segments in a mostly zinc blender phase wire and you see also zinc and wire phases in mostly word side structures and that basically allows you to form crystal phase nanostructures crystal phase nanostructures so you have a nanostructure without changing the chemical composition of the material these systems are almost three of strength so they are very very well let this match because again it's the same chemical material it's just in a slightly different crystal phase there are of course no alloy fluctuations and what is particularly interesting is that they form atomically flat interfaces that's something highly interesting and the question is how can we model these systems and there are some problems coming with such crystal phase heterostructures so we have here changed to gallium arsenide which is the more interesting material you see on the left side the band structure of zinc blend gallium arsenide and on the right side you see the band structure of wood, like gallium arsenide and there is something nasty happening in the years of gamma-A and additional conduction band coming into play which is energy extremely close to the conduction band which is the equivalent which is typically contained in an 8 band K.p model and to make the situation even worse we have already seen similar plots in the last talk where such a process for a valence ends in small strains and change the character of our production and these strains even though they may not directly appear from this crystal phase lattice mismatch they can appear because typically the wires that you look at are distributed on some surface and therefore they will see interaction by the surface and they will see strain by the surface and therefore plus minus 1% strain can easily happen and so far the question which is the lowest conduction band is not answered and the question how this goes into play is also not answered and the main problem is that the common models our backbone of device model in the 8 band K.p model is limited to only one conduction band so this is simply not contained in the 8 band K.p model the answer to this is quite simple so we take an 8 band model and just add 10 more bands, that's something we can do in our code because basically the Hamiltonian is some input that the user can change and there is no problem between the bands they are technically speaking one that is not a code for them but it's very very small so this is between the 8 band and the top way in this band here it will play a role for optical recombination if there is no other part available which can happen and this band is not existing in the Zincland phrasal and therefore basically is a very very high potential for 
the inside the wood side phase and the one that has a Zincland has a Zincland wood side they have very different effective masses and the question is what happens if we go to superlattices first or if we move on to wires which I won't present in this talk today and our first step was to take a superlattice of about 14 nanometers length because it's interesting of a Zincland and a wood side segment and the one information that is always through the hole will be combined in wood sides and what is going on behind me it has a very heavy effective mass so it's typically well localized however for the electron what can happen is that it's either confined by this gamma-6 band in the Zincland phase or if I have a very very thin Zincland and a very thick wood side segment it will jump to the wood side phase where it's well confined due to the much heavier effective mass it has in the wood side phase and if this happens or not basically depends on the thickness of the segments and also on the strain state and due to the fact that there is a small but nonzero matrix element between the gamma-8 conduction band and the top wavelength band if there are no other paths available this will be the recombination that will happen at some time with some larger lifetimes and so on but something that can in fact be seen and we found that this can in fact potentially explain very very contradicting measurements that have been observed recently on the effective masses of electrons in the wood side phase of Gaia-Marsonite so it's something which is not really good not really simple to verify correctly but it's some potential explanation okay do I have time for this point or shall I okay okay so as already mentioned in the beginning for my fourth example now which is also part of our portfolio what we can do taking a K.p model the more bands you take the more accurate your band structure gets even to the fact that you can infect your full zone models for silicon germanium and so on so that basically shows you how this K.p model is even effective masses before the A.band is going to use your parabola and of course this case where I have time to spend on us the 6th band model is a in the action of 3 bands the 8th band model will get that description more bands and more infections 14 band model here for instance and you see how this structure gets more and more detailed and we can see that we can see that the band structure is not as common but sometimes not sufficient as we have seen for instance for polytypism for indirect band gap materials silicon germanium also for something which is called bike inversion and isotrotropies so basically taking into account that you have in fact a different symmetry that is not contained in this 8 band model sometimes additional high symmetry points become important for the 8 band model the question is quite simple answer there is something which I would consider the bible as a whole government paper has the emulating K.p models has been became across this work the state of today has 5477 citations brings over a thousand references so they have really driven an enormous effort in digging all these K.p parameters for 3.5 materials for the whole bunch of 3.5 materials for binary ternary and ordinary alloys selected and justified why they have taken a certain selection for some parameters so it's really been an enormous effort but it is limited to 8 bands and sometimes well you see that may not be enough and also there are other materials for instance these new crystal faces for some 
materials that are not contained here the 2.6 materials which are not there and sometimes more accurate parameters are simply available and it's 20 years old paper so we have seen this here for the wood side phase you can basically treat what we have done here take one additional conduction band into account you can take the 8 band model with some hybrid bands which simply gives you the wrong band coupling because it's not contained but if you want to have a pattern model of this additional conduction band you need more bands above that are not directly of interest but give you the more accurate structure of this new band in there and what our next idea was to take a 16 band model 3-ray lens bands one conduction band as we had before one more conduction band and 3 more like p-like conduction bands with their spin up and spin down components for geomarsenite in the wood side phase there's exactly nothing existing so far and that said we need some fitting procedure to fit all the two existing band structures that you can compute from DFT the possible fitting schemes available so you have a comparatively large parameter space in the wood side phase that would be 25 parameters that you cannot directly read from the band structure problem is with band structures and fitting band structures you definitely will end up in finding local minima you use some gradient approach you've seen all these valleys and come up with some first guess you will definitely have crossing somewhere that is not resolved anymore working with a grid wise search interval in 25 dimensions will take a while so certainly not what you would want to do and what our idea was then to use some crossing with the color approach so basically this is now the example for two dimensional parameter space you could either randomly distribute points in the parameter space then fit this whole range to the parameters that you are interested in and simply find the k.p model for the whole band structure and see what fits best and the problem is in random sampling you have some areas where you have clustering of parameters some of us are totally empty so you have a high chance of missing the best set of parameters and our alternative was some low discrepancy points sets here the soboil sequence which distributes in an arbitrary number of dimensions these parameter sets and that was basically what we started up with so basically we set up such a soboil sequence of for instance a number of 3000 let's say we take 3000 different points sets with 12 dimensions or with the 12 we mapped the soboil sequence 12 dimensional then to a predefined parameter range which means we need at least to have some idea where our parameters are sitting that's a shortcoming of the current method then we basically simply find out okay what Hamiltonian goes throughout the whole area is that we have throughout the point of density both and simply take our initial band structure and the eigenvalues from this parameter set and the corresponding k.p Hamiltonian and find the minimum value for it we can then start applying some priorities to certain k points or to certain bands that are of interest we have benchmarked this for a 14 band model for Bayon Arsonite in the Zinc band phase where we knew the answer and have just started with a more up to date HSE 6 band structure and the total timing for a good fit that came about with better results than they had was a bit more than two minutes single CPU performance we have added some more optimization on this so basically you 
start finding the best parameter set in some parameter space and if you see it's at the very site of the parameter space then you move the parameter space a bit then once you're converged go for some interval search to make the parameter space smaller there's still room for improvement so that's a very very new feature here's what it looks like for Woodside Bayon Arsonite so red is the density functional theory blue where the initial size is the size of the model and black is our fit black and gray where black was basically the interest for us it cannot be perfect because again it's a model so it would be surprising but it's looking already quite good. Okay with that let me summarize so I have presented you a widely generalized K.p model that we have been using it's generalized in terms of material composition, shapes, dimensions and in particular the Hamiltonian is in the sense that we can control the level of sophistication versus the computational costs that are of course coming with a more detailed model we are making use of a plane wave implementation within the existing Sphinx library and we also have continuum elasticity theory module to account for strain and build and potentials that are currently combined with our simulations the example studies I have presented you were on ternary indium arsenide and demonite phosphate graded composition quantum dots I have talked about random dopant fluctuations in 3.5 nanowire heterostructures polytypism in gallium arsenide and parameter fitting models for anything beyond the classical 8.2 scheme and short outlook so what we will be interested in now to apply our model to maybe silicon germanium heterostructures in collaboration with the Institute for crystal growth Berlin we are looking at exotonic effects currently in 3.5 nanowires with random alloy fluctuations which can also be surprisingly well described in a K.p model and also I have shown you this very novel parameter fitting scheme there is clearly room for improvement and we are working together with Fumvoit University Berlin group of Andrej Avada to see if we can maybe improve this fitting and with this thank you very much for your attention for the fitting algorithm what kind do you use it's only I mean if you have I mean if I understand it correctly you have your DFT bands or energy bands and then you have your solutions from your from a Hamiltonian for a different parameter set and then you can put some weight on the parameters and then you minimize this kind of that you have yes that's the cost functional just okay and the fitting is only we have this so both sequence we have like I don't know 3000 parameter sets and we pick the one with the best fit so there is no gradient search on that nowhere so if I don't have enough points in my so both sequence I won't find a good solution it's really there there is no gradients anywhere it's just I take a big box of parameters and take the best one okay so it's not that you change the parameters and then compare them again to the DFT no no what what we do have in mind and what is currently in development with Humbart University is to on top of that to plug in some gradient search so if we are already close to the minimum it would be good to move on with the gradient search and what may also help is we have a number of parameter sets in the end that give us comparatively good fit so it kind of sounds reasonable to start with some genetic algorithm on top of that to take a handful of parameters from this one and the handful of parameters 
from that then maybe you randomize a bit mix them and see if this gets better yeah but at the moment as I said there's room for improvement yeah can you kind of improve it in the sense that I mean when you start from a six band or the eight band you already know some of the let's say material parameters can you let's say tighten them a little bit to say that variations on them for instance the lateral parameters should be too large compared to the parameters this was in fact our hope so that's basically how we started with all that when we developed this 16 band model we said okay it's basically two eight band models and you can fit you can fit the parameters for one eight band model in Mathematica the solve the end solve function we'll do that in a second the same for the upper eight bands and then you switch on the coupling and that's it Mathematica won't do anything on that so starting with what we know is maybe helpful but don't expect that the looking at parameters are only modified to a small extent that's what I would expect but this is not the case not our priority so this can really this coupling can totally change your parameters so knowing the parameters from the lower bands does not necessarily mean that you have a good guess for the coupling I mean if you're talking about the typical three lateral parameters for the for the Zieg-Bennelette right it's a material property yes yes how can it be but then you have this classical loading up parameters and then you have the modified loading up parameters which come in with this modification with cane coupling and that's only for the conduction band so of course your loading up parameters may not be modified at all but the modified ones definitely will be and the question is if you want to keep this in having the unmodified parameters and having the modification that then affects the effect of masses somewhere else between all these bands or if you modify the parameters themselves and this is really a critical point so no matter if you have this in writing down new loading up parameters or new modified loading up parameters the problem remains that the change can be severe okay so in order for let's say transparency to better compare between 6, 8, 14 band models I would rather let's say put the changes into the modified parameters that's as much of the commonly known parameters more or less unchanged but this can come with also known cane parameters these optical matrix parameters that they deviate completely from what you would expect from the experiment so it really is that's for sure okay we are all straced so another question yeah basically maybe you try the effect of the violence band yes so is your software in probably targeting violence bands or not or just provide an answer what you mean so basically we have the wayland span landscape that goes in somewhere it's computed basically from white properties from strain distributions which can be three-dimensional from external potentials from polarization potentials and then you are holding back the table and the third-year time of consuming at your own level at every part of the space you can get really the coupling between how to continue when they drop there is another servered distribution here well basically that's this looping Hamiltonian basically this is the six-band model which comes with all these diagonal elements and what you can see is that the wave function of the hole will simply change its character and this is not happening at all so it will come in with more and 
more light-hole and heavy-hole contribution, so we can see from the simulation how the character of the states changes. They do cross at some point — or rather, there is no real crossing as such, but you see that the light hole takes over at one position and the heavy hole takes over at another, so you can follow the path of a single band in the same way. Yes. Okay, okay.
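Returning to the low-discrepancy parameter search described earlier in the talk: the sketch below samples candidate parameter sets from a Sobol sequence (via scipy's qmc module, assumed available), evaluates each Hamiltonian on the target k-points, and keeps the set with the smallest deviation from the reference bands. A toy two-band Hamiltonian with synthetic "reference" data stands in for the 16-band model and the DFT band structure.

import numpy as np
from scipy.stats import qmc

def two_band(k, Eg, P):
    """Toy 2x2 k.p-like Hamiltonian; returns its two eigenvalues at wavevector k."""
    H = np.array([[Eg + k**2, P * k],
                  [P * k,     -k**2]])
    return np.linalg.eigvalsh(H)

k_pts = np.linspace(0.0, 0.5, 21)
reference = np.array([two_band(k, 1.5, 0.8) for k in k_pts])   # stand-in for DFT bands

sampler = qmc.Sobol(d=2, scramble=True, seed=0)
lower, upper = [0.5, 0.1], [3.0, 2.0]                # assumed parameter box for (Eg, P)
candidates = qmc.scale(sampler.random(4096), lower, upper)

def cost(params):
    bands = np.array([two_band(k, *params) for k in k_pts])
    return np.sum((bands - reference)**2)            # could weight selected bands/k-points

best = min(candidates, key=cost)
print("best (Eg, P) in the sampled box:", np.round(best, 3))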
|
Semiconductor heterostructures represent key ingredients for application in novel light emitters and detectors, energy harvesting, or quantum technology. Numerical simulations of the optoelectronic properties of heterostructures such as thin films, nanowires, and quantum dots facilitate both a detailed and systematic understanding of observations from experiment as well as theory-guided design of nanostructures such that they fulfill the requirements of a specific application. We will provide an overview of our modelling capabilities of the electronic properties of semiconductor heterostructures using continuum-based multiband k.p models, implemented within the plane-wave framework of the SPHInX library [1,2]. We illustrate the applicability of our approach by showcasing some recent example studies [3]. 1: sxrepo.mpie.de 2: S. Boeck et al., Computer Phys. Commun. 182, 543 (2011) 3: O. Marquardt, Comp. Mat. Sci. 194, 110318 (2021)
|
10.5446/57512 (DOI)
|
You Yes, thank you. Yeah, so it was two parts. We heard about the properties of the collect properties of materials or better to say are the numerical calculations of properties. And there was already some questions okay how they are grown. Okay, so now I want to speak how they are grown and also I want to speak in particular about modeling, and this is kinetic Monte Carlo simulation for understanding the epitaxial grows. And the outline is okay, some words and epitaxial grows in general, when the main topic, the KMC I also want to strongly describe the basics of KMC. And I will show two examples, which were calculated as the epitaxy of gallium oxide homo epitaxy, and the other is many also homo epitaxies or no strain. Except if you use aluminum gallium light right but this is not really was not really the topic of the strain and this computational in this research. So, and then I am some up there epitaxy grows in principle you have experimentally three different methods to do, or a deposition of your material on a surface, that is, either you use molecular beam epitaxy. Basically you have a source of your atomic ingredients. And so you put your atoms, while your atoms going to the surface. And you need there typically a low pressure. And because the atoms have to travel, or it's to the surface. It's a quite good method for fundamental investigations. If you go to industrial production, it's typically not used because it's not so scalable. You use this metal organic vapor face epitaxy so you provide the metals as gallium or aluminum by organic compared and a solution. And then you transported, we are gas, typically argon into the system, which is typically quite good for scalability. Nevertheless, in detail, you often you don't understand the process but if it's work, then it's fine for the industry. I'm making another method that is a pulse laser deposition, where you have already this kind of, or let's say more complex materialist, or let's be aluminum nitride you already have aluminum nitride at the source. And then you make a laser application of this, you heated this one on, and then you have clusters of aluminum nitride arriving at the surface. The problem is you don't know how much the cluster is typically the nice thing is that when you have new material are then you can just use a source of that and hope that something of this is going to the surface and you get a layer of the material you want to mitigate. So I will just stick to the metal organic vapor face epitaxy because this is a method we use for that as we are also in Leibniz Institute are interested going then towards industry process in the end. So that if your flavor of the machine, I think I can have that yeah so this is a machine as this is an academic machine where you see the chamber which is well quite small here in this case. So of course industrial or application it's much larger but you also sees, see a lot of gas wells and the gas system here which is in the heart of this kind of method and you see the sketch how it is working. And then organic precursor so to get a net lane for instance, you have the urban as a target of the gas. And then, while you're doing this is solution here you in the bubble or you transport your metallic compound organic compound towards the surface. And of course, and all the case or another and cases you need either nitrogen or oxygen what I will present as the this is an example of the same oxide so you need an oxygen source. 
There's a pressure in this chamber as much as much higher or could be much higher than the MBE, which is quite good for such systems where you need an access of oxygen. And here in the bottom you see already, okay that might be quite complicated says precursor, and this is even a simple one, but the problem there with the systems of course. And the decision of this is not so well known where it happens, need to the surface or far away or. So that is some point which is the problem with this kind of simulation. In the end we use and they are expected only the gallium is arriving at the surface and the, the oxygen as a molecule for this case. And when you want to understand it in more detail are you should consider then the different processes which is, of course, adoption, then you have the diffusion on the surface. And you have often these option, and the case of gallon oxide you have gallium sub oxide which can these are. And then you want to start with a different effects what could happen that is, for instance, first, if you have a flat surface nucleation and 3D grows, that's typically not what, what you want to do in a different version, because you want to have a nice flat or well defined layers afterwards you want to go to the set gross mode. But even if you have a step grows mode, it might happen that you have in the end step branching which is also something which you want to avoid the all these things on what to like to study and we use an atomistic method for that. So, what does it mean. So, think about that we have a certain surface configuration of atoms on the surface. So, you think about it, new state, one atom is diffusing or one atom is absorbing, though you have a new surface configuration, and then you have a transition from one surface configuration to the other one with a probability which is given by the energy barrier for this process, and your surface temperature. So you don't have only one new configuration but but many many many so we have a lot of from the original initial configuration, there are many possibilities for a new configuration. And so you have a whole bunch of transition rates which are given by the probability times the frequency. So here is in the end in one iteration or time step, you have to pick up randomly one of these processes. What you do is you make a list of all this put potential processes. And then, and here are the rates are normalized rates because our random number is in the end between zero and one. And then you have a certain value of the random number. And then you see, you would pick up this green event, the event, which belongs to this green color. And then you have a very long vector indeed, if you have very many processes. And just to mention, the, when you do this event process, then there is a correlated time, which is given by independent random number. So, and then this is your time step so in every iteration, as in every step, your time might be completely different, or I might be different, depending on the, on the sum of all your events, or all your rates of the events. The problem from the moment point is that came see as a sequential process. That means, I cannot, I cannot paralyze it per se. And if I have a large computational domain, of course I have many transitions, it's just for for for this large computational domain. And then this is large and you already see on the, on the side the time step will become very small from the equation from the top so this is made makes this computational happy. 
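As an illustration of the event-selection and time-step mechanics just described, here is a minimal Python sketch of one rejection-free kinetic Monte Carlo step. The event catalogue, barriers and attempt frequency are toy inputs; in a real simulation the rate list is rebuilt (only for the affected cells) after every executed event, as described in the talk.

import numpy as np

def kmc_step(rates, rng):
    """Pick an event index with probability rate_i / sum(rates) and draw the time increment."""
    rates = np.asarray(rates, dtype=float)
    total = rates.sum()
    cumulative = np.cumsum(rates) / total              # normalized cumulative rates in (0, 1]
    event = int(np.searchsorted(cumulative, rng.uniform()))
    dt = -np.log(rng.uniform()) / total                # independent random number -> time step
    return event, dt

if __name__ == "__main__":
    kB_T = 8.617e-5 * 1073.0                           # eV, e.g. a surface at ~800 C
    nu = 1.0e13                                        # attempt frequency (1/s), assumed
    names = ["diffusion", "desorption", "attachment"]
    barriers = np.array([0.9, 2.1, 0.6])               # toy barriers (eV)
    rates = nu * np.exp(-barriers / kB_T)

    rng = np.random.default_rng(0)
    t, counts = 0.0, np.zeros(len(names), dtype=int)
    for _ in range(100_000):
        event, dt = kmc_step(rates, rng)
        counts[event] += 1
        t += dt
    print("simulated time:", t, "s")
    print(dict(zip(names, counts)))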
Now a little bit more into the practice. We have nodes where the atoms reside; every node has a coordinate which is fixed, so we have a fixed lattice. On the other side we have cells, which are the unit cells of the material we want to compute. Here is the unit cell of aluminum nitride: four atoms, two aluminum and two nitrogen, per unit cell. What we do numerically is that for every cell we store all the events for the atoms in this cell — one of these atoms can move, or there can be adsorption at a site — so we keep a list of events per cell. Then, in the iteration loop: first we rebuild the lists, but only for the cells which have to be updated; where there is no change from the previous iteration, nothing has to be done. Once I have all the events per cell, I create the long list — one vector collected from all the cells. Then I select the event, as I showed. And then I determine the new set of cells which have to be updated. In principle it is very simple: when a particle moves from A to B and you have an interaction sphere, like the gray cells, you have to update all the gray cells with respect to the events that happen, or might happen, in them; the violet boundary marks this update region. So the question remains how we compute the energy barriers, because that is what defines the probability. One possibility is bond counting, or better to say neighbor counting. You look, for instance, at the aluminum atom sitting here on top and check how many neighbors it has within a given shell. In this outer shell we look for the atoms — here nitrogen atoms are sitting, which is the reason to go up to this shell — and you have an input energy for each atom-atom interaction, or better to say for the binding between the two atoms; that is an input. The total bond energy is then just the sum. You could also use something like MD potentials for this — the one who presented the MD is not here — where you compute the energy as a function of the distance. The problem that typically occurs is that the potentials are parameterized for the bulk, but often not for the surface, so you do not get the right energies there. Everything I show was computed by the first approach, not by potentials. Now we have the bond energies, and of course we want the barriers, so I will just show how we do it for aluminum nitride. Suppose a diffusion of this nitrogen atom from this position to that one: it has an initial energy, computed by the method just described, and a final energy.
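As a sketch of the neighbor-counting idea just described — sum pairwise bond energies over all neighbors inside the interaction shell — here is a small Python function. The pair energies, the cutoff radius, and the species labels are placeholders I made up for illustration, not the AlN or Ga2O3 parameters used in the talk; the barrier built from these site energies follows in the next paragraph.

```python
import numpy as np

# hypothetical pair (bond) energies in eV -- not the values from the talk
PAIR_ENERGY = {("Al", "N"): -2.9, ("N", "Al"): -2.9,
               ("Ga", "O"): -2.0, ("O", "Ga"): -2.0}

def site_energy(index, positions, species, cutoff=2.2):
    """Sum of pair energies between atom `index` and all neighbors found
    inside the interaction shell of radius `cutoff` (Angstrom)."""
    e = 0.0
    for j, pos in enumerate(positions):
        if j == index:
            continue
        if np.linalg.norm(positions[index] - pos) < cutoff:
            e += PAIR_ENERGY.get((species[index], species[j]), 0.0)
    return e
```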
From the difference of these two energies plus a diffusion energy we define the barrier: we define a fixed diffusion energy for the situation that the initial and final energy are the same, and if they are not, the energy barrier is computed by the expression shown. The problem is then to get the values for these energies. You can get them from DFT calculations, but typically you cannot get everything from that, because it is a huge number of barriers and the DFT people will tell you it takes a very long time. The other way is to get them from experimental evidence. This is still the main problem for kinetic Monte Carlo. For gallium oxide there were no DFT calculations at all for diffusion on the surface, and we obtained estimates from experiments and from a very simple theory for 1D diffusion, so one gets some idea of what an overall diffusion coefficient might be. This gives you a flavor of gallium oxide as it is in the experiment. The unit cell deposited on the substrate surface is already much more complicated than aluminum nitride — I will not go into the details — with in total 20 atoms in it, twelve oxygen and eight gallium, so quite a complicated situation. We consider here adsorption of gallium and of the oxygen molecule; we have the desorption, as I mentioned, of the suboxide; we have diffusion of the atoms; and we also allow small-cluster diffusion. So this is the growth on a flat surface — the first case mentioned earlier, nucleation and 3D growth — simulated with the KMC. You see what happens: there is nucleation into one cluster, but you also see it is not really 3D growth; an island forms and then the layer is filled up, and then the next one — new nucleation and growth. This corresponds to the experimental evidence, where it was also observed that you do not have complete 3D growth. What also qualitatively corresponds with experiment is the size of the clusters as a function of the surface temperature. More interesting is what I would call the step-flow growth mode, because that is what you want to achieve in the end for growing a nice layer. And in the numerical simulation it really does work: you get a step-flow growth mode, you see one step after the other growing. In the numerics it is very easy to vary the different parameters — for instance the desorption rate, so we can manipulate the desorption rate — and you see that you get a different structure with a low desorption rate or a high desorption rate. The video was for the intermediate case. You see the situation after 10 seconds, and after 20 seconds it looks like this. Here you have this nice phenomenon that one step is faster than the other, and then you have a double step in the end; and you also have here a bunched step. But in between it is growth on the terrace rather than step bunching. The final picture would be the same, but the growth mechanism, as seen from the KMC, is different. In the experiment you can also manipulate the desorption rate by manipulating the pressure in the chamber.
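Tying the last two paragraphs together — site energies from neighbor counting, a barrier from the initial/final energy difference, and a rate from the barrier — one common convention can be sketched as below. Whether the code described in the talk uses exactly this convention (the full energy difference rather than, say, half of it) is an assumption on my part; the same construction applies to the desorption rates discussed above.

```python
import math

def hop_rate(e_initial, e_final, e_diff, kT, nu0=1e13):
    """Rate of a diffusion hop from site energy e_initial to e_final (eV).

    e_diff is the plain diffusion barrier used when both sites are
    energetically equivalent; an uphill move (e_final > e_initial) is
    penalized by the energy difference on top of that.  This is one common
    convention, assumed here only for illustration.
    """
    barrier = e_diff + max(0.0, e_final - e_initial)
    return nu0 * math.exp(-barrier / kT)
```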
As for the pressure: one expects that the desorption rate is higher when the pressure is lower, and vice versa. And you see that, depending on the pressure, they got either this step-bunching structure or, in the intermediate case, really step-flow growth. So the message here is: qualitatively we can get the same result, and with the KMC we can look much deeper into the system than by just analyzing the final result of the experiment. Now let me move to aluminum nitride. It looks similar, with the steps here, but it is different — it is much simpler, as you will see. Just to mention, I always note why these materials are of interest: gallium oxide is for high-power electronics, and aluminum nitride goes in a similar direction and is of high interest. From the point of view of the simulation, for AlN the dynamics are, let us say, a little bit simpler. Another thing I want to mention, which is known for aluminum nitride: if you have a nitrogen atmosphere — you provide nitrogen in excess — the surface reconstructs, and the surface with these additional red nitrogen atoms is the surface with minimum energy, as computed by DFT by several groups. In the KMC we have to manage this in a dynamic way, because these red adsorbed nitrogen atoms can desorb again, while we always have nitrogen impinging on the surface; adsorption and desorption should then cancel out in the mean, so that we have a dynamic balance for the system. And now you see: we do get the two-by-two reconstruction with the right positions, as shown before, and you also have additional atoms adsorbing at the surface at other positions. There are also some calculations shown here on the structure. The first point is that, with the energy parameters we put into the model, we are able to reproduce this structure. If you now go once again to step-flow growth, the system has to figure out how to arrange this two-by-two reconstruction, because the terraces have a finite width. These are very early time steps; on the right side you see the growth front four monolayers later, and the entire simulated time was two seconds for this system. The challenge with these systems, as I already mentioned, is that you need large areas if you want to resolve many steps. Later we also want to include the influence of dislocations, and you need to be sure that the domain is large enough that the boundaries do not influence the local behavior. I did not show results for that here, but it is of course part of the future plan. Then, very important is how much time you spend per single update: checking the neighbors seems at the moment to be the point which costs most of the CPU time, so there is still some room for improvement there. The other issue — which is an interesting problem from the physics side — is that you can have a configuration like this, where two states have similar energies and the barrier between them is not very large; then the system hops back and forth, back and forth, before anything else happens. You spend a very long time in this situation while the system is hardly changing.
There are possible solutions: first of all, you do not know beforehand what happens for a new system, so you have to detect it; once you have detected it, you can circumvent the problem — there are methods to do this. With that I come to the end. KMC is a valuable tool to study the growth kinetics and to understand it in more detail. Two examples: first the flat surface, and then the typical stepped surface which you use for growing layers. For gallium oxide we have good qualitative agreement with the experimental results; aluminum nitride is still under research. The numerical challenges are due to the sequential algorithm; one focus should be to accelerate the update step, because it has to be very fast to allow many, many iterations or larger systems. The other point is of course parallelization, and one plan is also to integrate this into a larger atomistic simulation framework. So, that was my presentation. [Question:] So essentially you describe just a small patch of the surface? — Exactly; you can ask the question and I will try to show an image — the real surface is of course much larger. [Question:] And do you have some assumption on how these species are transported to the surface? — There is no assumption; for the KMC there is simply an impinging rate of particles, and it does not depend on the surface structure. What happens in the gas phase is not considered — it is taken as homogeneous. For instance, when the oxygen molecule arrives at the surface, is there something helping the dissociation, something like an OH group in between which makes the splitting of the oxygen less costly, i.e. the energy barrier smaller? No, these effects are not included. If you knew that, you could also include it, but the model would become more complicated and then it is difficult to set the parameters. For your other question, I wanted to show this one: the white area is the computational domain, shown on top of an AFM image, which of course covers a much larger area; the AFM also shows this step bunching — it is not really atomistic; in the TEM you can see such effects — but you see what the problem is: the domain is rather small. This gives you a flavor of how small it is in comparison to the structures they see in the experiment. We can continue on this afterwards. [Question:] A question from my side: what you have shown are basically planar structures, right — we have seen step flow but also the nucleation of the first layer, as far as I understood. Can you also go to systems that would facilitate 3D growth? — In principle yes, people have done it; you can include strain, for instance for silicon-germanium systems, and you can also have 3D growth there. We apply the KMC to the systems which are of interest to me at the moment — the systems for which the epitaxy process is really done in-house — and there you know that there is no 3D growth so far.
[Question:] So you will not consider it — you are happy that the numerics shows the same as the experiment? — But basically, of course, you can have 3D growth for other systems; that might happen for aluminum nitride, that might be another case. It is different, but it was not the main focus at the moment; people there are in principle looking at the dislocations. We do not do the aluminum nitride epitaxy in our house; the group of Martin Albrecht — mentioned among the authors in the title — they do not do the epitaxy either, but they analyze it, and many questions come from that, from the atomistic analysis. [Question:] So this was gallium oxide? — This is gallium oxide, yes; it is not in industry yet, though. Let me see if I have this — ah, the other picture, which is more from the experimental side. — I was just wondering about the experimental side. — Okay, this has nothing to do with the numerics, this was experimental work. What you see here is the growth rate, and in principle what you want is to increase the growth rate. You see what happens experimentally: you start here with step flow; okay, I increase the growth rate, and I get step bunching — whatever kind of bunching they really have here, because as I said, in the end it can look the same even if two different mechanisms are at work. Then they change the miscut angle, that is, the terrace width, and they are able to get back to a situation where they still have flat step flow; and you see what happens once again when they increase the growth rate. They increase the gallium flux here, you see this, but it is not unique: here they increased the gallium flux, and here they increased the chamber pressure, and with that also the argon flux was increased. These are different measures in the experiment, and the resulting growth rate is different. You see they are trying things out — this work is already one and a half, two years old. Unfortunately I was not yet able — because we stopped the gallium oxide calculations some time ago — to really follow this quantitatively with the KMC, and it is difficult anyway because the energy parameters are not well enough known to really fit it exactly. [Question:] So is this 3D, or is it "two plus one"? — You have to be careful with these expressions; you are completely right, you do not have overhangs. What you have are the two periodic directions, which can be stepped or not stepped, and the height on top of that. In this sense it is 3D, but you do not allow overhangs — that is, empty spaces between layers. If that is what you mean by 3D, then no; otherwise it is, as I said, "two plus one", as one typically calls it. So, are there any more questions?
|
Kinetic Monte Carlo is a nice tool to study the growth kinetics of epitaxial processes on an atomistic scale. However, energy barriers for the various surface processes must be defined and thus either data from ab initio calculations or from dedicated experiments are required. Furthermore, processes can be handled only successively so that there is no natural way for parallelization. We show two application cases: homoepitaxy of Ga2O3 and epitaxy of AlN or AlGaN on AlN. The specific numerical issues for the computation will be addressed.
|
10.5446/57513 (DOI)
|
Okay, let's start. I want to talk shortly about the IOM and what I do, especially on the topic of deep eutectic solvents. At the IOM we use ion beams, plasmas, photons, or electron beams to generate functional surfaces, and the institute has been mainly focused on experimental studies so far. But we are also doing some software development and modeling and simulation: Stefan Gersh is doing software development for the tools we need to make structures on the nanometer scale; Martin Rudolph is doing plasma modeling — I will talk a little about that later; then we have Stefan Meyer — maybe people know him because the MS days were held in Leipzig in 2018 — who is doing computational physics, for example developing coarse-grained models, or models to describe inorganic materials which are biocompatible. And then myself: I work on electron-beam- and photochemically-induced reactions, which I study by applying quantum chemistry approaches, including post-Hartree-Fock methods for systems with strong static electron correlation, and a main topic for me are also the deep eutectic solvents. So this is the current modeling expertise at our institute, as a short overview. In perspective we want to build up machine learning expertise in close cooperation with the Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI), so that in the future we can create a team of AI experts that is integrated into the larger team. Furthermore, we want in the future to integrate these activities — also my team — into the new center in Leipzig, which will be established in 2026. Now to some research activities of my own on the more technical solvents. You see here two examples of what deep eutectic solvents are: in principle a mixture of a cheap organic salt, which you can combine with other compounds, and then you get a liquid at room temperature. Most popular is choline chloride, which is produced on the megaton scale and fed to animals; it is very cheap and environmentally friendly, and it is a solid up to 300 degrees. You can mix it, for example, with urea, which is also a solid above 100 degrees, and then you get a liquid at room temperature. As you can imagine, these liquids are cheap, have a low vapor pressure, and are environmentally friendly — even some bacteria can survive in these solvents. Now you can ask what is "deep" about them. Initially, when they were first reported in 2003 by Abbott, it was said that this is a mixture of two compounds with a significant decrease of the melting point relative to the pure compounds — and you could say, well, that also holds for salt and water. So the question arises: what is a deep eutectic solvent? What you see here is the thermodynamic point of view: a typical eutectic system, which even with ideal behavior has a melting point decrease. Here you see the equation for it, and the most important part is the first one: it depends on the melting enthalpy and the melting point of the pure compound. When you have ideal behavior, the factor gamma — the activity coefficient — is one, and if you have a value smaller than one, you have non-ideal behavior, which results in a further decreased melting point.
So if you say "deep", in principle you should have a strongly non-ideal behavior, i.e. a gamma which is significantly smaller than one. Unfortunately these data are not so easily obtained, and therefore the phrase "deep eutectic" is still used loosely — "eutectic mixtures" might be the better term, as has been proposed, because one usually does not have all the physico-chemical data to see whether it is really a deep eutectic. Where are the fields of application? I have selected four examples. You can use them to generate unique nanomaterials: here you see a gold nanostar, covered completely by high-index planes, which results in high electrochemical activity; you can also obtain smaller nanoparticles, about five nanometers in size, in these liquids. You can use the solvent as a catalyst where the product forms a separate phase. You can use them to generate highly stretchable, non-volatile ionogels, which have applications for pressure sensors and other things. And you can use them for frontal polymerization at low temperature with full conversion, because you obtain a stable front, and this makes them interesting for applications such as coatings. Now you can ask what is responsible for the low melting point. When I started, the literature suggested that charge delocalization due to hydrogen bonding between the halide anion and the hydrogen bond donor moiety is responsible for the decrease of the freezing point of the mixture relative to the melting points of the pure components. This is something you can easily check by first-principles molecular dynamics simulations. We had seen in the case of ionic liquids — which are salts that are liquid at room temperature — that a moderate number of ion pairs is sufficient to converge dipole moments under periodic boundary conditions, and we carried out first-principles molecular dynamics simulations for three systems and subsequently made a partial charge analysis. First-principles charge analysis is based on the electron density, and the most important thing here is what you see for the organic compound: when you look at the sum of the charges on the organic compound in the most popular example, the charge on the organic compound is overall small. Only in the case of the acid do you see a significant negative charge located there, and so you can already say that these hydrogen bonds might not be the origin of the melting point depression in these systems. What you also see here is a linear trend in the negative charge located on the organic compound. What you can do now is look at the hydrogen bonds in these systems: if the acceptor atoms are of similar strength — this is the case for nitrogen and oxygen — you can just look at the distance between the hydrogen bond donor atom and the hydrogen bond acceptor atom. You see here the radial pair distribution function; a value of one means a statistical distribution, and you see here the first solvation peak. The blue curve is between the anion and the oxygen atom of choline, and the second, green one is between the anion and the donor atom of the hydrogen bond donor — in this case the nitrogen atom, here the oxygen atom, and so on.
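As an aside for readers who want to reproduce this kind of analysis, here is a bare-bones radial distribution function estimator for an isotropic, cubic periodic box; the binning, box handling, and normalization details of the actual study are not known to me, so treat this only as an illustrative sketch.

```python
import numpy as np

def rdf(pos_a, pos_b, box_length, r_max, n_bins=200):
    """g(r) between two sets of positions (N,3 arrays) in a cubic periodic box.
    A value of 1 corresponds to the ideal (statistical) distribution."""
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for a in pos_a:
        d = pos_b - a
        d -= box_length * np.round(d / box_length)          # minimum image
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r[(r > 1e-8) & (r < r_max)], bins=edges)[0]
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    rho_b = len(pos_b) / box_length ** 3                    # bulk density of B
    r_centers = 0.5 * (edges[1:] + edges[:-1])
    return r_centers, counts / (len(pos_a) * rho_b * shell_vol)
```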
Comparing these peaks, you see that in the case of urea the hydrogen bond between choline and the anion is stronger than that between the anion and the organic compound, so any charge transfer would rather be between choline and the anion. However, in the last example the hydrogen bond is stronger between the organic compound and the anion, and therefore significant charge is transferred to the organic compound, and you find a negative charge located on it. This is also important input for the modeling. So now we have seen what does not contribute to the low melting point, and we can ask what we can use instead to discuss it. This is the so-called energy landscape paradigm: the portion of the potential energy surface sampled by a liquid or glassy region, unlike the portion associated with the crystalline solid, has a large number of minima of varying depth. In plain words: if you have a shallow potential energy surface, you need only little kinetic energy to push the atoms around. This can be used, for example, to discuss properties. The potential energy surface of such a complex compound is high-dimensional, and you must select what to look at. Here is just one example, where you see the potential energy surface of an ionic liquid: when you replace this acidic hydrogen atom by a methyl group, you remove an attractive interaction, and the experimental groups expected that this would decrease the melting point — but it increased significantly. What you can do is first, by static quantum chemistry, look at the potential energy surface for the anion moving around the cation: you see an overall shallow potential energy surface when you have the acidic hydrogen atom, whereas when you add the methyl group there is a high activation barrier and the anion is pinned above and below the plane. And you see this not only for the isolated ion pairs; you see it also in the molecular dynamics simulation, and then you can understand, for example, why you have this property. We applied this to study our systems, where we have different kinds of hydrogen bond donors. In the urea system we have, for example, these four donor hydrogen atoms, and the experimental group suggested that there is a fast rotation involving the choline oxygen atom, resulting in a side change, and that this gives rise to fast hydrogen bond dynamics. In our case this had not been considered, but we also investigated it. What we can do is look at the lifetimes of the hydrogen bonds to check where the fast hydrogen bond dynamics occurs. This is done with autocorrelation functions: we simply define a hydrogen bond criterion — for example, take the pair distribution function, look at the first minimum and use that as a distance cutoff, and additionally apply an angle cutoff. That defines whether the hydrogen bond is present or not; then you compute the autocorrelation function, integrate it, and you get the lifetime. If you do so, you see that the fast hydrogen bond dynamics is not at this choline oxygen-hydrogen; in fact you find the fast hydrogen bond dynamics at these two hydrogen atoms, and it is very fast. However, this alone only explains part of the picture.
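The hydrogen-bond lifetime analysis described above can be sketched as follows: a geometric criterion turns each trajectory frame into a 0/1 indicator for a donor-acceptor pair, and the (intermittent) autocorrelation of that indicator is integrated to give a lifetime. The cutoffs and the single-pair layout are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def hbond_indicator(d_da, angle_dha, r_cut=3.5, angle_cut=30.0):
    """1 if the donor-acceptor distance (Angstrom) and the D-H...A angle
    deviation (degrees) satisfy the geometric criterion, else 0."""
    return int(d_da < r_cut and angle_dha < angle_cut)

def hbond_lifetime(h, dt):
    """Intermittent H-bond autocorrelation C(t) = <h(0) h(t)> / <h(0) h(0)>
    for one donor-acceptor pair; the lifetime is its time integral."""
    h = np.asarray(h, dtype=float)
    n = len(h)
    c = np.array([np.mean(h[: n - k] * h[k:]) for k in range(n)])
    c /= c[0]                        # assumes the pair is bonded at least once
    return np.trapz(c, dx=dt), c
```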
Now you can look at the experiment. The experimental values you see here: you have urea, and when you introduce one methyl group, you see a change in the melting point. If you go to two methyl groups but still keep these two hydrogen atoms, you see only a slight increase; but you have a strong effect if you really remove one of the hydrogens at that position. This now explains why you have a liquid at room temperature, but not why you have the non-ideal mixing behavior. That was the next step: we studied a homologous series of these eutectic solvents — urea, methylurea, dimethylurea, and so on — and for these, experimental values were also known, which allows us to validate our force field as well. What we can do now is employ Kirkwood-Buff theory to determine the derivative of the chemical activity from the molecular dynamics simulation. What we obtain is this green curve, and you see a good correlation. We can then also take the experimental activity coefficients — these are the red values — and you see there is also a good correlation. Please note that this looks strange, like an inverse behavior, but what is plotted is actually the derivative of the chemical activity with respect to the molality, and therefore you have this inverted appearance in both graphs: it is not the activity coefficient itself but the derivative of the chemical activity. Okay, the next step: the development of the force field, done mainly by my PhD student. This force field is based on a Drude approach, so we have a particle on a spring attached to the heavy atoms, which models the polarizability of the system. What we found is that we need a screening (damping) factor between the oxygen atom of choline and the anion, otherwise we do not get good results. You see here the light green curve, which is without this damping factor, and the red one with it; as soon as we apply this additional screening factor and adjust the parameters, we see that both the structure and the dipoles match the reference data much better, our reference being the first-principles molecular dynamics simulations.
So then we had a force field which reproduces very well the structure of the DFT simulations, with the dipole moments and polarizabilities also fitted to quantum chemistry data, and then we could also compare to experimental data, which works very well. There we used diffusion coefficients that were available in the literature — these can be determined from velocity autocorrelation functions — and you see we have an overall good match for our force field. We can also use the current autocorrelation to calculate the conductivity, and the pressure-tensor autocorrelation function to calculate the shear viscosity, and you see we also have good agreement. So we now have a force field which reproduces the structural data very well but also the dynamic properties, and it is now a good basis for further studies. We can, for example, apply it to dynamic properties which are not accessible by first-principles molecular dynamics, because there the simulation time and the system size are not sufficient. Here we look at the electric current autocorrelation function of the ions, and what you see is this curve, the distinct (cross-correlation) part of the electric current autocorrelation function between cation and anion: on short time scales there is joint migration of cation and anion, but on long time scales we do not see any joint migration. Okay, let me sum up the results on the deep eutectic solvents. We see that we have fast hydrogen bond dynamics at the urea, which can reduce the melting point below room temperature; the incorporation of the anion into the hydrogen bond network of urea is the crucial factor for the observed melting point depression; ion correlation is only observed on short time scales, so there is no joint ion motion over longer times; and the polarizable force field needs this additional damping function between choline and the anion. In the last part I will say a few words about the work done by Martin Rudolph, who is doing plasma simulation at our institute — only a short overview, because it is not done by my group. Here you see high-power impulse magnetron sputtering: the photograph shows how it looks when the plasma is visible, and this is the setup of the device. We have a cathode and an anode; on the cathode is the target, the material you want to deposit on the substrate at the anode, and behind the cathode there is a magnet. In high-power impulse magnetron sputtering you have only a short pulse, active on the microsecond time scale, which generates the plasma, and then material is deposited towards the anode. These are the systems Martin investigates, and for this he uses global discharge models. They are based on volume-averaged plasma chemistry, and the advantage of his models is that he can run simulations very quickly. What one wants to understand with this model is how large the flux of material onto the substrate is and how you can optimize it. He uses the semi-empirical ionization region model (IRM), which is in principle fitted to easily accessible properties like discharge voltage and current, and he can use this model, for example, to determine the optimal process conditions for a large deposition flux of material onto the substrate.
The IRM makes several assumptions, and one assumption is that we have a Maxwellian energy distribution of the electrons. Therefore he also used a model which actually calculates the electron energy distribution. You see here the assumption from the IRM — these are the black lines in the graph — and then, with this other model, the calculated electron distribution as the colored lines. The colored lines already match the IRM assumption quite well, so you can say that the IRM is valid for these conditions. What is important to know is that this more detailed model is about four orders of magnitude slower than the IRM. But if you want to know something about plasma simulation and the IRM, it is best to contact Martin. Finally, I want to thank my coworkers, the funding, and my cooperation partners — and thank you for your attention. [Question:] What possibilities do we have to connect our methods with your approach — you need activity coefficients, for example? — Okay, so the activity coefficients can in principle be calculated by Kirkwood-Buff theory and then by integration. What you do is: you have here the integral over the radial distribution function; this is the molality and this is the chemical activity. What we have plotted is this derivative, because to get the activity itself you must integrate over the composition. So this is one possibility to obtain the activity coefficients from the simulations.
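The two analysis routes that came up in this talk — the Kirkwood-Buff integral over a radial distribution function mentioned in the answer above, and the Green-Kubo integrals over autocorrelation functions used earlier for the viscosity and conductivity — can both be sketched in a few lines. The radial grid, g(r), pressure-tensor series, volume, and temperature are assumed to come from the MD run; finite-size corrections and the exact workflow of the actual study are not reproduced here.

```python
import numpy as np

def kirkwood_buff_integral(r_grid, g_ij):
    """Truncated Kirkwood-Buff integral G_ij = 4*pi * int (g_ij(r) - 1) r^2 dr."""
    integrand = (np.asarray(g_ij) - 1.0) * np.asarray(r_grid) ** 2
    return 4.0 * np.pi * np.trapz(integrand, r_grid)

def autocorrelation(x):
    """Plain (unnormalized) autocorrelation <x(0) x(t)> of a 1D time series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    return np.array([np.mean(x[: n - k] * x[k:]) for k in range(n)])

def green_kubo(acf_series, dt, prefactor):
    """Integrate an autocorrelation function to get a transport coefficient,
    e.g. shear viscosity: eta = V/(kB*T) * int <P_xy(0) P_xy(t)> dt."""
    return prefactor * np.trapz(acf_series, dx=dt)

# usage sketch (placeholder inputs):
# eta = green_kubo(autocorrelation(p_xy), dt=2e-15, prefactor=V / (kB * T))
```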
|
Deep eutectic solvents are liquids composed of two compounds with a significant melting point depression. Most popular are mixtures of choline chloride, a provitamin produced on the megaton scale. Deep eutectic solvents composed of polymerizable compounds allow frontal polymerization at low temperature and with full conversion and thus, unique functional coatings can be obtained. To understand the origin of non-ideal mixing behavior of deep eutectic solvents, we have investigated a homologous series of choline chloride by molecular dynamics simulations. We found that the incorporation of the anion into the hydrogen bond network plays a central role for the observed properties. Additionally, we will present a polarizable force field developed to study the ion correlation in these liquids.
|
10.5446/57514 (DOI)
|
Thank you very much, and thank you for the invitation. The topic of my talk is binary black holes; I am trying to span the long arc from numerical simulations of binary black holes all the way to gravitational waves. To start out, I am quite pleased to be here — I always wanted to come and actually visit — and I would like to give a little thanks for your climate data, of which I am a regular user, just to see how much rain we had or had not here in the area; usually rain is lacking rather than too plentiful, unfortunately. Okay, the two-body problem is actually one of the simplest problems in physics. In Newtonian gravity it is remarkably simple: you have two point masses, a separation vector r, the Newtonian force of gravity, motion in a central potential. It is a few-line calculation to prove that point masses orbit on an ellipse, and we are done — problem solved. In general relativity, life is a little more complicated. The point masses are replaced by black holes, the most compact structures in general relativity, without any internal structure. Black holes are immediately not point-like: they have event horizons, indicated in this movie by the big yellowish and orange spheres, and spin, indicated by the color coding of the bigger of the two black holes. There is exchange of orbital angular momentum and spin angular momentum between the two black holes, which leads to orbital precession — indicated by the purple plane showing the instantaneous plane of the orbit — as well as spin precession. Not visible here is that the approximately elliptical orbits are not closed anymore but precess: the line of the semi-major axis precesses. There are also resonances in the system. Most importantly for this talk and for current scientific endeavors, there is gravitational wave emission: through the emission of gravitational waves, energy and momentum are lost, and the two bodies come ever closer together until eventually they merge. After merger there is a ringdown with quasinormal-mode emission, and there can also be black hole kicks, where the remnant black hole moves differently from the original center of mass owing to the loss of linear momentum through the gravitational waves. So there is a lot of stuff going on here, and among all this stuff — quite importantly from my perspective — along the way we also have to solve the full Einstein equations of general relativity. To give you an outline of the rest of my talk: I will first give you some background about gravity, black holes, and gravitational waves. Then I will switch over to numerical relativity — how one goes about solving Einstein's equations on supercomputers to actually obtain the type of results I was just illustrating. And then, hopefully, a few minutes will be left to talk about gravitational wave astronomy and to illustrate how these numerical simulations help us understand gravitational waves and what they tell us about black holes, the universe, and fundamental physics. So let's start with the background. The theory starts with the assumption that space and time are curved, with a non-Euclidean geometry — for instance, the circumference of a circle is no longer two pi times the radius — and time flows at a variable rate.
The curvature of space and time is determined by Einstein's equations, where on the left-hand side the R represents the curvature and the right-hand side represents the energy and mass content of the universe. So masses in the universe create curvature, illustrated here in this picture with the big sun sitting here and, below, an illustration of the curvature of space and time. The earth feels the curvature created by the sun, and that way the earth knows how to move around the sun. The curvature travels at the speed of light: in the right setting there are wave equations within the system that tell you how small perturbations of space and time propagate through space, coincidentally with the speed of light. Black holes in this picture are represented by a tear in spacetime, a singular point where the curvature becomes infinitely strong. They are vacuum, with a singularity in the spacetime itself; they are made entirely of curved spacetime — vacuum objects. They can rotate, where rotation means that space is actually dragged around the black hole: one cannot stand still close to the black hole. And the curvature is so strong that the causal structure of spacetime changes. This is illustrated in the bottom panel, which is a spacetime diagram showing a few light cones. Far away from the black hole, light can move towards the black hole and away from the black hole. But the closer you get to the black hole, the more the light cone tips towards the interior, and there is a critical surface where the outgoing light rays move tangent to the time axis — they stay at the same radius. Even closer in, the light cone has completely tipped over and points entirely towards the center of the black hole, in the middle of the blackish tube. All material objects have to move within the light cone, because nothing can move as fast as the speed of light, and light itself moves on the light cone by definition. So once the light cone has completely tipped over, since everything moves inside the light cone, there is no way of getting back out of this region of spacetime. This boundary surface is called the event horizon. It is not a material surface — it is literally just a region of empty space — but once you are on the other side, because of the causal structure of spacetime, you can never make it back out again. The area of the event horizon is well defined; you can use Euclidean geometry to assign a radius to it, the so-called Schwarzschild radius. This is three kilometers for a solar-mass black hole, and it grows proportionally to the mass of the black hole. The biggest black holes, like M87*, are supermassive — you heard a talk about this on Monday by Luciano, so I do not have to tell you about the supermassive ones. I would like to show you instead a little bit more from the numerical simulations: here a calculation that actually shows the event horizon itself. There is a really quite interesting behavior: just before the merger of the two black holes, the event horizon reaches out and forms thin connecting lines in the middle, before the bulk of the two black holes also connects.
This can be understood in terms of the definition of the event horizon: the event horizon is defined by the null geodesics that just do not make it out from the black holes to future infinity. Early on you have the two regular spherical event horizons of the two black holes, but in the middle, in order for geodesics to get away, they have to overcome the gravity of both black holes, left and right. So it is particularly hard to escape from the middle, and therefore the region of spacetime already captured by the black holes is bigger than one would naively think. This plot also illustrates a few of the future event horizon generators. Here, for instance, is a light ray — a null geodesic — that first arrives from somewhere far away from the two merging black holes; but this geodesic has been chosen very carefully such that it joins the event horizon at the seam of the pair of pants and then continues on the event horizon into the future. This is an illustration of Hawking and Penrose's event horizon theorems — the global theorems that event horizons are foliated by null geodesics, and that null geodesics can join the horizon at a caustic. So a lot of cool things are going on on the theoretical side. But there are also actual astrophysical observations of black holes: both stellar-mass black holes like Cygnus X-1, where one can see X-rays from the accretion disk around the black hole in the middle — the accretion disk is fed by a nearby star that loses material, first onto the accretion disk and then onto the black hole itself — and supermassive black holes in the centers of galaxies, like our own Milky Way as well as M87. I am showing these movies because I find these particular observations just really, really impressive, and they are fully deserving of the Nobel Prize awarded in 2020. Now the last movie for a while about black holes. This is how a representative LIGO-Virgo stellar-mass black hole binary actually looks, at least in visualizations. We have no surfaces of the black holes, therefore I am not showing any surfaces here; I am showing the equatorial plane of the binary, where the height indicates the curvature of space and the color coding the time dilation, the rate of the flow of time. The black holes have already approached each other emitting gravitational waves — you can see them out here in the back in purple — and now is the time of the merger of the two black holes, which is an event that is really, really short in time; the quasinormal ringing afterwards also lasts a very short amount of time. Zooming out, you see the strongest gravitational waves from the merger propagating outward. A few facts that are representative in this movie: there are no bodies — we have vacuum and Riemannian geometry. The two black holes are of comparable mass; pretty much all the systems observed in gravitational waves so far have comparable masses. The inspiral is very fast, and only about 10 or 15 orbits are actually visible in gravitational waves. There is only modest precession, and the orbital eccentricity tends to be zero, because it has been radiated away in the preceding inspiral. What this movie does not show at all is the fact that gravitational waves are emitted in all directions, not just in the equatorial plane.
Let me give you a little sense of scale of what we are talking about. The typical black holes this talk is about have a diameter of about 100 kilometers or so, masses of about 30 solar masses, and they orbit at an initial separation of about 500 kilometers, with about 20 orbits per second. So the black holes cover the distance from Poland to France and back 20 times per second, moving at 50,000 kilometers per second, and the whole process the movie was showing takes in reality only about 0.1 seconds. In this 0.1 seconds about three solar masses of rest-mass energy are emitted in gravitational waves — a luminosity of about 30 solar masses per second. If you compare this to the luminosity of the sun, which emits 0.007 of its rest mass over its entire life of 10^10 years, you see that gravitational-wave emission events are highly energetic for as long as they are ongoing. The peak gravitational-wave luminosity is about 10^23 times the luminosity of the sun, approximately 10 to 50 times the entire luminosity of the observable universe. All of this energy goes into gravitational waves; black holes are dark, there is no light emitted, and so the whole process of a binary black hole merger is invisible to electromagnetic telescopes — unless, of course, there happens to be matter nearby such that the matter itself can emit photons, for instance if there is an accretion disk. Okay, so this is the process we are looking into: two massive black holes, about 10 to 30 or so solar masses each, orbiting about each other and merging. There are a few different tools used to calculate this process. I am showing on the x-axis the frequency of the system, increasing to the right, corresponding to ever tighter orbits leading to merger and ringdown as you go far to the right. Early in the inspiral, when the two bodies are widely separated and moving slowly, the system can be calculated with post-Newtonian theory, which is a perturbative expansion in the velocity of the binary. Out of this come, after really complicated calculations, very long perturbative expansions that go up to quite high powers of v over c — up to (v/c)^8 by now. On the opposite side, the ringdown to one quiescent black hole can be modeled with black hole perturbation theory: you can compute the frequencies and decay rates of the various quasinormal modes as a function of the black hole spin and mass, as indicated in this plot. One feature is that black holes are amazingly dissipative — they are the objects with the lowest Q factors among all objects known. So these are not exactly church bells: they only perform one, two, or three quasinormal periods, and then the ringdown has already decayed away. In the middle is the realm of numerical relativity, where velocities are too large for post-Newtonian theory, approaching the speed of light, where the systems are highly dynamic and highly asymmetric because the two bodies are merging with each other, and where the system is far too deformed to apply black hole perturbation theory. There are also other approaches, like small-mass-ratio perturbation theory at high mass ratios, but those are not really needed for my talk, because the primary point I want you to take away is that, for the observations that have been made so far,
those lie in the region of parameter space where numerical relativity tends to be of utmost importance: systems of comparable mass, mass ratio q close to one, where the late inspiral and the merger are visible in gravitational waves. And that is why I want to talk about numerical relativity. So how does one actually go about solving Einstein's equations? The task is simple to state: we want the spacetime metric g_ab that satisfies Einstein's equations, which are quite complicated partial differential equations for the ten metric components. The first difficulty arises because Einstein was so amazingly nice as to formulate everything coordinate-invariantly, not even having fixed time and space directions anymore. But in order to do anything with computers, we need to split spacetime back into space and time. So we take the nice four-dimensional spacetime volume, indicated here in very light gray, and put in hypersurfaces that we label by the respective time coordinate. You can see the 2-surfaces grow as you go up in time, representing a black hole that grows by accretion of gravitational waves. Performing this split into space and time, it turns out that Einstein's equations become a set of evolution equations that give you the time derivatives of certain quantities, plus a set of constraints that contain the same quantities — the g and the K — but without any time derivatives. This is quite similar to Maxwell's equations, where the curl equations tell you how to evolve E and B forward in time, but not all E and B fields are allowed by Maxwell's equations: they have to satisfy the divergence constraints — in vacuum, both the E and the B field have to be divergence-free. So we have constraints that we have to solve before we even start the simulation, and evolution equations that then tell us how to go forward. All of this looks quite simple — so why is this hard? The equations I was just sketching are called the ADM equations; they have been known since the 1960s, and they turn out to be ill-posed, so they have to be rewritten as a well-posed hyperbolic system. There are singularities in the middle of the black holes, where curvature becomes infinite, where accelerations become infinite, and someone has to deal with these singularities. The constraints — the equivalent of the divergence equations of Maxwell's theory — are in GR nonlinear and quite complicated, and any small constraint violation that arises because of numerical error tends to grow exponentially in time, on a timescale of the light-crossing time of the black hole. The simulations would not be possible at all, because the constraint violations would destroy everything. You also have to choose coordinates for a spacetime that you do not know yet. And we face a lot of numerical challenges: on the order of 50 variables to be evolved, right-hand sides of the evolution equations that take about 10,000 floating-point operations per grid point, very different length scales — small black holes compared to the long wavelength of the gravitational waves — and we need quite high accuracy in order to support gravitational wave astronomy. So a lot is going on. The field actually has a very long history, with the first numerical calculation dating back to 1964, simulating the head-on merger of two black holes on a grid with 51 by 151 grid points, taking four minutes per time step and doing 50 time steps.
So, a first groundbreaking simulation. It took about 30 years, from the 1960s until the middle of the 2000s, to understand all the difficulties and complexities of Einstein's equations and to devise computer codes that circumvent all these problems. Starting around 2005, black hole simulations became possible, and so all the results I am showing you are from 2005 and thereafter. The very first simulations look quite modest by today's standards: basically no inspiral visible yet, only the merger waveform and the ringdown — here you see how quickly black holes ring down to a quiescent black hole and the signal is gone. The big feature of this very first calculation is that what you are seeing is gravitational waves measured at three different distances from the black holes, and they are all on top of each other. The big triumph was that the simulation was stable and that you actually got traveling features moving away from the merging black holes. By now, two main approaches to simulating binary black holes have been established. I am not going through the details here; the interesting feature is that they differ in basically every choice you can make: they differ in how they construct initial data, in which evolution equations are used, in the coordinate choices, in the outer boundary conditions, and in the numerical methods, finite differences versus spectral. Nevertheless they lead to consistent results, so we are actually quite confident in the community that we know what we are doing and that we can solve binary black holes correctly. I am highlighting some facts about the second approach here, because this is work by my own collaboration using the Spectral Einstein Code (SpEC) in the Simulating eXtreme Spacetimes collaboration, encompassing the AEI, Caltech, Cornell, and a few other North American universities. In our approach — spectral methods — we expand our solutions in basis functions: Chebyshev polynomials and spherical harmonics as functions of space, multiplied by time-dependent coefficients, and finding the solution u(x, t) boils down to finding these coefficients as functions of time. One nice feature of spectral methods is that, because you know the basis functions like the Chebyshev polynomials, it is trivial to compute derivatives: you can do so exactly by taking the derivatives of the basis functions, without having to do it in physical space. The other really important feature of spectral methods is that, if the problem is smooth, this type of series expansion converges exponentially quickly in the number of basis functions. This will allow us, as I will show later on, to reach quite high accuracy with comparatively modest computational cost. Near the black holes we decided on a domain decomposition where, close to each of the black holes in the middle, we expand in spherical shells with spherical harmonics; at intermediate distances there are cylinders and blocks, and at large distances there are more spherical shells encompassing the whole system at once. This allows us to do black hole excision, which I will get to in a moment; it allows adaptive resolution; and it also allows parallelization by putting different elements on different processors.
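The two properties of spectral methods highlighted above — derivatives taken exactly on the basis, and exponential convergence for smooth functions — can be illustrated in a few lines of Python. This toy uses numpy's Chebyshev routines and has, of course, nothing to do with the actual SpEC implementation.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def spectral_derivative(f, n_coeffs, x_eval):
    """Fit f on [-1, 1] with n_coeffs Chebyshev coefficients and return the
    derivative at x_eval, obtained by differentiating the basis functions
    exactly (chebder) instead of finite differencing."""
    nodes = np.cos(np.pi * (np.arange(n_coeffs) + 0.5) / n_coeffs)  # Chebyshev nodes
    coeffs = C.chebfit(nodes, f(nodes), n_coeffs - 1)
    return C.chebval(x_eval, C.chebder(coeffs))

x = np.linspace(-1, 1, 201)
for n in (5, 10, 20, 40):
    err = np.max(np.abs(spectral_derivative(np.sin, n, x) - np.cos(x)))
    print(n, err)   # the error drops exponentially with n for smooth functions
```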
I would also like to give you a flavor of the Einstein evolution equations and how one actually goes about fixing these nasty constraints that tend to blow up exponentially. In our approach to solving Einstein's equations, we start directly from the four-dimensional version of Einstein's equations, written down here on the left-hand side. To remind you, g_ab is a four-by-four matrix, the metric of spacetime; this is the unknown we want to solve for. If you go through the math, it turns out Einstein's equations turn into a wave operator acting on each component of the metric separately, plus some other second-order terms that can be schematically written this way here (those are gradients of the Christoffel symbols, for those who know differential geometry), plus several thousand lower-order terms that have no impact on the mathematical structure of the equations but have to be coded correctly. The big trick in solving this equation, despite this middle term here, which actually destroys the hyperbolicity of the wave equation, is to deal with this middle term by choosing suitable coordinates. It turns out these Gamma symbols here are related to the choice of coordinates. So the trick is to choose coordinates such that the wave operator acting on the coordinates equals a known function. That way box of x is a known function, box of x is our H, and box of x is also given by the Gammas; the gradient of Gamma is then the gradient of a known function and has been removed from the leading-order, second-order principal part of the system. So the trick is to use coordinate conditions to eliminate the terms that destroy hyperbolicity. Step two is to deal with the constraints, which now take the form that the functions H that we have chosen to simplify our evolution system must continue to be equal to a certain wave operator acting on the coordinates themselves. The constraints again would blow up exponentially in the simulation. So the big trick that helped the field proceed goes under the name of constraint damping, and the idea is to modify the evolution equations by hand, adding this new red term here. This new term is proportional to the constraint violation itself, the capital C's. If the constraints are zero, the situation we want, then we have not modified the equations at all. However, if the constraints become non-zero, then this extra term modifies the behavior of the equations such that the constraints are exponentially damped back to zero again on a timescale set by gamma, which is a freely choosable parameter. The damping of the constraints, combined with writing the equations in a hyperbolic form, were the two most important fundamental mathematical steps that made black hole simulations work. In the simulations, we take care of the black holes by not taking care of them. I already illustrated earlier that light cones tip over as you go inside the black hole. We place an excision boundary just inside the event horizon of each black hole, where the light cones have completely tipped over. Because Einstein's equations reduce to wave equations with speeds of at most the speed of light, all the information propagates inside the light cone, and therefore at this red excision boundary we do not need any boundary conditions; it is a perfect outflow boundary. It allows us to do black hole simulations without having to worry about the singularities. But it requires a lot of extra complexity.
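Written out schematically, the gauge choice and the constraint damping described above look roughly as follows. This is an editor's reconstruction in standard generalized-harmonic notation; signs, index placement and the precise form of the damping term vary between papers and may differ from the speaker's slides.

```latex
% Generalized harmonic gauge: the wave operator acting on the coordinates is a chosen
% function H^a,
\Box x^{a} = H^{a}(x, g),
% and since \Box x^{a} = -\Gamma^{a}, with \Gamma^{a} = g^{bc}\Gamma^{a}{}_{bc}, this
% defines the gauge constraint
\mathcal{C}^{a} \equiv H^{a} + \Gamma^{a} = 0,
% which removes the hyperbolicity-breaking \partial\Gamma terms from the principal part.
% Constraint damping then adds a term proportional to \mathcal{C}^{a} to the evolution
% equations so that small violations obey, schematically,
\partial_t\, \mathcal{C}^{a} \simeq -\gamma_0\, \mathcal{C}^{a}
\quad\Longrightarrow\quad
\mathcal{C}^{a}(t) \sim \mathcal{C}^{a}(0)\, e^{-\gamma_0 t},
% i.e. numerical constraint violations decay instead of growing exponentially.
```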
One of those extra complexities, for instance, is that we have to adjust and adapt the grid to follow the motion and the shapes of the black hole event horizons very, very carefully, and so our excision boundaries are actually not just spheres but quite complicated, deformed objects, as you can see. We also need boundary conditions there, which I will skip. At the end of the day, what all of these numerical tools give us is, on the one side, beautiful gravitational waveforms covering the inspiral here, up to the merger and then the ringdown. The panels show the same data on a linear y scale and a logarithmic y scale, and the lower panel illustrates very clearly our good dynamic range in the merger, where we can resolve the ringdown over five orders of magnitude just after merger. Perhaps most important on the slide is the right panel, which demonstrates the numerical convergence of this quite complicated spectral scheme. In the evolution we have something like 40 cubed basis functions, and the overall phase error of the inspiral is then perhaps one or ten radians, totally insufficient for gravitational wave astronomy; but merely increasing the resolution from 40 cubed to 60 cubed, a factor of 1.5 linear increase, brings the error down dramatically. This very quick improvement is afforded by the spectral convergence. Here is one example of what we can do with quite accurate codes: a comparison to the post-Newtonian results for the earlier inspiral. Here is a plot where the x-axis covers the last 30 gravitational wave cycles, the last 15 orbits of the two inspiraling black holes. On the y-axis is the difference between our numerical relativity simulation (I have just shown you that we have errors of roughly 0.01 on the scale here) and the post-Newtonian calculations that are used for the earlier inspiral. In this particular case I have aligned things such that 20 cycles before merger post-Newtonian perfectly agrees with numerical relativity. If I now go closer to merger, you see that post-Newtonian begins to deviate from numerical relativity. That is no big surprise: the closer to merger, the worse post-Newtonian gets. But here post-Newtonian also gets worse at earlier times, when the separation of the black holes is still larger. So this indicates that post-Newtonian order two, up to terms of velocity to the fourth power, isn't good enough at all, even for this simplest of cases. If one increases the post-Newtonian order, things improve, and going up to 3.5 post-Newtonian order for this quite simple system you see very good agreement between post-Newtonian and our calculations for the first 10 gravitational wave cycles, and then post-Newtonian still begins to diverge from the numerical result. So these kinds of comparisons can both benchmark and validate the numerical simulations, and they can also give us a sense of how long the simulations need to be and how long one can trust the earlier post-Newtonian results. Many people in the field were surprised how far ahead of merger post-Newtonian already becomes inaccurate. With these simulations a big parameter space exploration has started, simulating black holes with different masses and different spins. By 2009 about 20 simulations had been done worldwide, by 2013 it was about 200, and by now there are about 2000 simulations, with ever increasing quality, that have been completed by our collaboration, plus more that have been completed by other groups.
Each simulation tends to take between 10,000 and 100,000 CPU hours, simply in order to be able to keep pace with the accuracy requirements of the ever improving gravitational wave detectors. So there is a lot of computing time and effort sitting in plots like this one here, where each simulation is represented by only a single point. With so many simulations, to represent the masses and spins, the parameters of the systems studied, we can't even show waveforms anymore like we did earlier. So a lot has been done in numerical relativity. Now let me switch over, as a brief interlude, and show you this image of a simulated Milky Way galaxy. This is actually a completely synthetic image; there isn't any true telescope involved here. I am highlighting this star here just to illustrate what happens if we put a binary black hole between us and this galaxy. What happens is that the two black holes sitting here deform spacetime in the middle, and that means the light ray going from the galaxy to us now has to go around the two black holes. And so the original star I was highlighting has actually moved a little bit to the lower right; if I flip back and forth, this is more obvious. Because of the curvature around the black holes, there are also other ways light can reach us from this Milky Way and from the star in the background. For instance, it can go around on the left-hand side of the black holes; it's a longer path where you need stronger deflection, but still you can identify the very same star here on the left-hand side as well. You can also go through the middle, which gives you the yellow spot here, and light can go really crazy ways: it can go around one black hole, being deflected so much that the light ray actually goes back around the second black hole, and then it emerges right here, really close to the black disc of the actual black hole from which no light rays can escape. I'm explaining this in so much detail so you have a little bit of background for this movie, which unfortunately has been compressed by Zoom quite a bit, reducing the resolution. It illustrates how it would actually look if we could see a binary black hole merger through the light distorted around the two black holes, light coming from below and arriving at telescopes here on Earth. I apologize for the bad quality due to Zoom, but this is what a binary black hole merger really would look like, the whole process taking about 0.1 seconds for black holes like the ones visible to LIGO and Virgo. Gravitational wave astronomy uses quite a few detectors worldwide: two in North America, the two LIGO detectors; GEO and Virgo in Europe; the KAGRA detector in Japan, which is just joining the effort; and this one being under construction in India. These detectors have already observed quite a few astrophysical systems: binary black holes, binary neutron stars, black hole-neutron star binaries. There are about 100 events that are currently presented by the LIGO-Virgo collaboration in this nice summary plot. The vertical axis is the mass of the bodies. Blue are binary black holes observed by gravitational waves, typically roughly 20 solar masses or so.
Then there are the binary neutron stars observed by gravitational waves, and somewhere in here also systems combining a black hole and a neutron star to form mixed binaries. The first binary black hole was discovered in 2015. It corresponded to 29 plus 36 solar masses that merged into a black hole of 62 solar masses; I showed you earlier the map of Europe to give you a sense of scale. That was the discovery of binary black holes, and that was the discovery of gravitational waves. Among astrophysical properties like the masses, it was also determined that the signal was fully consistent with general relativity, as I described earlier in the talk. As an illustration of this, the top panels are the observed gravitational waveforms as measured by LIGO (Virgo was not in operation at that time). The middle panels show numerical relativity calculations, actually computed with the code I was describing earlier, the SpEC code. And the third set of panels shows the difference, the residual, which is consistent with noise, illustrating in a very visual way my statement that the signals are consistent with general relativity. Just to remind you, waveform knowledge is utterly essential in this whole process, and therefore so are the simulations we are doing. First of all, signals are detected by matched filtering, where you need filter templates that already have the expected gravitational wave shapes. If you want to know masses and spins and other properties of the systems, this is done by comparing the measured gravitational waveform with the theoretical expectations. For some masses, like 36 plus 29, there is very good agreement between the theoretical prediction and the measurement; for others, like 66 up here, there is absolutely no agreement. And the region with agreement, color coded here, is then deemed to be the measurement of the masses and spins and other properties of the source. This requires knowing the waveforms. Testing general relativity, of course, also requires knowing the waveforms: how can you test a theory if you don't know the prediction of the theory? So numerical relativity is important in all of this. I'll just highlight one more system here to illustrate that this is an ongoing process, and that even the very latest improvements being made on the theoretical side are actually important for the gravitational wave instruments. This plot illustrates a one-to-ten mass ratio merger. What is shown here in the extra panels is that, in addition to the leading-order quadrupolar gravitational wave modes in the main panel, closer to merger sub-dominant, higher-order modes also become important: octupolar, 16-polar, 32-polar and 64-polar gravitational waves. Over the last few years this sub-leading information was added into the gravitational wave models, based on the numerical relativity simulations. Its very practical impact is illustrated here in this panel, showing the mass of the lighter object. It has about 2.6 or so solar masses, and so a big question is: is this a black hole or is this a neutron star? The mass could be up to about 2.5 or so solar masses; with the older, simpler waveform models used for parameter estimation, one only got this very broad distribution of possible masses of the secondary, going all the way down to 2.2 solar masses. However, with the fancy new gravitational wave models, the mass could be constrained to be between 2.5 and 2.7 solar masses. So this really helps when analyzing the data.
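As a toy illustration of the matched filtering idea mentioned above, here is an editor's sketch in Python. It is not the actual LIGO-Virgo pipeline (real searches whiten the data by the detector noise spectrum and work in the frequency domain); it only shows the core idea of sliding a known template across noisy data and looking for a correlation peak.

```python
# Editor's toy sketch of matched filtering: slide a known template across noisy data
# and look for a peak in the correlation. Real searches whiten the data by the
# detector noise spectrum and work in the frequency domain; this only shows the idea.
import numpy as np

rng = np.random.default_rng(0)
fs = 4096                                   # sample rate in Hz (arbitrary here)
t = np.arange(0, 1.0, 1 / fs)

# A crude chirp-like template: frequency sweeps upward, amplitude grows, then it stops.
template = np.sin(2 * np.pi * (30 * t + 40 * t ** 2)) * t ** 2
template /= np.linalg.norm(template)

# Hide a loud copy of the template in white noise at a known offset.
data = rng.normal(scale=1.0, size=3 * fs)
offset = fs                                 # signal starts 1 second into the data
data[offset:offset + template.size] += 10.0 * template

# Correlate (a matched filter under a white-noise assumption) and find the peak.
snr_like = np.correlate(data, template, mode="valid")
peak = int(np.argmax(np.abs(snr_like)))
print(f"recovered offset: {peak} samples (injected at {offset})")
```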
Here is another example where the new waveform models actually help. But this example also shows that there are still differences depending on which type of waveform model you use to analyze the data. Here are two different ones; they give slightly different mass ratios and slightly different spins. Nothing too egregious, but it still shows that we are only close to where we want to be in terms of accuracy, and that we also need to improve the modeling because future detectors are becoming more and more sensitive. LIGO is going to take data again starting at the end of 2022 at higher sensitivity, as well as Virgo and KAGRA. Beyond the current LIGO and Virgo detectors, there are also plans for future ground-based detectors, like the Einstein Telescope, that are supposed to be approximately 10 times more sensitive and will require better waveforms to exploit them. There is also a lot more frequency spectrum to exploit. I've been talking about high-frequency gravitational waves, with frequencies of about 100 hertz or so. There are also lower-frequency gravitational waves that can be traced back, for instance, to the mergers of galaxies. Galaxies have supermassive black holes in their centers, and if two galaxies merge, the two black holes at the centers eventually also find each other and merge. These gravitational waves end up in the millihertz regime and are searched for with space-based detectors, with LISA to fly in the 2030s. At even lower frequencies one can look for a stochastic gravitational wave background, from inflation in the early universe, with pulsar timing arrays. So there is a lot going on in different frequency bands, and the current LIGO-Virgo results are just the tip of the iceberg. Just to illustrate what a small tip of the iceberg we currently have: here is a plot of black hole masses from 10 up to a million or 100 million solar masses, and here the distance back into the universe, between zero and 200 gigaparsecs. The range currently covered by LIGO and Virgo is this little spot down here, the nearby universe up to about 100 solar masses. Future ground-based detectors will cover this region here of the parameter space, and LISA will cover this region here. So ultimately black holes will be visible at all masses and throughout the universe in the coming decades. There is an exciting road ahead of us. To summarize: black holes are fascinating objects; numerical calculations investigate these black holes and their gravitational waves; and these numerical relativity simulations also feed into gravitational wave astronomy, which has opened a new window to observe the universe. Thank you very much. Thanks for the fascinating lecture, and thanks also for making me appreciate the simplicity of climate modeling. Just kidding, of course. Are there any questions? Okay, here is a slightly more technical question: you're using a domain decomposition method with spectral bases, as far as I understand. How do you do the coupling? Do you, let's say, constrain your basis functions to the domains, or do the basis functions overlap between different domains and you interpolate between them, or how do you do this? We have tried overlapping basis functions, and they always led to instabilities. So by now we don't do any overlap anymore; we use elements that are touching each other. At the boundary we decompose our evolution fields into characteristic modes that have certain characteristic speeds relative to the boundary.
Whatever is going left at the boundary, you take the data from the right domain and use it as the boundary condition for the left domain; and whatever is going right, you take the data from the left domain and use it as the boundary condition for the right domain. Okay. It's a fairly similar setup on internal boundaries as on external boundaries. The biggest difference is that at external boundaries we have to give boundary conditions by hand, whereas internally the neighboring domains automatically provide the boundary data. Okay. Coming back to your remark about the simplicity of climate modeling: I respectfully disagree. Perhaps the most fundamental difference is that, in what I've described, we know the equations we are solving. Oh yes, yeah, that helps. And our task is to solve them very, very accurately, whereas in climate modeling I'm always completely frightened by the wide variety of different physical effects that come in and that one has to worry about. So am I. I'm not sure what the answer is. Well, this does not appear to be the case here. So thanks.
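To illustrate the hand-off between touching domains described in this answer, here is an editor's toy example in one dimension. It uses simple first-order upwind finite differences rather than a spectral method, and a single right-going characteristic, so it only captures the flavor of the characteristic-based coupling.

```python
# Editor's toy sketch of "touching" domains exchanging characteristic data:
# advection u_t + a u_x = 0 with a > 0 (everything travels to the right), solved
# on two touching domains [0,1] and [1,2] with first-order upwind differences.
# The right domain's inflow boundary takes its value from the left domain;
# the left domain's inflow boundary is an external condition given by hand.
import numpy as np

a, nx, dx = 1.0, 101, 0.01
dt = 0.5 * dx / a                          # CFL-stable time step
uL = np.exp(-((np.linspace(0.0, 1.0, nx) - 0.3) ** 2) / 0.005)   # pulse in left domain
uR = np.zeros(nx)                                                # right domain starts empty

def upwind_step(u, inflow):
    """One upwind step; `inflow` is the value entering at the left edge."""
    unew = u.copy()
    unew[1:] = u[1:] - a * dt / dx * (u[1:] - u[:-1])
    unew[0] = inflow
    return unew

n_steps = int(round(1.0 / dt))             # advect for one time unit
for _ in range(n_steps):
    left_edge_value = uL[-1]               # outgoing data at the interface
    uL = upwind_step(uL, inflow=0.0)       # external boundary: given by hand
    uR = upwind_step(uR, inflow=left_edge_value)  # internal boundary: from the neighbor

print("pulse has moved into the right domain:", uR.max() > uL.max())
```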
|
Several times per hour, a pair of black holes coalesces somewhere in the observable Universe. Direct supercomputer calculations of binary black holes elucidate the dynamics of warped space-time and underpin gravitational wave observations of these systems. This talk introduces the techniques of such simulations and their application to gravitational wave astronomy. We summarize current observational results and future challenges.
|
10.5446/57429 (DOI)
|
…smallholder farmlands with specific crops, using the TensorFlow library alongside satellite imagery, both optical and radar. She's currently working with a team that is identifying cashews in Benin using the above-mentioned technologies. So far the model they have created identifies cashews at 77% accuracy. My name is Lydia and I'm going to be discussing artificial intelligence. I'll be talking about some of the work that we've done so far and how we experienced the unintelligent part of artificial intelligence. Currently I am working with GeoGecko on a project called Fieldy Focus. The purpose of Fieldy Focus is to de-risk agri-business supply chains by quantifying crop acreage in smallholder-dominated environments. This means that, as the ML engineer working on this project with the rest of the team, our role is to build the models which are going to be used to identify the crop acreage. We started this journey with machine learning last year, working on bananas and trying to identify banana crops in the western part of Uganda. We did that for some time, and then we also got some data on maize and worked on that as well. Early this year we were fortunate to get some data on cashew, which we obtained from Radiant Earth, and this data is on the locations of cashew farms in Benin. Our best dataset so far was the cashew one, and when we worked with it we created what is probably currently our best model, reaching an accuracy of about 77%. We intend to also work with other crops like cotton and palm oil; we are working on sourcing the data and will then build the models around that. Okay, now straight into the more interesting parts of the presentation: we are going to talk about the mistakes that we actually made, the wrong assumptions that we had in the beginning, what we learned, and how we moved on from the challenges that we experienced. I will break this down into two groups: first the misunderstandings, mistakes and challenges in regard to data and techniques, and then the ideologies behind machine learning. So first of all, as far as data is concerned, the biggest myth is: the more data you have, the better the model you are going to build. We started out thinking this as well. We were so excited; we got a dataset that had over 7,000 points, so we were very happy and ready to get our hands into it, start working and get beautiful models. But then we opened up the data. I think that is usually how it goes: you receive the data, everyone says the data is amazing, you read the documentation and it looks beautiful, you look through some of the metadata and it looks promising, and then you open it up. Points were not where they claimed to be. There were problems with geolocation; some of the locations were not as accurate as the people who collected the data thought they were. We had points that were falling in the ocean. We had points that claimed to be fields but were sitting on what was clearly urban area. So we ended up losing a lot of data, and it also made us question the integrity of the data that we had, which made us a bit skeptical about working with that particular dataset. And yeah, that's where the excitement died, two weeks later.
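A sketch of the kind of geolocation sanity check described above, for anyone curious what it looks like in practice. This is an editor's example, not GeoGecko's code: the file names, column contents and the urban mask are hypothetical, and it assumes a recent GeoPandas with coordinate reference systems set on all layers.

```python
# Editor's sketch of basic geolocation sanity checks on labelled field points.
# File names, column contents and the urban mask are hypothetical.
import geopandas as gpd

points = gpd.read_file("cashew_points.geojson")        # labelled training points
country = gpd.read_file("benin_boundary.geojson").to_crs(points.crs)

# 1. Drop points that do not fall inside the country boundary (e.g. in the ocean).
on_land = gpd.sjoin(points, country, predicate="within", how="inner")
on_land = on_land.drop(columns="index_right")
print(f"dropped {len(points) - len(on_land)} points outside the country boundary")

# 2. Flag points that claim to be fields but sit on a mapped urban area.
urban = gpd.read_file("urban_mask.geojson").to_crs(points.crs)
suspicious = gpd.sjoin(on_land, urban, predicate="within", how="inner")
print(f"{len(suspicious)} 'field' points sit on urban area; review before training")
```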
However, there are better sources of data out there that we were also able to get; they may not be as large, but they are properly sourced, properly collected, properly labeled, and their location, which is probably one of the most important things, is actually available. So yes, that brings me on to my next point, which is that not everything that looks like data is data, because data is highly dependent on what you want to use it for. That's important. If you are trying to build a machine learning model that is going to identify a particular crop, and you have all these wonderful satellite images and these polygons of fields, but you don't have basic information like whether this is an intercropped field or a monocropped field, then you have a problem. In the African smallholder context there are farmers who carry out intercropping, where they plant one crop together with another crop within one farm, so the spectral signature of that particular field is going to be different from the spectral signature of another field that is monocropped, which is one crop in the field. If such distinctions are not made, then we are going to mis-train our model, and that is just going to lead to errors as you continue further along the modelling chain. So not all data is data; some data is data. What is data? It depends on what you need. In our case we required that when we receive a dataset it should have geolocation at a bare minimum, it should be labeled in terms of what crops are being grown there, and there should be details on whether it is monocropped or intercropped. We also need information on the growing seasons, because this ended up being a very big deal: in Africa, and I think in the world, but to speak for the context that I am quite certain of and that I work in, there are multiple seasons in different regions, and those seasons vary from region to region. What would be a growing season in Washi may not be a growing season in the next country. With that information we need to know at what point this crop was in the farm. If we have no information on when the crop was planted or when the crop was harvested, then we do not know for what time period we should download the images. Myth number three: you do not need to know the patterns and the trends in your data, because there are algorithms that do this for you. In the beginning we thought we could get away without exploring our data properly, because there are amazing algorithms out there that can help identify the patterns in the data; those are the promises that are made. Unfortunately, there are certain things in your data that you have to find out for yourself. Different crops have different behaviors in regard to how they grow and how they manifest in the field. If you look at a perennial crop in comparison to an annual crop, the trends and values of its spectral characteristics are going to be different. So if you are trying to identify a perennial crop, you need to know, okay, this perennial crop stays in the field for over, let's say, three years, so I can get data for one entire year, from the beginning of the year to the end of the year, and then analyze that to try and define a trend.
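A minimal sketch of checking those bare-minimum requirements on an incoming dataset. This is an editor's example; the file and column names are hypothetical, not the project's actual schema.

```python
# Editor's sketch: validate that an incoming training dataset carries the
# bare-minimum attributes discussed above. Column names are hypothetical.
import pandas as pd

REQUIRED = ["latitude", "longitude", "crop_label",
            "cropping_system",          # "monocrop" or "intercrop"
            "planting_date", "harvest_date"]

df = pd.read_csv("field_survey.csv")     # hypothetical delivery from a partner

missing_cols = [c for c in REQUIRED if c not in df.columns]
if missing_cols:
    raise ValueError(f"dataset unusable as-is, missing columns: {missing_cols}")

incomplete = df[REQUIRED].isna().any(axis=1)
print(f"{incomplete.sum()} of {len(df)} records lack required attributes")

# Growing season -> image acquisition window: we only know which satellite
# scenes to download once planting and harvest dates are present.
df["planting_date"] = pd.to_datetime(df["planting_date"], errors="coerce")
df["harvest_date"] = pd.to_datetime(df["harvest_date"], errors="coerce")
usable = df.dropna(subset=REQUIRED)
print("image download window:",
      usable["planting_date"].min(), "to", usable["harvest_date"].max())
```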
However, that same year-long-trend approach will not work if you are looking at an annual crop within your dataset, because an annual crop stays in the field for about three months, sometimes four, and then it is out. At that point you need to make sure you plan how you are going to handle your methodology based on the patterns that exist in the dataset you are looking at. That is also something very important that we had to learn through the process. You will also notice that the different classes you have are going to behave differently in terms of spectral characteristics. If you look at classes like urban, forest, bare land or water, they may be easy to differentiate. But if you now bring in a perennial crop and you are also looking at forest, you need to study your data better to see what the actual differences are, and whether there are any differences between what is forest and what this particular perennial crop is. Take cashew, for example: cashew is a crop that is in the field the entire year, and it is also fairly dense biomass-wise, so how do you differentiate between that and other vegetation? Things like that. So look through the data you have, go through the trends and understand them yourself, before you actually implement all of this within the particular model you have chosen to use. Next, after that, we are going to look at the ideologies. Ideology one: the model is the algorithm. Most people feel that if you choose the right algorithm you will build the perfect model, because if you have the perfect algorithm it is going to find the trends in the data properly: this has been proven to be the best algorithm, let's say you are going to use a CNN model, then you are going to use ReLU, maybe Adam, and details like that, and those details would make up the model. I think with time we realized that that is not true, because there is a lot more that goes into building a proper model than just choosing the right algorithm. You could have the best algorithms, someone might say, well, I want to use k-nearest neighbours, I want to use a random forest, I want to use whatever you want to use, but what is the procedure to get to that point? Because before you get to selecting your model: have you cleaned the data? Have you preprocessed your data to make sure that everything is as it is supposed to be? Did you remove clouds, or are some of your training data still clouds? These details are the things that actually end up feeding into your model's good or poor performance, and I feel to an extent they may end up being even more important than which algorithm you have chosen, because sometimes you can choose the best algorithms, but if you are working with data that is not cleaned and processed as well as it should be, then you are not going to achieve the levels of accuracy you are looking for. Then the second ideology is: the higher the accuracy, the better the model.
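A small sketch of the kind of exploratory check described above, looking at per-class temporal profiles before picking a model. This is an editor's example; the input table, its column names and the class names are hypothetical.

```python
# Editor's sketch: explore per-class temporal behaviour before choosing a model.
# Assumes a hypothetical long-format table of NDVI samples per labelled point:
# columns = point_id, class_label (e.g. "cashew", "forest", "annual_maize"), date, ndvi.
import pandas as pd
import matplotlib.pyplot as plt

samples = pd.read_csv("ndvi_timeseries.csv", parse_dates=["date"])
samples["month"] = samples["date"].dt.to_period("M")

# Mean monthly NDVI profile for each class: perennials such as cashew should stay
# high year-round, annuals should rise and fall with the growing season.
profiles = (samples.groupby(["class_label", "month"])["ndvi"]
                   .mean()
                   .unstack("class_label"))

profiles.plot(marker="o")
plt.ylabel("mean NDVI")
plt.title("Per-class temporal profiles (inspect before modelling)")
plt.savefig("class_profiles.png", dpi=150)
```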
On that second ideology, the higher the accuracy the better the model: in the beginning we were striving for the best accuracy possible, and you are thinking, well, I want to get like a 90% accuracy, so let's tweak this and tweak that, let's get that accuracy higher; if the accuracy is still at like 68%, I think we can do better. And then you get an accuracy, let me say, this one time I got an accuracy of about 88% and was so excited. The level of overfitting within that model was insane. The accuracy looked great because it was picking basically everything: since most of our training data was of the particular class we were trying to identify, it picked almost everything we were trying to identify, not because it was properly differentiating between the different classes, but basically because of over-classification. So when you get specific values of accuracy, at this point we scrutinize them a bit further, especially if they are quite a bit better than what you had anticipated based on the data you had. That has been something we have had to learn. It is quite exciting in the beginning when you get a very high accuracy and you are ready to be done with the project and move on, and you are like, yeah, this is amazing, this is done, I think we have the best model there is, and then you bring in new testing data and you go from 90% accuracy to about 30% on the testing data, because the model did not perform as it was claiming to be performing. Another ideology is that simple models can perform just as well as complex models. This one, I think, will ruffle some feathers, because a lot of people still do believe that simple models can do what complex models can do, especially given a lower quantity of data. But in the work that we have done, as I was saying previously, we worked with other crops like banana and maize, and during that time our methodology was different from the methodology we ended up implementing for the cashew. At that time we were using models built in scikit-learn (I think you are aware of scikit-learn), building the model with things like random forests and decision trees to run the modeling, and that went well for a while, but we were hitting a wall, with a lot of cases of over-classification and accuracies we did not really have many ways to improve. So at that point we considered moving on to trying TensorFlow, since it is well documented and talked about quite a lot, and it actually ended up being one of the better things we did, because with the scikit-learn models we were not able to implement things like the temporal aspect, temporal types of modeling. With TensorFlow we had the opportunity to do that, and instead of the model just basing itself on static data points, we were able to analyze the data more in terms of the trends of the values for a particular class over time, as opposed to looking at it as one static point in time. I don't know if that makes sense, but if you look into it, there are models that can implement the temporal aspect of the data better than others, and we were able to find models of that nature when we moved to more complex models in TensorFlow, which we were not able to do when we were still working with scikit-learn.
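A minimal sketch of the kind of temporal classifier that becomes straightforward once you move to TensorFlow, in the spirit described above. This is an editor's example, not GeoGecko's actual architecture; the input shape, layer sizes and class weights are all assumptions.

```python
# Editor's sketch of a temporal classifier over per-pixel band time series, in the
# spirit of the move from static features to temporal modelling described above.
# This is not the team's actual architecture; shapes and hyperparameters are guesses.
import tensorflow as tf

n_steps, n_bands, n_classes = 24, 6, 5      # e.g. 24 acquisition dates, 6 bands

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_steps, n_bands)),
    tf.keras.layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Class weights are one simple guard against the "picks everything" over-classification
# described above when one class dominates the training set (weights are illustrative).
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=50,
#           class_weight={0: 1.0, 1: 4.0, 2: 4.0, 3: 4.0, 4: 4.0})
model.summary()
```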
Ideology number three: mistakes are to be avoided. I remember when we had just started, we felt pressed for time; we wanted to get this done and do it right, so the best way seemed to be to make sure we were doing things perfectly. I think anyone who has done any work with machine learning, or even just regular programming, knows things don't go as planned initially, especially if you are not aware of how the full method works end to end. So we made a lot of mistakes, because, as I said, all the things we have discussed have been the mistakes we made: not paying that much attention to the data and focusing more on the model, thinking that the model is about the algorithm, going ahead to make sure we were achieving the highest accuracy possible without considering the other factors, not looking at the trends within the data, not studying the actual characteristics of the particular crops we were looking at or the other classes we were trying to study. These are the things where, eventually, you make the mistake, you fail, then you go back and try to rectify it. One of the biggest mantras we go by at the office is: if it fails, we learnt something. So there is no failure, because when it has failed, at least we have found one way that will not work, and then we want to find the way that will actually work. And yeah, those are the mistakes that we made and those are the myths we thought we should bust. So yes, that is it. I wrote something that I thought would be a great way to end this conversation, and I will just read it as it was written. ML is an amazing technology. It has great capabilities, but also great potential for misinformation, and we need to be responsible users of this technology by being open about its strengths but also its shortcomings. Thank you very much, and I wish you all the best for the rest of the conference. If you have any questions, feel free to ask. Thank you. Bye. I have just a few questions that have been asked, two in particular. The first one is: do you have any published guidelines for the methodology or data collection plan? Unfortunately we have not published our findings, and most of the guidelines we are currently using we have collected from multiple sources, so I can't really point to one particular resource, but we are planning on putting something together that will discuss the methodology and data collection plan in detail. The second question is: are there model interpretability techniques you would recommend for debugging? With this question, I don't think I follow it very well, because right now, as far as the performance of the models goes, beyond measuring with the metrics available within the model we are working with, we are trying to work hand in hand with the actual clients who are giving us the data for the fields, so that we can use data from the fields to verify whether the model is performing as well as it claims to be performing on paper. Another question is: have you tried Timnit Gebru's Model Cards approach for documenting model intended use and shortcomings? No, actually I am just hearing of it now, so I will note that down, and yeah, that would hopefully be a useful resource for us. Thank you.
|
In recent years, the password to get into the club of the “cool kids” in technology has been Artificial Intelligence, also referred to as AI by the “it” group. AI has grown greatly in popularity and application, with Geo Gecko also recently jumping on the train by starting to work on some machine learning models, which are a subset of the great AI. We have been building models to identify different crops using Sentinel-1 and Sentinel-2 images. This work has given us a front row seat in the implementation of the much-glorified machine learning algorithms. It is from this position that we are able to discuss our insights in regard to how “intelligent” this subset of artificial intelligence really is. Also having experienced the non-romantic side of machine learning (spoiler alert), which is data accessing, cleaning and preprocessing, we will discuss these in depth, alongside the breakthroughs we made to overcome them, and the recommendations that we have for the newbies. We intend for this talk to give ML enthusiasts a quick dose of reality so that they can take off the training wheels and get to know what really happens in Machine Learning.
|
10.5446/57430 (DOI)
|
Naomi Bates, who is the Songs of Adaptation Project Director and Associate Professor at Future Generations University. That is quite an introduction. Naomi is going to talk about bioacoustics and machine learning for avian species presence surveys. Naomi, the floor is yours. Thank you, Steven. Yeah, Future Generations University, no one's ever heard of it. We're a small university based in the US with a global student body, and we focus on applied community development. And so the project that I've been working on, the Songs of Adaptation Project, is... oh, just a minute, I'm going to try not to forget how to use screen share here. The Songs of Adaptation Project is a global project that is community-based. Let me know if you can't see my screen for some reason. We provide a framework to monitor ecosystems and biodiversity around the world, and we're trying to make this science accessible and locally relevant, especially in the face of climate change and for vulnerable communities and vulnerable ecosystems around the world. We partner with local stakeholders and have globally consistent data standards and analysis methodology. But again, we're providing a framework: the questions that we ask of the data are driven by local partners, local knowledge is incorporated, and the data is interpreted in the local context for those applications. Currently we have 22 monitoring sites in the US, Bolivia, Uganda, and Nepal, and we're using bioacoustics to try to understand biodiversity in our changing world. So why bioacoustics? Well, looking at very remote regions like this area in Nepal, the Makalu Barun National Park coming off of Mount Makalu, the fifth highest peak in the world, it's really difficult to monitor in these ecosystems. They're very remote, with harsh conditions, but acoustic monitoring allows us to continuously monitor large areas for avian community composition. We also get mammals and insects, basically anything that's making noise, but I'm going to focus here on birds. And why birds? Well, they're very well studied. They can be sensitive to environmental changes, such as those from climate or anthropogenic change, and because they can fly, they can respond quickly and move. And they have high vocal activity; they make a lot of noise, which makes them good for bioacoustics. With this bioacoustic data we're developing a baseline to understand what's where now. In the short term we can identify species of interest for ecosystem protection, and in the long term we can begin to assess how these species are responding to climate change. Big data is a necessary component of this project, because these bioacoustic recorders are collecting about a terabyte of data per recorder per year, and we have over 50,000 hours of bioacoustic data collected. To manage this, we built EarthHertz, a streaming interface at songsofadaptation.org/data, so that the data can be streamed without having to download it. This allows us to have global collaboration, and it is built on open source software. Within this interface we also use artificial intelligence and machine learning tools to analyze the big data and identify specific species; I'll talk a little bit more about that later. But this allows us to process thousands of hours of data in just minutes. So the process is: we collect the data, we listen to some of it, and we work with local experts to identify species. Then we cluster the data into similar looking data.
We label the species in each location, and we also have negative labels for the model, such as insects, wind and human activity. Then we train the models. These are built with TensorFlow, which is widely used for image recognition: instead of working with the audio data directly, we're actually working with mel spectrogram images, using convolutional neural networks on, as I mentioned, mel spectrograms. We've built this on Microsoft's cloud services, using Azure Data Lake and Azure Data Science virtual machines, and our programmer and data scientist has split this to work on 100 virtual machines using Databricks, which allows it to run much, much faster. These machine learning models identify the probability of occurrence of species in the terabytes of data. So we feed in years of data, and we can get out a time series of when a particular species is present and where. These results can then be used to inform management decisions and address the needs of local stakeholders. These open source tools were built on other open source bioacoustic programs, and the user interface could also be used with other model architectures; we built it very flexibly so that it could potentially be used by others. And so the vision is within reach: to create tools for stakeholders to track climate change impacts for locally driven decision making and informed adaptation. We really appreciate the support of Geobahn and Microsoft AI for Earth and our partners around the world. We also have teams, folks doing fieldwork, scientists, local scientists and community engagement specialists, in Nepal, Bolivia, Uganda, and the USA. So thank you very much. We'd love to have you join us on the platform EarthHertz at songsofadaptation.org/data to listen to the data and help label it. We'd love your feedback, and feel free to get in touch with me. Brilliant. That was perfectly timed and really great. And it's really nice to see all these projects coming through from people who've had the various grants come through from our program. So glad you're working with our Geobahn colleagues. And yeah, the problem is we don't have enough time to see all these things; there's so much going on. So thanks again. And now I've also heard of Future Generations University.
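A sketch of the mel-spectrogram-plus-CNN idea described above, for readers who want to see the shape of such a pipeline. This is an editor's illustration, not the project's actual code: the file path, clip length, class count and network layout are all assumptions, and it relies on the librosa and TensorFlow libraries.

```python
# Editor's sketch: turn an audio clip into a mel spectrogram "image" and feed it to
# a small CNN, in the spirit of the pipeline described above. Not the project's code;
# the file path, clip length and network shape are assumptions.
import numpy as np
import librosa
import tensorflow as tf

y, sr = librosa.load("clip.wav", sr=22050, duration=5.0)       # hypothetical 5 s clip
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)                   # log-scaled "image"
x = mel_db[np.newaxis, ..., np.newaxis]                         # (batch, mels, frames, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=mel_db.shape + (1,)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. target species vs. negatives
])
print(model(x).shape)   # untrained forward pass: (1, 2) class probabilities
```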
|
The complex realities of changing climate and biodiversity are imperfectly understood. Bioacoustics is a conservation tool, going where human ears cannot stay and listen. Locally-informed machine learning analysis leads to big data insights, empowering informed decision making. Networks of bioacoustic recorders in some of Earth’s most biodiverse and vulnerable regions (near Everest in Nepal, Madidi National Park in Bolivia, and the Chesapeake Bay watershed in the United States) are bearing witness to a changing climate. More than 1850 days of audio data already collected provide a powerful dataset for studying species distributions. Machine learning (ML) turns this data into information to understand climate change and biodiversity. ML models are being trained for a dozen species in Nepal, Bolivia, and USA. Analyzed data show location and time of species vocalization. Modeling can expand rapidly as labeled data is collaboratively created by local experts. Preliminary results from Nepal show that a rare bird species was identified 1,000 feet higher in elevation than previously recorded: probable proof of concept that bird species are migrating uphill with changing climate. Bioacoustics is a valuable tool for species population surveys and biodiversity monitoring.
|
10.5446/57431 (DOI)
|
Hi. Hi Marco, how are you? Fine, thanks. You are in Italy, right? Yes, I am in Italy. So it's the evening, after lunch? Yeah, it's half past three, so it's after lunch, but enough after lunch to be already past nap time. Yeah. So I think we'll also mention Open History Map, right? Yeah, exactly. Oh, good, good. And I think everything is ready; you can upload your presentation. Yeah, I can share it instead of uploading. Okay. Oh, it's just sharing. In fact, it's just sharing: if you click there and share, it will appear here. Perfect. So your second monitor is on top of the other? Yeah, exactly. It's a classic. It looks like I'm looking at answers from gods above, but it's not. I don't know, you are near the Pope and hearing the voice of... Where are you from, Marco? Bologna, Italy. Bologna? Yeah. Yeah. Good, good, good. Nice city. Okay, let me share my screen here. Have you been here in Bologna? Yes, yes, I've already been there. Oh, great. It's quite rare, because usually people just go to Rome, Milan and so on. Everything is good. Venice, yeah. And Venice, yeah. But that region in the north is quite rich and the cities are really, really nice, and Bologna has very much history and so on. Yeah, absolutely. One reason is that the Bologna agreements are very well known in all universities in Europe. So Bologna is... Yeah, that's true. Yeah, we have your presentation here; I can put it on the stage. Perfect. Oh, Open History Map is there. Yes. Good, good. And we have everything ready, so in a few seconds I'll give you the stage and you can introduce yourself and make your presentation. Yeah. Marco, thank you for being here, on time and with everything ready. So the stage is yours. Have a nice presentation. Thank you very much. So hello everyone and... sorry, I don't see the screen anymore. Okay, hello everyone and welcome to my third presentation this year, on caching time changes in maps in Open History Map. Let me give you a very fast intro to Open History Map per se. Open History Map is an organization where we're trying to create toolchains and tools for digital humanities, basically to represent data of the past, about the past, with modern tools. To do that we have built a system, an infrastructure and a visualization, and we have several elements to show: the places where cities are, city details, and also country-level details at various kinds of scales. This creates a huge amount of data; it's several gigabytes of data that have to be pushed to the end user, based on the level of detail the user is looking at and the area they are looking at. So this is quite complicated to do, and for this reason we had to develop an architecture that gives us the ability to do several things automatically. The whole architecture I discussed yesterday; today I'm looking just at the part that handles the data itself, which is this area. It's composed of a front end where the map is displayed with Mapbox GL, the tile server that does just the heavy lifting of serving the tiles, the service that enables the users, or rather the importers, to import the data into the database, which is a PostGIS database, and then we have the caching system. To explain the caching system and how we developed it, I have to show you the details of the database.
Specifically, the database is based, as I said, on PostGIS and relies heavily on the partitioning system offered by PostgreSQL. The information is collected with several sets of metadata connected to each single dataset; it is not based on the same infrastructure and the same idea as OpenStreetMap. Specifically, we have two tables. One holds the items stored in the system, which have two fields that represent the moments from when and until when each item is relevant on the map, and this is represented by two floating point numbers. It is not a time, not a datetime object, not a timestamp, because PostgreSQL introduced negative timestamps only very recently, in PostgreSQL 14. Another relevant element is the layer, and then we have the properties, a JSONB object that holds all the details we want to use for front-end visualization and also for data management. Then we have the author, and the geometry is not stored within the item: it is stored as a foreign key, a hash element that represents the hash of the original geometry. Within the geometries table we then have several copies of the same geometry at various zoom levels, so that we can recreate the data we're looking at based on zoom level, moment in time, x and y. Zoom level, x and y give us the map vector tile, and the date gives us the specific items to look at. As I said, we rely very much on partitioning, meaning that, based on the distribution of the data over time, we have for example a very sparse past, with only one partition from 8000 BC to 4000 BC. Then we have smaller partitions, from 4000 to 3000, then 3000, 2500, 2000, 1500, 1000 BC, and then we go in ever smaller steps, down to roughly five-year steps within the last 50 years and 50-year steps within the last 200, because obviously the amount of data we have managed to collect and use is distributed in a very irregular way. This also means that, for example, for geometries we can rely on the fact that all points are points, so we can zoom around however we want, but points stay points. The various zoom levels, on the other hand, are themselves partitioned into several partitions. This results in a huge number of partitions, but through SQLAlchemy and an automated partitioning script we can generate these partitions automatically based on the layers and the configuration of the zoom levels, and this configuration is reused in many parts of the whole infrastructure. This also creates a huge amount of data stored in our database across the various levels, because, as I said, the geometries are repeated for the various levels of detail, so it's normal for us to have almost the same table sizes for the high zoom levels and the low zoom levels. And this obviously creates a problem with uploads, because it takes time to generate the data. Anyway, our infrastructure is run by the four volunteers working on the whole system, so let's say it's not paid for by huge funding; in fact it's not paid for by funding at all, so it runs on a volunteer-based server.
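For readers unfamiliar with PostgreSQL declarative range partitioning, here is a minimal sketch of the idea described above, driven through SQLAlchemy. It is an editor's example: the table name, column names, connection string and partition boundaries are all made up, and the real project additionally splits by layer and zoom level.

```python
# Editor's sketch of PostgreSQL declarative range partitioning on the "from" date,
# executed through SQLAlchemy. Table/column names and boundaries are illustrative
# and not the project's actual schema.
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://user:pass@localhost/ohm")  # hypothetical DSN

ddl = [
    """
    CREATE TABLE IF NOT EXISTS items (
        id         BIGSERIAL,
        layer      TEXT NOT NULL,
        ohm_from   DOUBLE PRECISION NOT NULL,
        ohm_to     DOUBLE PRECISION NOT NULL,
        properties JSONB,
        geom_hash  TEXT
    ) PARTITION BY RANGE (ohm_from);
    """,
    # Sparse deep past: one wide partition ...
    "CREATE TABLE IF NOT EXISTS items_m8000_m4000 PARTITION OF items"
    "  FOR VALUES FROM (-8000) TO (-4000);",
    # ... and progressively narrower ones toward the present.
    "CREATE TABLE IF NOT EXISTS items_m4000_m3000 PARTITION OF items"
    "  FOR VALUES FROM (-4000) TO (-3000);",
    "CREATE TABLE IF NOT EXISTS items_1950_2000 PARTITION OF items"
    "  FOR VALUES FROM (1950) TO (2000);",
]

with engine.begin() as conn:
    for stmt in ddl:
        conn.execute(text(stmt))
```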
So, we have several problems when visualizing the data. By the way, the map is available here; if you want to share the link, it's map.openhistorymap.org, where you can find the details about our project. Anyway, this creates a problem, because when you look at specific moments in time, Europe in the 17th and 18th century is a huge mess, a horrible mess, a beautiful mess honestly, but anyway a mess, because there's a lot of data, a lot of changes, a lot of things going on. This takes up to a minute to download, the browser ignores the downloaded data, the scripts fail, everything slows down, the users are unhappy, and we are unhappy because our users are unhappy. So why not create a caching system? Let's go to the drawing board. How do we request our layers? We basically request our layers via a Z/X/Y PBF query, only we added the date element in the middle. But the date is a float, and this creates a whole lot of problems when trying to cache these elements: this part is a list, and that's not a problem; these are integer elements, and we know very well we can cache them any way we want; but this one is a floating point number, and this makes things horrible. So how could we manage that? We had this horrible problem that we have a layer, we have the X and the Y, and the date could be potentially continuous. But looking at it with some perspective, we understood that in fact there was a slightly different issue. We had several items, each with an OHM from and an OHM to. So within a potentially continuous dimension we had very specific moments in time when this dimension changed, and it was possible to foresee these changes from our structure. So what happened then? We asked ourselves: why don't we just consider these elements simply to be relevant or irrelevant? Once we look at it this way, it means that basically the map itself changes only at these instants, which means that we just need to store these instants and create a new way to manage this transformation. So we tried to look at it in this way: we have, for example, this as the key and this as the value. We have two caches basically, two key-value stores. One key-value store tells us the relevant dates for this specific Z/X/Y set for this layer, and it gives us back a list of valid dates, a valid sequence of dates. Within that valid sequence of dates, we choose the last relevant change just before the date we're looking at, so that this query becomes this specific MVT tile, and then we can return that tile to our user. Does this create repetition? Absolutely, but it's part of the caching game. Is this efficient? No, it's just the first draft of our idea of a caching system for time-space information. On the other hand, we love partitioning, and so we're trying to use partitioning for this infrastructure as well. So we developed OHM Cache. It's based on FastAPI in Python and on LevelDB. Is this enough for partitioning? Somehow it is, we think, but it means that we have to do partitioning through load balancing and routing, so it becomes more of a systems and operations matter to manage the caching of the whole infrastructure. And we're trying to think about something even more fun with the people from the University of Bologna, which is using Kademlia.
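The two-level key-value lookup just described can be sketched in a few lines of Python. This is an editor's illustration of the idea only: plain dictionaries stand in for LevelDB, the renderer is faked, and the real OHM Cache will differ in naming and details.

```python
# Editor's sketch of the two-level cache lookup described above: one key-value store
# maps (layer, z, x, y) to the sorted list of instants at which that tile changes;
# a second maps (layer, z, x, y, snapped_date) to the rendered vector tile bytes.
import bisect

change_dates = {}   # (layer, z, x, y) -> sorted list of floats (years, can be negative)
tiles = {}          # (layer, z, x, y, snapped_date) -> PBF/MVT bytes

def snap(dates, requested):
    """Return the last change instant that is <= the requested date, if any."""
    i = bisect.bisect_right(dates, requested) - 1
    return dates[i] if i >= 0 else None

def get_tile(layer, z, x, y, date, render):
    key = (layer, z, x, y)
    dates = change_dates.setdefault(key, [])
    snapped = snap(dates, date)
    if snapped is not None and (key + (snapped,)) in tiles:
        return tiles[key + (snapped,)]                 # cache hit: tile unchanged since `snapped`
    tile, valid_from = render(layer, z, x, y, date)    # miss: render and remember the instant
    bisect.insort(dates, valid_from)
    tiles[key + (valid_from,)] = tile
    return tile

# Example with a fake renderer that says the tile last changed in 1648.0
fake_render = lambda layer, z, x, y, date: (b"tile-bytes", 1648.0)
print(get_tile("cities", 6, 33, 22, 1700.5, fake_render))   # renders and caches
print(get_tile("cities", 6, 33, 22, 1789.0, fake_render))   # served from the cache
```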
Regarding Kademlia: I'm old, and so I remember when we used Kademlia for downloading music and movies, when internet connections were not that fast. But the idea is to try, at least, to see how it works with a distributed hash table. The code is not yet uploaded; I was trying to upload it this morning, but some things came up and I had no time to do that. But I will be uploading it a few minutes after the presentation, and it will be on GitHub, under openhistorymap, OHM Cache. And well, thanks. Do we have questions, details, anything else? A very fast presentation for the Friday afternoon. Your mic is off. Yes, thank you. Thank you, Marco, and thank you for your quick presentation, although I think it was clear. And it is complicated for people on the humanistic side; it is not easy to explain all this stuff to them. You cannot show the behind the scenes. No, it is absolutely not easy, but it's always fascinating to see them come around and understand the side effects of getting the good part of the concepts behind these tools. For example, as I explained yesterday, and as you might have seen in the first slides, there is a lot of data about the past. Let me just share with you another image, just because we still have 15 minutes; let me show you a few additional elements of the project. Here, let me change the screen. If you can show the screen I'm sharing right now: you can see this is not yet connected to the cache, and you see that central Europe is very slow to download, because obviously the changes over time have been so enormous and the data to be downloaded is quite complicated. On the other hand, we are trying to keep track of several kinds of datasets. For example, while having a lot of data at the national level, national-size datasets, we also have, for example for Bologna, where I am, the city structure over time, and we can see how it changed from the 14th century to the 17th and to the late 18th century, where we also have almost all buildings that were in and around the city at the time. It takes a few seconds to download. For many buildings we also have additional data as well: we have data about the height of many buildings and the number of levels in a specific building. For some of them it's obviously just reconstructed from archival information; for some of them the buildings are still there. So you see, the amount of data we have to juggle is quite a lot, so a caching system becomes more important every day, and for this reason we started working on this. We started testing this with small experiments; it works, but it's not stable enough to have it up and running yet. We will try to have it running by the end of October, and as always, like every other tool created within this project, it's going to be open sourced and available for anyone to work on, play with, and reuse for any other kind of usage. So Marco, while you are showing this, and thank you for showing this detailed example from Bologna: are the towers still upright in the 19th century? Yeah, sadly the height of the towers is not within the dataset we use as a baseline. Yeah, it's too bad. In Dar es Salaam, two years ago, we presented a project where we were using 3D models of the real towers as elements here, but the data from that project is not yet integrated into this new version of the interface. So, as always, everything is a work in progress.
OK, so the first question, the most voted question, is about partitioning postage. And how do you configure this kind of partitioning? Yes, let me show you. The partitioning in post-GIS is something absolutely beautiful. And based on the, we're using a set of, how do we call it? Let me, I'm looking for the files here. OK, here. Basically, I'm generating the models, the data structure of the partitioning based on a script that is also open source, or if it is not, it will be when we release all the data. And basically, it tries to generate a partition for every structure within the labels, for every label, for every layer that we have to use, and for every structure we have to manage. For all of the time partitions we are using. So as you can see, generic minus infinite minus to 4,000, minus 3,500, and so on. And so with this partitioning system, we are able to generate something like, it's 15 layers by around 50 time blocks. So it's a huge amount of layers to be generated, to be a huge amount of partitions to be generated. And it's all done with SQL Alchemy. OK, OK, on the Python side, right? Yes, exactly. There is one question about timescale DB plugin. So the question is, have you looked at the timescale DB plugin? Not enough, honestly, not enough. Because the answer to that is yes and no. Because we will be using timescale DB for a part of the project that is based on real-time serious data. The problem with this is that it relies a lot on the fact that the problem is exactly the fact that we are not using it. We were trying to use it with time buckets for geographic information. But it was somehow inefficient on our side considering that we have elements that have to go beyond the simple. We have potentially objects that go through several time buckets, and so it was quite problematic. But we will try to use it again, because, again, as I said, we will be using it for sure for a side project that is already ongoing with the University of Bologna that has the aim to generate something like a data warehouse for indexes, for several kinds of objects of the past, normalized through Wikipedia, and with data IDs so that we can look at how was the GDP of ancient Egypt and so on. We have also another question, and it's about the open historical map, which is a similar name, but is this project similar? No, it's similar. We have almost the same object, and we had a very interesting call in a few last month, because the object is similar, but we have a different approach. Well, open history map, open historical map relies on, yeah, it's a mess, because we almost started in the same period. So we both started and we both grew a little bit towards the larger data sets, and now it's too late to change it. And so we discussed the whole thing and said, okay, it's too late, but as long as we're discussing almost the same thing with different names, who cares? It's data. If you want our data, if we want your data, it's always data. So anyway, the approach is slightly different. We are, with an open history map project, we want to have more control over the academic aspect of the data sets, and so we have the data index that you can reach at index.openhistorymap.org. And with that, we want to try to manage the complexity of the data and the data quality with a separate element. While open historical map tries to do that with the open street map approach, which is slightly different, more community-based, more structure-based. 
Our approach is more towards information that is verified and controlled by academia or academic procedures. We're not directly importing data. We're importing data for people who generate data. And on the other hand, the other great difference is that we want to track also ephemera information, meaning where was somebody at some point in time? What were the movements of one chip over time? And so with these information, this is our information that open historical map does not have and does not want to have because it's open street map of the past. And for that reason, we will most probably in the next month try to integrate that data from our database, from our interface, from our system into open historical map so that we can have a shared structure for event visualization and something like open... Let's say we also have something like historical street view where we look at historical photos and paintings and show them within their position in time and space. So that creates a whole set of information that, again, open historical map does not have and does not want to have while we try to use it for digital humanities and so becoming a more structured approach. Yeah, thank you, Mark, for this explanation about both projects. And we will end here this session. Thank you, Mark, for the presenters for being here and sharing their knowledge. We have the OSM General Meeting in a few minutes and you are invited to participate in the life of the association and the board is there and please come to the general meeting of OSGEO. Thank you all and see you around.
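For readers curious about the partitioning discussed in the Q&A above, here is a minimal sketch of generating PostGIS range partitions per layer and per time block; the layer names, column names, time boundaries and connection string are invented for illustration, and the project's actual SQLAlchemy script differs:

    from sqlalchemy import create_engine, text

    # illustrative values only; the real setup has around 15 layers and 50 time blocks
    LAYERS = ["boundaries", "roads", "buildings"]
    TIME_EDGES = [float("-inf"), -4000, -3500, -3000, 0, 1000, 1500, 1800, 1900, float("inf")]

    def bound(value):
        # PostgreSQL range partitions use MINVALUE/MAXVALUE for open-ended bounds
        if value == float("-inf"):
            return "MINVALUE"
        if value == float("inf"):
            return "MAXVALUE"
        return str(value)

    engine = create_engine("postgresql+psycopg2://user:password@localhost/ohm")

    with engine.begin() as conn:
        for layer in LAYERS:
            parent = f"features_{layer}"
            # assumes the PostGIS extension is installed, so the geometry type exists
            conn.execute(text(
                f"CREATE TABLE IF NOT EXISTS {parent} ("
                f" id bigserial, ohm_from double precision, ohm_to double precision,"
                f" geom geometry, tags jsonb"
                f") PARTITION BY RANGE (ohm_from)"
            ))
            for i, (lo, hi) in enumerate(zip(TIME_EDGES[:-1], TIME_EDGES[1:])):
                conn.execute(text(
                    f"CREATE TABLE IF NOT EXISTS {parent}_p{i} PARTITION OF {parent} "
                    f"FOR VALUES FROM ({bound(lo)}) TO ({bound(hi)})"
                ))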
|
Open History Map aims to display the changes that happened in the past both from a political and from a social and topographical point of view. The storage of these multi-dimensional changes has an enormous impact on the way the map is visualized. For this reason we needed to develop a caching system that was on one side flexible enough to display a non-quantizable dimension (time) and on the other gives us the possibility to pre-cache the whole system into a distributable package for easy local usage and possibly fast updates of the package. The caching system itself is backend independent and defines a process to simplify access to a mix of discrete (x, y, z) and continuous (t) identifiers for independently varying geospatial datasets.
|
10.5446/57432 (DOI)
|
Hello, thank you for your understanding and patience. So we are on time for the second presentation of this session. I'm going to leave the stage, the screen, to Mary Anna Balvola, based in Antananarivo, Madagascar. Balvola works at the National Geographic and Hydrographic Institute as an Information System Officer. She's passionate about learning a variety of geospatial techniques and technologies. She has also been a promoter of open source and open data, and the title of the presentation will be the Case Study of Data Storage for Preservation of our Archiving System at the National Geographic and Hydrographic Institute of Madagascar. I'm sorry for my stature, so... Mary Anna, welcome. Okay, thank you everyone for giving me the opportunity to share with you my presentation today, related to the case study of data storage for preserving our archiving system at the National Geographic and Hydrographic Institute of Madagascar. So first of all, let me introduce myself. My name is Balvola Magyana. I have worked at the National Geographic Institute of Madagascar since 2014. My story with FOSS4G began in 2017, when I was in Belgium for a training in GIS. I attended the FOSS4G in Belgium and then I participated in the international FOSS4G. After that, I became really interested and I became a promoter of FOSS4G. So I was a speaker at that huge event in 2018 in Tanzania, and I became a charter member of OSGeo since Tanzania, and finally this year I have chosen to be a leader, and especially to be in this online edition. So I'm very happy to be part of this community. So the outline of my presentation covers the context and state of the art; after that, why we have chosen FOSS4G tools for archiving our system and what kind of technical methodology and tools we have used. I will share the lessons learned during that journey (even now we are still working on it) and finally I will present the perspectives. So my institution is the National Authority on Geographic Information, which contributes to sustainable development; that is our vision. And our main mission is to provide the reference spatial data and the basic topographic maps for all users, especially for public decision making in my country. So there is a presence of multi-provenance, non-interoperable data related to location, obviously in different formats. We have lots of data on CDs, DVDs and external disks, and on paper as well. And another problem is that we cannot easily find the relevant information, where it is stored and how to consume it; that is what all our users actually face. As you can see from this slide, there are a lot of CDs and a pile of physical supports. And then, according to the state of the art, we found a lot of databases kept separately by each service within my institution. There is a vector database at different scales, a topographic database, a geodetic database, and rasters with georeferenced scans and raw data. We also handle old photos and other documents. As you can see, we are a big island of 592,000 square kilometers. That is the big amount of data that we have to manage. So why do we have to preserve our national heritage data? We can say that our data is a national heritage because we are the only ones storing all of the different base maps here in Madagascar. So we have all these physical supports and there is no central storage, not even one central storage in the institution, and that is why we need to preserve all of our data. So why PostGIS?
We have known the scarcity of resources in general. We know that licenses are very expensive, and open source gives us the opportunity to support our system. That's why we chose PostGIS, not definitively, but we are working with it progressively. So this is the technical methodology that we follow in this archiving system. First of all, there is data preparation with a technical description, meaning that we have to adopt a standard for all of our existing data. Also, there is data collection from each service, because we have many services in the institution, such as photogrammetry, cartography, database services, hydrography and so on. After that, there is the technical installation and configuration, followed by the database creation. And finally, there is the data integration and the database update. So the tools that we are actually using are QGIS with some plugins; here I want to point out very useful plugins such as PgMetadata, because it is very helpful to insert and edit all of our metadata. For the database, we use PostgreSQL with the PostGIS spatial extension, with pgAdmin as the user interface, and there is also shp2pgsql, with its GUI as well, to import and export shapefiles to the database. So as you can see, these are basic tools around PostGIS. But I can say that I am a promoter of PostGIS, and that administrations, especially in developing countries such as Madagascar, can use and adopt this method to store and archive their systems. And there are quite a few lessons learned from this journey that I want to share with you. First of all, about the data: there are semantic issues that we need to improve. For example, you can see on the screen a semantic issue with the name of an attribute, and there are also some issues related to the values of that attribute that we have to improve. Also, there are topology errors in our database; for example, this shape with one polyline: there is an issue related to snapping, so we must define the snapping distance, because there should not be two lines here since it is just one feature. Another lesson learned is that we have gained space by moving from shapefiles to rows, because keeping our data in detailed shapefiles increases the storage space. In addition to that, we have to check the understanding between technicians, because sometimes the different understandings of how to store the data are not the same, so we have to communicate with each other about the way we store and centralize all of our data and the final data. Also, you can begin in a local environment, because it doesn't really need, for example, a dedicated server or something like that; you can do it locally. So just begin with the tools that I mentioned. So to conclude, and on the perspectives: after archiving all of our data, we can think about what value we can add to this system, because we can create other services on top of it to add more value. Also, we have to improve our data and to create automated workflows, for example for populating the database, so that we do not do it manually. And last but not least, we have to improve our internal communication. So that is the end of my presentation.
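As a concrete illustration of the shapefile-to-PostGIS migration described above, here is a small sketch that drives shp2pgsql from Python; the paths, SRID, schema and database names are invented, and the institute's actual workflow uses the shp2pgsql GUI rather than a script:

    import subprocess

    SHAPEFILE = "data/topography/roads.shp"   # hypothetical input shapefile
    TABLE = "topography.roads"                # hypothetical schema.table in PostGIS
    SRID = "4326"                             # hypothetical; use the EPSG code of the source data

    # shp2pgsql writes SQL to stdout; -s sets the SRID, -I adds a spatial (GiST) index
    dump = subprocess.run(
        ["shp2pgsql", "-s", SRID, "-I", SHAPEFILE, TABLE],
        check=True, capture_output=True, text=True,
    )

    # pipe the generated SQL into psql to load the rows into the archive database
    subprocess.run(
        ["psql", "-d", "archive", "-v", "ON_ERROR_STOP=1"],
        input=dump.stdout, check=True, text=True,
    )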
So I call for everyone, especially us in developing developing countries to use this open source because it is very beneficial for our country, especially for our institution. So thank you very much for your attention. Thank you. Thank you, Marianne, for the presentation and presenting the work, the very meaningful and important work that you are running and having. So we have a question from the venue list. There are three questions from one sentence in fact. So how the migration effort finished? First part of the question. Sorry, can you? I cannot hear you. Okay. How the migration effort finished? I mean, have you, have we're able to finish all the work or what stage are you in? Can you hear me when, can you hear me when I stop my video? Marianne, can you hear me? Yes, please. Okay. So the question was, have the effort, have the work finished or if it does not finish, what stage are you in in this in this work? It is not yet finished. Because as I can say that we have a lot of data. So it is still, we are still populating our database. And as I can say that that convincing some colleagues related to the fact that to store the data, that is another real issues as well. As you know that it's not easy to use the open source, open source directly to the migration. Okay. And do you have estimation of, do you have an estimate of when this work will be completed? What time are you expecting to be completed? I think it is a very, very work as a, even now we are updating our work database. So maybe I don't know when will it end because it depends on the way we update our database. So after finishing it, we have to upload it to our work. But for the world data, you can finish it in this year. Okay. Okay. Thank you. Yeah. It is very relevant and it is very hard sometimes to convince, especially government officials in open source and open source tools and somehow they see those tools as unreliable. But yeah, I mean, you have demonstrated very well the importance and the ease of application of these tools in the process. So thank you. Thank you very much for presenting your work and thank you very much for presenting your progress. Thank you very much. Thank you. Hoping that you will have successful results to present next year. Thank you. Bye. Thanks a lot. Bye. Okay. So if you have any follow up questions to Mary Ann, you can find her to Ben Ullis and get in touch and have a chat as well as the other speakers that we will have. So we are going to have again a 10 minute break to prepare for the next presentation. So please, please, please stand by. We are going to have interesting presentations coming.
|
So far, we have been storing and backing up, with the aim of preservation, our national heritage numerical data such as vector and raster databases, cartographic and geodetic works, old photography and other documents, whatever their nature and their physical supports (numeric cartridges, floppy disks, magneto-optical disks, CD-R and DVD-R). In fact, resources are scarce and Open Source gives us the advantage of using these resources more efficiently; we are taking advantage of them to make our organization better with QGIS and PostgreSQL/PostGIS, migrating from shapefiles to rows. Our methodology might be elementary, but we believe that not only would we like to share our experiences and lessons learned, but also that other developing countries might face the same problem as us regarding how to preserve their old heritage data. At the end, we would like to present our future long-term objective: the creation of a metadata portal for rational management and optimal use of this archiving system.
|
10.5446/57433 (DOI)
|
Hello, Flipe. Hello, Cristiano. Good morning. Hello, good morning. Okay, I can hear you. Cristiano is a programmer with 13 years of experience in designing and development digital products and systems. And Filipe Alves is a transport engineer focused in Houston Naval's molds and, of course, OSM makers. So, guys, the room is yours. You can start your presentation. I want to... I need your presentation. You can share it with me. Just a second. Just a second. Okay, I see it here. Let me... Okay, can we start? Yeah. So, good morning. Buenos dias. Vamos a hacer en... Cristiano, you can launch your screen. Okay, it's better. It's better. It's better now. Great, thanks. Vamos a hacer en inglés porque nuestro español no es super bueno, pero si quieres hacer comentarios, preguntas después en español, por favor, sin problemas en português, también. Okay, so let's start. So, as Narselho has said, I'm a developer and product designer. Actually, I work more as a designer nowadays, and I'm a cycle of activists, a maps lover. In my free time, I like to do open source projects. And I'm a transportation engineer, as Narselho said, but also I am the director. I'm one of the directors at Brazil Cycle Union, UCB, that is a civil society organization advocacy for cycling. Nice. We're going to talk a little bit more about UCB in a second. So, our team, I think it's interesting to comment like, we are from Brazil, and Brazil, everyone knows it's a very, very big country. And in our team, currently, we have people from all extremes of Brazil. Like, we have me from the very south, and Felipe from Fortaleza, and people from the middle. But still, there are lots of regions in Brazil that we are not, we don't have people in the team right now, as we hope we can have in the future. So, a little bit of context on how everything started, why we started CycleMap or CycloMap, right? So, in Brazil, we didn't have a centralized solution for mapping Brazilian city cycling infrastructure. We needed standardized and collaborative and open data about cities so that we could easily compare cities and download the data and the researchers and people that need the data can download easily, process the data easily, etc. And also, for advocacy, like having all this data standardized and centralized help us measure and visualize opportunities that can impact society in improving urban mobility. In our case here, focused on cycling, right? So, this is how the CycleMaps in Brazil looked like, and I'm pretty sure probably everyone here can relate to this, because I guess in many countries, people have been using many different kind of tools. People from like doing PDFs to be printed of the bike maps, but even using some very simple software that is very easy to use, like Google My Maps, are using more professional software as KGs, right? And you want to talk about a little more? Yeah, okay. So, these are true civil society organizations, UCB, it's the one I am part of, and ITDP, it's the Institute for Transportation and Development Policies. So they united to bring the solution of have the standardized bicycle maps for all of the Brazilian cities. That's right. And back then, there was this platform as well, which was developed by ITDP, which is called Mobilidados, and they were starting to develop this platform as well. And this is more a platform for the data indicators and metrics for urban mobility in general, and they needed the access to this kind of data, so Ciclomaapa was born like twins, right? 
Mobilidados and Ciclomaapa were born a little more in the same time. So I'm going to talk a little about the cycling infrastructure so that everyone knows at least a bit about it. So we have different cycling infrastructures in different countries. Here in Brazil, we decided to follow our technical guides and technical manuals, and to classify our infrastructure in these four layers. From the left to the right, the left one is the least desirable, and the right one is the better. I'll talk a little bit about each one. The first one is just a shared space on the sidewalk, so we don't like it very much because they force the cyclists and pedestrians to share a space that usually is not much. The second one, we call it cycle routes. People in North America know it by sherros. It's just a shared space in a common street just with signals to indicate that this is a route for cyclists. The third one is the cycle lanes. It's the painted lanes on the roads that are exclusive for cyclists, and these are really different from city to city, but in most cases, they have some small segregation elements, just not the painted lines. And the better one is the cycle tracks that are the fully segregated tracks. So what we had to do was to translate the tags and keys from OpenStreetMap to these layers. In OpenStreetMap, they don't call with the same names that we give them or that we call them. So we have a lot of different keys and tags for each one of these layers, and we show in Ciclomaapa also some other layers, like the low speed roads, the off-road tracks and paths, and the places that are prohibited to bike. And besides these layers, we have other layers of points of interest, like bike shops, bike parkings, and bike sharing stations. These are also layers, and we have some other elements that appear on the map that are useful for cyclists, like wire fountains, public restrooms, and air pumps. So before we go a little deeper on how everything works, technically wise, I'm going to do a very, very quick demo here for you. I think you can still see my screen, right? So this is Ciclomaapa live in production. Let me increase the size here of the screen. So it has all the layers here that you can activate and see more information about it. It's fully interactive, and the level of detail will change depending on the zoom level, and everything is interactive, so you can click and see more information about it. We don't show everything that comes from OSM, from Opus FreightMap, but we translate and filter some most relevant information. You can change cities here, so we have any Brazilian city available that you want. So Rio de Janeiro, for example. And there is also this new feature, which was the most recent one, that we are starting to develop those metrics. So we have some metrics that are calculated offline, which is PNB, which is the, we're going to talk a little bit more later, but the length of different kinds of structures and the amount of points of interest, and etc. You can change the base map. So this is just to give a brief overview, and then on the questions, if you want to see more, you can also access the link already. So let's jump back to the presentation. So how it works. So I'm going to talk about these three main layers. We have the Mapbox base layer. Mapbox is a framework, a library for rendering maps, right? And we have the OpenStreetMap data with psychopaths and points of interest. And on the top of that, we have metrics and controls, right? 
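To make the tag translation concrete, here is a small sketch of how OSM tags can be bucketed into the four layers described above; these are common OSM cycling tags, but the exact filters (and the JavaScript implementation) that CicloMapa actually uses differ:

    def classify(tags):
        """Map an OSM way's tags to one of the CicloMapa-style layers (simplified)."""
        cycleway_values = {
            tags.get(key)
            for key in ("cycleway", "cycleway:left", "cycleway:right", "cycleway:both")
        }
        # fully segregated cycle tracks
        if tags.get("highway") == "cycleway" or "track" in cycleway_values:
            return "ciclovia (cycle track)"
        # painted, exclusive lanes on the carriageway
        if "lane" in cycleway_values:
            return "ciclofaixa (cycle lane)"
        # shared lanes marked only with paint and signage (sharrows)
        if "shared_lane" in cycleway_values:
            return "ciclorrota (cycle route)"
        # shared space on the sidewalk
        if tags.get("highway") == "footway" and tags.get("bicycle") in ("yes", "designated"):
            return "shared sidewalk"
        return None

    # tiny usage example with hypothetical ways
    print(classify({"highway": "cycleway"}))                         # cycle track
    print(classify({"highway": "residential", "cycleway": "lane"}))  # cycle lane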
So talking about Mapbox, Mapbox is a really great library for visualizing rendering maps, web-based and has native as well. And they have this great tool, which is called Studio, where you can, everything is OSM-based, OpenStreetMap-based, all the data comes from OpenStreetMap. So you can leverage all the data that, not all the data, but most of the data from OpenStreetMap, and you can customize the look and filter, whether it's going to show or not. It's a great tool. Then on top of that, we have OpenStreetMap. So we created a kind of language for describing the different kinds of layers that we have on the map right now. So we have bike shops and bike parkings and the cycle roads, for example. And here you have the tags that Felipe mentioned before. And we translate that into a query language that OpenStreetMaps use, which is called Overpass. So we translate that and we query Overpass for this data to put it on top of the map. We also use Firebase, which is just a database, very easy to use database, where we store this data because Overpass is a little slow. So we don't do that while the user is interacting with the tool. So basically, we try to get data from Firebase. This is the web app. And if it has the data, we very quickly get this data. Otherwise, we go to OSM. And there is also a hidden button that you can click and do a manual update. If you are like editing the data in OpenStreetMap and you want to see this reflect on SQLMap. And finally, on the last level, we have the metrics panel. And this is very simple. For the point of interest, we just use JavaScript to count how many of them are. And we use Turf, which is an open source library for different kinds of gels, spatial calculations. So they help us measure the length of the structures. And for these other metrics, since they are calculated offline, we use our table, which works like a database, but also CMS, a content management system, which our team can go there and edit those numbers and we just pull from the numbers. So this is how our table looks. And we have comments feature as well that you can leave comments on the map. And we also use our table here so you can see an example of how our team can see those comments. Okay, so let's talk a little bit about some research that we are done using SQLMapa data. First here, you have some access data. So there are 320 Brazilian cities that already have been accessed, an average of 900 users per month, 300,000 page views, and a really good rating for the users. So this is one of the indicators that we show. It's an indicator created by ITDP, the PNB, that means people near bikeways. And it represents the percentage of the city population that lives within 300 meters of the bikeways. So they get the data from OpenStreetMap by SQLMapa and cross with the data from the Brazilian Institute of Geography and Statistics. And so they calculate how many of the population lives near the bikeways for all the state capitals of Brazil. And also they have in the population data, they can do another research like they have seen that in 20 capitals, the percentage of black women near the bikeways is lower than the population average. And in 17 capitals, the percentage of households with higher income is two times higher than the percentage of households near the bikeways is two times higher than the households with lower income. And this is another research. This is from a government office, IPEA, that means Institute of Research for Applied Economics. 
And these measure access to opportunities. So they measure the ease of access to healthcare facilities, to schools and to job opportunities, and they do it for three means of transport: by bus or public transport in general, by foot, and by bicycle. And to do this for the bicycle, they use data from CicloMapa. So this is some media coverage that we got; you have, for example, news from regular media, from specialized blogs, from OSM Weekly, and all kinds of things. Okay, so wrapping up, talking a little bit about the future of CicloMapa. Currently CicloMapa only works in Brazil, and we have been developing and improving it for two years, I think. And for the next steps we have lots of ideas to improve it and add more features for Brazilian cities. And we are already talking with ITDP Global, which is the global part of ITDP, about making a new version of CicloMapa which will be international. As you can see here, we are looking at CicloMapa for Buenos Aires, which is not something that you can do on CicloMapa if you go to the website right now, because we filter only Brazilian cities. But it works. It has some performance issues and some things that we have to fix so we can make it international. And we also want to add more country-level features, so different countries can have different definitions for those layers; as we know, some tags in OSM can change depending on the country, and different countries have different ways of using them. So we're going to develop this kind of customization so it can work globally, but not necessarily with the same solution everywhere. I think that's it. So you can visit us at ciclomapa.org.br. We have all the code on GitHub, and you can also access more documentation on this website here from UCB. We're going to be responding to questions now, but you can also contact us via email if you want. I think that's it; we're going to leave more time for questions. So guys, we have 10 minutes for answering the questions. We have a lot of questions for you. The questions are coming now and I have to put them here on screen. So the first one, I think you just answered: it covers only Brazilian cities, so what would it take to extend this to the whole world? Yeah, OK. I think I answered a little bit, but I can go a little deeper. It's very tricky because different cities have different ways of mapping, some countries have different ways of mapping, and it was already very tricky to make it work for all Brazilian cities, especially the biggest ones like São Paulo. Sometimes we have to do some local optimizations for it to not be too slow, etc. But I think it's more a technical issue of having enough time and team to make it work for other countries. We don't have the servers as well; we cannot currently handle thousands of people accessing it, and we wouldn't have the budget for that, but that's something that we are working on. OK. The next question is about routing: does CicloMapa work for routing? Yeah, we don't have this feature right now. That's one of the most requested features, I think. But I think it's important to point out that we have been focusing on a specific kind of user for the platform, which is not the everyday cyclist. We have been focusing on people that need access to the data, that want to study the data and want to download the data. We are trying to provide features for that kind of user.
Maybe in the future we can develop features like routing, which at least we think are more useful for the day-to-day cyclist, which currently already has other solutions, proprietary solutions, of course, that works very well. Don't know if you want to add up? No, yeah, I think it's OK. I guess it's a sensitive. OK. The next question is about the support in the cycle network in your packaging, if you have added this option in your application. We have this feature already. When you are looking at the city, there is a download button on the top right corner, which just says download, and it downloads all the data in Jiao Json, which we hope is a great format for everyone that wants to study. But if you need other formats or need other things, please contact us. OK. The next question is about the data update. Can user suggest or make change? Yeah, I want to take this one, Filippe. All the data is from OpenStreetMap, so anyone can edit the data there. After you do, you just come back to SQL MAPA and hit the update button, and it will update the data for the city. If you want to suggest change in the application itself, you can contact us, and we can have a talk about that. Just to compliment, if you don't know how to use OpenStreetMap, we have tutorials that Filippe has created. And also, if you don't have time to see the tutorials and et cetera, but you just see that there is a little problem on the map, we have a very easy to use tool that you can leave a comment on the map, so other people that can edit the data later can see the comment. The tutorials are only in Brazilian Portuguese, unfortunately, but maybe in the future. You can do that in other languages as well. I think just to answer the question, I would like to help with the stand of my seat, how I can help. I think you're just talking about this OSM database, so it's open OSM and making your editions, but anything more about these questions? Yeah, like if this person is from Brazil, it's very easy, just edit the data on OpenStreetMap. If this person is not from Brazil, you can contact us to see how we can work together on this new international version of Ciclamapa. Okay, the last question is, have you noticed any changes in OSM participation since you started these projects, have these encouraged people to contribute to OSM? Can I take this one? Yeah, we've been trying to improve the co-operability of some cities that we know that they have a Ciclin network, but it's not mapped. So in UCB, UCB is a national association, so we have contacts from people from all of the states. We're trying to make contacts to people from specific states to improve the co-operability, and I'm trying to teach people how to map. So I've already managed to do that in two capitals from the north region of Brazil, that is the least mapped region. And we're trying to do this year, we're trying to do another kind of map atons, so more people could learn how to map and contribute to the cities. Yeah, and just to compliment, this is a great question because in the beginning of the project we were like, do we want to use OSM data, which is not very complete? And it's a chicken egg problem because if you don't have a tool like Ciclamapa, which is very easy to visualize the data, maybe, when we have heard people that use OSM saying, okay, I put the data there, but there's not an easy way to consume it and to be used by the general public. So we're trying to do this, make this wheel spin, and I think it's starting to happen. Okay, where's Nacelle? I'm here. 
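As a rough sketch of how an indicator like PNB (people near bikeways), described earlier, can be computed from this kind of open data; the file names, column names and the even-population assumption are invented here, and ITDP's actual methodology is more refined:

    import geopandas as gpd

    # hypothetical inputs: bikeways downloaded from CicloMapa, census tracts with population counts
    bikeways = gpd.read_file("bikeways.geojson").to_crs(epsg=31983)   # a metric CRS for Brazil
    tracts = gpd.read_file("census_tracts.gpkg").to_crs(epsg=31983)   # must have a "population" column

    # 300 m buffer around all cycling infrastructure, dissolved into a single geometry
    near = bikeways.buffer(300).unary_union

    # assume population is evenly spread inside each tract and count the share inside the buffer
    covered_share = tracts.geometry.intersection(near).area / tracts.geometry.area
    pnb = (tracts["population"] * covered_share).sum() / tracts["population"].sum()
    print(f"People near bikeways: {pnb:.1%}")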
Because I have a last question. You talk about hack atons and map atons, so anything made these questions is a total of double in spirit. And if you have any issues, so people collaborate during the Hacktoberfest. Yeah, that's great. We joined this last year. We have lots of issues on GitHub. We're going to try again this year. Last year we tried and we didn't have many contributions other than people fixing typos but I think it's a common thing with Hacktoberfest. People just want to be sure, I don't know. But we're going to try again and if this person is a developer, just access our GitHub and we have lots of issues that we can discuss. Everything is in English in the project, so I hope it's easy to collaborate. We have one minute. We have another question. So be fast if you can, but I think there's an important question. It's difficult to answer. Just one minute. You guys, you can share in the chat the contacts, the email and GitHub projects in the chat of the journalists for the rest of people. And I think you can answer these questions in the mail for the people because your time is up and I need to go to Nicolas and Fiorella now. Yeah, thanks, Narsaljo. And we're going to be on the social gathering right now if anyone wants to come and talk to us and chat a little bit. Thanks everyone. It was great. Great presentation, guys. I'm so proud for this project. Come to Brazil and nice to meet you in person, Cristiano, Felipe is my friend in Fortaleza. So guys, thank you for your presentation and I'll see you in social gathering later for talking more about this project. Definitely. Thank you. Gracias. Bye. Gracias. So the next presentation is in Spanish so we have one minute for change the configuration here and go back with Nicolas and Fiorella. Thank you.
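For reference, a minimal sketch of the cache-first, Overpass-fallback loading described in the talk; the Overpass query, the tag filters and the cache interface are simplified stand-ins and not CicloMapa's actual JavaScript code:

    import requests

    OVERPASS_URL = "https://overpass-api.de/api/interpreter"

    def fetch_cycling_ways(bbox):
        """bbox = (south, west, north, east); returns the raw Overpass JSON response."""
        south, west, north, east = bbox
        bbox_str = f"{south},{west},{north},{east}"
        query = f"""
        [out:json][timeout:60];
        (
          way["highway"="cycleway"]({bbox_str});
          way["cycleway"~"lane|track|shared_lane"]({bbox_str});
        );
        out geom;
        """
        response = requests.post(OVERPASS_URL, data={"data": query}, timeout=90)
        response.raise_for_status()
        return response.json()

    def get_city_data(city_id, bbox, cache):
        """Try the cache first (Firebase in CicloMapa); only hit Overpass on a miss."""
        cached = cache.get(city_id)       # hypothetical cache interface
        if cached is not None:
            return cached
        data = fetch_cycling_ways(bbox)
        cache.set(city_id, data)
        return data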
|
In Brazil, we face a big challenge of not having the cycling infrastructure data openly available for our cities. We've developed the first cycling maps platform containing all Brazilian cities, leveraging the data and collaborativeness of OpenStreetMap (OSM). It's an open-source web application, free and accessible from computer or smartphone, aimed at both the average citizen who wants to know more about their city and researchers and policymakers that now have easy access to standardized data.
|
10.5446/57434 (DOI)
|
this gives them a better idea of who they are and what they actually use. And now we will be having the talk by Stelios Vitalis: CityJSON, 3D city models for everyone. Is that yours? Will you be, I don't know, you are sharing already, sorry. Okay, so, go. Okay, thank you very much. Good evening everyone, buenas tardes. So this is a presentation; I'm Stelios Vitalis, I am from the 3D geoinformation group at TU Delft, and this is a presentation about 3D city models for everyone. So basically, about CityJSON and how we try to use it to do fun stuff with 3D city models. So let's start with what 3D city models actually are. So, 3D city models are... it's probably a bit broken, can you see my slides? I hope so. So 3D city models are a way of representing the urban environment with 3D data. You can think of them as some sort of 3D shapes, although they are more structured. So the main idea is that there are standardized features, like we have buildings, we have roads, we have vegetation and stuff, right? And there's a standardized way of how to represent these things. But besides this flat thing, as we normally have in GIS, we also have semantics. So, think of a building, right? Usually in GIS we only have, like, the geometry of the building and its attributes, right? But sometimes we want to have a deeper hierarchy. So we want to have information about the individual roof or wall surfaces, and we want to have both the type that they are and also sometimes attributes about them. So let's say one wants to store the solar potential of a specific roof slope: for this specific surface, they could attach certain attributes there. So this is the main characteristic of 3D city models: these semantics, this complex data model that they have. So 3D city models can be used in certain applications; we can see some examples here. Sorry for interrupting you, but we see the slides cut off. I don't know if they are like that or... No, they are not in the... Can you try sharing again? Maybe something is cut off at the top. One moment, sorry for the problems. Let's see now. Okay, let me try to put them up. Okay, can you try changing it, just to be sure? It's still happening. Yes, I don't know why it's not working. Can you try maybe in window mode? I don't know, maybe it might be the full screen. Let's see that. Oh, that's better, right? Okay, now it's working perfectly. Okay, perfect. Awesome, really sorry about that. No problem, sorry. Anyway, so as I was saying, these are the models, they have this complex data model, et cetera, and they can be used in certain applications. So here are some examples. They can be used for solar potential estimation, for visibility analysis, 3D cadastres, all these things. For instance, in the Netherlands now, the new environmental legislation really needs 3D data, because it has a 3D algorithm to compute these things. So you can see some examples there. The last notion about 3D city models that I guess it's better that you know is the notion of LoDs, levels of detail. This is much like the generalization aspect from cartography, I suppose. Depending on the application, you might need different complexity of geometry, because we don't always need the most complex geometry for every application. And therefore we have this notation of LoD 0, 1, 2 and 3. That was initially a simplistic classification; think of the first column that you see over there on the left.
But then it was refined to take the horizontal aspect into consideration as well. So you can have, for instance, LoD 1.2, which means prismatic objects with enough detail on the horizontal aspect, or you can have LoD 1.0, which basically means just some bounding blocks or something. So why am I talking about 3D city models for everyone? Well, we've had CityGML for a long time, and it has been an OGC standard for a while. And the problem with CityGML is that there wasn't much software support. For those of us who are developers and were working on developing software against it, we figured out many times that it's really hard to work with CityGML files. And that's not a problem of CityGML per se as a data model, because there are two aspects to it: there is the data model, as I said, all the theory about how you describe objects and how you subdivide them into individual semantics and surfaces and stuff, but this was more about the inherited problems that you get from the GML encoding. And I think if anyone has ever worked with GML files, they probably understand what I'm talking about. So the idea was: could we come up with a JSON encoding of the same data model and simplify things? Because with CityGML, quite often we would develop some pieces of software and we were trying to do it right for the most part, but there would be some corner case here and there, and quite often it would be a corner case that would break things. And so we came up with CityJSON, and the focus of that was to make it easier for developers and for users. So you can go to cityjson.org right now and you can see everything about CityJSON. And the point is, of course, that we need to have more information, it needs to be more transparent, and it needs to be easier for users to approach. And this is what I'm going to talk about today, mostly: the ecosystem, the software that you can use, and mostly what kind of fun things we can do with CityJSON now that we have the tools for all these things. It was started three years ago by my colleague Hugo Ledoux, and everyone from the group started contributing. And now I think people from different groups and disciplines and companies are also contributing to the project, and that's really nice. So if you go to cityjson.org, you can find some tutorials for users, how to download data, how to visualize it and stuff; you can just go and read about it. And if you are a developer, you can also go there. I think the specification itself is quite straightforward, but you can also go there and get the specifics of how to work with CityJSON specifically. Of course, there's a lot of CityJSON data. Most of it comes from CityGML datasets that were released and have been converted to CityJSON, but more and more we see new cities or new organizations just releasing data in this format. And I think a good example is a project that I presented yesterday and that my colleagues Balázs Dukai and Ravi Peters are going to present tomorrow as well. It's called 3D BAG, and it's 10 million buildings for the whole of the Netherlands. And it has buildings in three different LoDs, so from simple prismatic buildings to more complex roof structures. Yesterday we presented the viewer, and tomorrow there's going to be a presentation about the data itself and the process of reconstructing it. And it is released in several formats: it has been released in OBJ, for instance, and in GeoPackage, and it has also been released in CityJSON.
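To make the data model tangible, here is a heavily abbreviated, schematic CityJSON-like fragment, written as a Python dict, showing a building whose roof surface carries its own attribute, as in the solar-potential example above; the geometry arrays are left empty on purpose and this is not a complete, valid file:

    city_model = {
        "type": "CityJSON",
        "version": "1.1",
        "transform": {"scale": [0.001, 0.001, 0.001], "translate": [85000.0, 446000.0, 0.0]},
        "vertices": [],  # integer coordinates, decoded through "transform"; omitted here
        "CityObjects": {
            "building-42": {
                "type": "Building",
                "attributes": {"constructionYear": 1996},
                "geometry": [{
                    "type": "Solid",
                    "lod": "2.2",
                    "boundaries": [],  # nested lists of vertex indices; omitted here
                    "semantics": {
                        "surfaces": [
                            {"type": "GroundSurface"},
                            {"type": "WallSurface"},
                            # semantic surfaces can carry their own attributes,
                            # e.g. the solar potential of this particular roof slope
                            {"type": "RoofSurface", "solar_potential_kwh_year": 3200},
                        ],
                        # one index per boundary surface, pointing into "surfaces" above
                        "values": [],
                    },
                }],
            }
        },
    }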
And actually there we also saw the power of these complex structures that I mentioned before, that we can encode these things in a more natural way, I think. But what I mostly want to focus on today is the software. If you go to cityjson.org again, there's a menu option for software, and there you can see all the pieces of software that support CityJSON. As you can probably see, it's pretty much all open source, with the exception of FME probably. So I'm going to start with what we provide, what kind of software we develop for users to play with and what you can do. I think the logo might be on top of the links; it's kind of a pity, but I'm going to share the presentation afterwards in the comments anyway, since the links have been hidden by the logo. So the first one is cjio. cjio is a Python library and a command line interface. You can just install it, if you have Python, by using pip. And it's basically the Swiss army knife of CityJSON: you just use it from the terminal and you can split files, you can extract LoDs, you can compress and decompress, you can merge files, you can do a whole bunch of things. But this is more of a terminal tool for developers who want to do things with Python, so I wouldn't say it's the most user friendly, but it's definitely a very feature-rich tool, and it's probably the one where we implement the most things. The second one is a macOS viewer, which is being developed by Ken Arroyo Ohori from our group as well. It's not a CityJSON-specific viewer; it supports other formats as well, like OBJ, and it used to support CityGML, but I think CityGML support is being dropped now. It's open source as well: you can find it in the App Store and just install it, or you can find it on GitHub. And it's quite fast and it gives you all the information you want to know. The third one is ninja, which is a web viewer. It's the official CityJSON viewer. You can just go right now to ninja.cityjson.org and it's there. So as long as you have downloaded a CityJSON file, you can just drag and drop it there and you can immediately start navigating around. And the cool thing is that you can see both the 3D data and the semantics on the left, and you can start understanding the hierarchy there, like the buildings, what parts they really consist of, and all this information. And if we have enough time, I might be able to show you a small demo of that, depending on how many questions we have. Another one is the QGIS plugin that we have developed, again. It's called CityJSON Loader. You can just find it in the plugins repository of QGIS: if you open QGIS and go to plugins, you will see that CityJSON is there. And actually, I think this is very interesting and quite promising. It has been developed for about two and a half years probably, and you can use it to load all the data. The interesting aspect here is that, as I said before, there's this complex data model, and in GIS we have this flat, relational data model, right? So there's always a problem of how to go from one to the other, how you are going to have both buildings and the individual surfaces and so on. So there's a bunch of options for how you want to load the data, and then, unfortunately, you either have to have redundancies or you just have to discard some data. But nevertheless, you can load the data and do fun stuff. For instance, you can load the 3D BAG in, let's say, LoD2, so you have semantic surfaces and stuff, and then you can split it by surface.
So you have the floor surfaces, the wall surfaces and the roof surfaces as individual features. And then you can filter on the floor surfaces, for instance, you can merge these things and then you can keep only the footprints, just like that, from the 3D BAG data. That's one example. Later on, I'm going to give you another hint about some things we are working on for QGIS. There's some basic analysis you can do with QGIS. There's also a 3D view in QGIS since 3.0, which is cool, so you can use that as well. But there is not yet that much that you can do with 3D processing, for now at least. And the last thing that I want to talk about from the stuff that we produce is this Blender plugin, which is called Up3date. This is basically the outcome of the thesis of a master student, and we also helped a bit to make it faster and so on. And it works quite well, surprisingly. You can find it on GitHub, you can download it, you can install it in Blender. If you're an architect or something, you can just load the 3D city model and maybe load a BIM model next to it, or you can edit the individual city objects and you can actually save back the CityJSON model. So you can make individual changes there. That also opens the possibility of using 3D city models for architecture or even civil engineering or whatever you want. Of course, Blender, QGIS, all of these are open source software as well. So everything is open, you can use it, and on top of that, the plugins themselves. Now a few examples of what others did with CityJSON, so how the ecosystem is being built by others, apart from us. There is citygml4j and also citygml-tools, which is a tool based on it. It's a Java library and it's built by Virtual City Systems, a company in Germany. It's open source; you can download it to load CityJSON data and convert it to CityGML, et cetera. And you can use citygml-tools directly from the command line (it does pretty much what cjio does, but for CityGML) to convert to and from CityJSON. This next one, to be honest, I became aware of only this morning: a toolkit adding support for CityJSON. As far as I understand, this is a toolkit for 3D city analysis or something like that. But I think it was interesting. It's open source; there is an open source license, and there's dual licensing, so you can also purchase something, but still, cool that these things exist. This is an extension that has been developed by another group, I don't remember the name now, that's not nice of me, I'm sorry about that. But anyway, CityJSON and CityGML have this concept of extensions and ADEs, respectively. The point is: if you are interested in something, let's say energy consumption or something, and you want to standardize that. So let's say you are interested in how to model energy consumption per building; then you want to say that, for energy consumption, I'm going to have this and this and this value, and the range of these values is supposed to be this, for instance. And then, this way, you release that as an extension, and other people can use that extension to know that they should follow the same notation, and they can check whether they are valid against this notation. And this specific extension is about point clouds, how to store point clouds in CityJSON itself, for whatever reason you might want to do that.
Now, there is one work in progress, which is the support for OGC API Features, because, you know, like WFS, it's supposed to be more agnostic about the content itself. So one of the notions is that we could possibly use CityJSON there, and I think JSON works well with the web. So, yeah, we're working on some things here and there are some prototypes; there was a student thesis on that, but that's still a work in progress. Then there's versioning, which I'm very excited about because it's the topic of my PhD, which I'm supposed to finish very soon. And this is basically a Git for CityJSON: a way of having, let's say, some kind of repository where you can have all your versions of city objects and you can look into their lineage and stuff. And it's not only something that exists on a piece of paper: I have also developed a prototype, which you can find on GitHub, and you can already play with it. So you can start having versions, branches and so on. I'm not saying it's 100% robust; I haven't seen it breaking, but we haven't tested it that extensively. And yeah, the last bit I would like to talk about is applications, because applications are fun. Like, it's fun to have all this stuff, it's fun to have all this data, but most of the time what we've seen in cities and organizations is that city models were just nice visualizations. Everybody's excited, and it's nice and it's cool, or someone just happens to take it somewhere and do something, but for the most part we don't see them utilized that much, to be honest. So the idea is: what kind of fun stuff can we do? So this is a piece of research that we started basically a couple of months ago, maybe more than that. By this time we had figured out PyMesh and PyVista, which are two Python libraries, one for geometry processing and one based on VTK, and it's very easy to load CityJSON data into them. Later on you're going to see a repository where you can find examples of how to do that. And you can do a whole bunch of fun stuff. So for instance, here you can see that with PyMesh we can do certain operations like intersections, differences, symmetric differences, stuff like that. Here you can see how we can use PyMesh to do the operation and PyVista to visualize these things. At the top left, you see the green outline is the original LoD1, the red outline is the original LoD2 from the 3D BAG for one building, and then you see the intersection of these two. So I think it's cool that we can compute these things now. You can do some proximity analysis, as you see at the bottom left. At the top right, you see some kind of clustering of surfaces by normals, which is cool when you have triangulated surfaces and you want to group them together. And we use that to compute, for instance, shared walls between buildings, which is really useful for applications; for instance, we know of an insurance company that was interested in computing statistics about these things in order to evaluate the risk of a fire or something. Or you can compute grids, like you can see at the bottom right. And you can compute a whole bunch of stuff. And yeah, we're doing 3D shape analysis based on that: we're computing a whole bunch of things like the volume of the object, the volume of the convex hull, the volume of the bounding box, and a whole bunch of metrics that are supposed to describe the shapes. You can find this at this repository.
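In the spirit of the analysis just described, here is a small sketch using PyVista only; the mesh file names are invented, and the group's actual pipeline (including its PyMesh parts) is more involved:

    import pyvista as pv

    # hypothetical triangulated building meshes exported from a city model (e.g. as OBJ)
    lod1 = pv.read("building_lod1.obj").triangulate()
    lod2 = pv.read("building_lod2.obj").triangulate()

    # simple shape metrics for closed surfaces
    print("LoD1 volume:", lod1.volume)
    print("LoD2 volume:", lod2.volume)
    print("LoD2 surface area:", lod2.area)

    # boolean intersection of the two representations of the same building
    intersection = lod1.boolean_intersection(lod2)
    print("intersection volume:", intersection.volume)

    # quick visual comparison
    plotter = pv.Plotter()
    plotter.add_mesh(lod1, style="wireframe", color="green")
    plotter.add_mesh(lod2, style="wireframe", color="red")
    plotter.add_mesh(intersection, color="lightblue", opacity=0.6)
    plotter.show()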
It's currently ongoing research and we're going to publish it quite soon, but the repository is already public, so you can just take a look at the code and have fun with it as well. You can run it against any CityJSON dataset and you get all these values; I think it's about 30 to 40 different metrics and pieces of information about the data. And we're planning to use that for doing some urban morphology analysis. The other thing, which I think is pretty cool: I started this, to be honest, personally as a side project this summer, for fun. Yes, that's how we have fun this summer, apparently, with Corona. So this is a way to incorporate PyVista into QGIS, which isn't that easy because there are issues, specifically, with how to install it. But the notion is that you could have, for instance, I don't know if you can see, but at the top left you can see an expression, and there are two functions there that I was able to quickly add, to compute the volume of a geometry or to identify whether it's a solid. And you can use that, for instance, in the field calculator to add one extra field, or you can use it to do customized conditional formatting or something like that. And a bunch of analyses; for now only a couple, but we're going to easily add all the metrics that I mentioned before to the Processing toolbox, and hopefully soon you will be able to do all the 3D analysis that I mentioned before from inside QGIS. So thank you. I would be happy to answer any questions. Yeah, and these are my contact details, for the most part. Hi. Thank you very much for your talk, Stelios, it was amazing. And yes, we have some time for Q&A and we have some questions. The first one is: do you find CityGML difficult to work with because of the XML encoding, or do you think the model itself is too complex, for example supporting too many use cases? That's a tricky question. The first obstacle that one finds is the GML problem, right? The data model, yes, can sometimes be very complex. The problem is that, if I want to be honest, I mean, that's happening also to an extent with CityJSON: we're still trying to figure out how practitioners eventually want to use these things. So we need to experiment more, we need more people to actually work with these things, and I think eventually all these things are going to be simplified. So yes, CityJSON can sometimes be bloated; I think it has been bloating more and more because we keep adding things theoretically, without having people that actually ask for these things and actually apply them. But I think it's okay as long as there is an open mind, and we can always go forward and adapt. All right, I see. Well, the second question is, sorry, are there already best practice approaches on how to convert between CityGML and CityJSON? What was that? I'm sorry, I missed the first part of the question. Yes, sorry: are there best practice approaches on how to convert between CityGML and CityJSON, best practice approaches to convert between them? Well, for files, I've actually used citygml-tools to convert from CityGML into CityJSON files, so that's just fine, right? Now, databases; databases are another aspect. I didn't touch that, but there's still the 3DCityDB, which is closer to CityGML. And this is one problem that we would like to tackle as well: the 3DCityDB is also a bit overcomplicated, to be honest, and we are also trying to find ways to have a more simplified version of that. We have some master theses, but we don't have it yet. And so forth.
But if you want to go from CityGML to CityJSON, you can just download citygml-tools — you can look it up; it's from citygml4j, the same library. It's Java, so good luck with that. But it works for the most part; we can convert CityGML into CityJSON — that's what we did as well — and the other way around, of course, if you need it. Okay, thank you very much. And I think we have time for more questions. The next one would be: can you explain better how you publish CityJSON data as OGC API Features? Well, CityJSON, I suppose, because CityGML — I haven't worked that much with that. The point is that with OGC API Features you have a collection, which before was probably just one whole file, and you can say: give me the first 10 objects, or give me the objects that are in specific categories. So I want to be able to ask for 10 buildings that have certain attributes — or however many buildings have a given attribute — and paginate these things. And because CityJSON is JSON, we should be able to subset it: instead of giving you the whole file, we can select just those 10 objects, quickly build a CityJSON file and share that. There are some technical issues with the fact that CityJSON tries to compress some things, and we need to figure this out, but for the most part this is being undertaken now. We still need to see how this goes with clients and so on. I hope that was a good explanation; if you have more complex questions, I would be happy to discuss them afterwards. Sure, we have a short time for one last question: what are the use cases of CityJSON compared to 3D Tiles? Oh, that's a good one actually, because I was supposed to talk about 3D Tiles and didn't find the time to do so. Essentially, 3D Tiles suffers from the same issue that glTF data suffers: it's super fast, which is a good thing, but it's flat, and that's where the CityJSON models don't fit. For the 3D BAG that I mentioned, we work on the viewer and we have the CityJSON data, but we want to deliver it fast, and if you want to do that you eventually have to go with something like 3D Tiles. But then you have to decide — it's the same as when you convert CityJSON data for huge areas — you have to decide which data to discard. For instance, let's say I only care about the building as a whole and I don't care about the individual semantic surfaces, therefore I discard them. Or I do care about semantic surfaces, which means I may need to copy all the attributes of the building to every surface individually. So eventually you either have redundancy or you lose some data. But 3D Tiles has its own useful aspect: it can be used for disseminating data fast and on the web, and CityJSON is probably not the most appropriate way to do that — it's more a format for downloading static data. Okay, I see. Thank you very much, and I think that's time. So thank you very much for your talk, Stelios, and for being here at FOSS4G. Thank you very much. Good night, buenas noches. Good night. And we will be seeing you at FOSS4G.
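In the spirit of the QGIS expression functions mentioned in the talk, here is a minimal sketch of how a custom expression function can be registered from the QGIS Python console or a plugin. The function name and the flat "footprint area × height attribute" volume formula are illustrative assumptions, not the plugin's actual implementation, and the import path assumes the standard qgsfunction decorator exposed by qgis.core.

```python
from qgis.core import qgsfunction

@qgsfunction(args='auto', group='Custom')
def block_volume(height, feature, parent):
    """Approximate a building volume as footprint area * height (LoD1-style).

    Example use in the field calculator:  block_volume("roof_height")
    """
    geom = feature.geometry()
    if geom is None or geom.isEmpty():
        return None
    return geom.area() * float(height)
```

Once registered, such a function can be used in the field calculator to populate a new volume field, or in rule-based styling for the kind of conditional formatting the speaker demonstrates; the real plugin's function names and internals may differ.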
|
This is a presentation about CityJSON and the variety of open source tools that are available around it. CityJSON is a file format that can be used to exchange 3D city models. The talk will focus mainly on ninja, the QGIS plugin and the Blender plugin and some details about their developments and our plans for them will be shown. A brief overview of all software options will be presented and the suitability to different applications will be investigated. Also, certain other state-of-the-art developments regarding CityJSON will be mentioned (e.g. dissemination through OGC API Features, serving data through 3D Tiles and versioning of CityJSON models).
|
10.5446/57435 (DOI)
|
I'm going to share with you some insights of this project to the Olympia Valdivia on behalf of the whole team behind it. So I will start with a brief overview of the project then I will describe some of the design and implementation details and I will end discussing the current status and future of work. So what is the Olympia Valdivia project? First, this is a work in progress. We should have been presenting the results of the first phase at this time but we have been affected by several crises, first the social unrest in Chile and then the worldwide pandemic crisis. So let's start by introducing the team. So we are four people, led by Medigracia Valinas. She conceived the main idea of the project and she also takes care of all the GIS and special planning aspects. Our main developer is Jose Gatica. He also entertains us with his music performances. It used to be live but now because of the pandemic it's only by streaming. Camila Lagomarsino, she's in charge of the topics really related to environmental education and recycling and I try to keep the design consistent and at the same time I try to look for the best ways to process the data, hopefully adding some components of computational intelligence. Okay, so why are we developing this tool? This sort of short story is to be patient with us. We will get to the tool right away. So since the tool is closely related to the city of Valdivia and its people, I will give you some little background information about it. So for those who doesn't know the city, it's a very beautiful small city in the south of Chile. It's almost at the same latitude as San Martín de Los Andes, a few hours away if you don't consider the time spent at the border. Of course it's also close to Valdivia. It has an important shipyard industry, agriculture, forestry and aquaculture activities. It is also an important touristic destination in Chile. It has a lively cultural life. There are important music and film festivals which held out every year in the city and also one of the major universities in Chile is based in Valdivia. On the other hand, I must say that people who live in Valdivia are very conscious and very mentally speaking. So how did this start? Back then around 2009, Mary Grace noticed that when the waste collection system in service in the city was disrupted, at that time it was because of a workers' strike. The garbage accumulates on the street and people in general seem to not care. Everyone, including myself, was we were just waiting for the service to resume operation because we are used to that, to the service working. But it was a bit annoying that the garbage accumulates on the street and it happens a couple of times afterwards. So in 2018, Valdivia was selected as one of the Chilean cities for the Willow Cities contest and in that occasion Mary Grace submitted a very rough idea of something to improve the waste collection and avoid the problems when the system is disrupted. Almost at the same time in the city, a consortium was formed to promote Valdivia to become a smart city. This consortium was formed by the Universidad, so the Universidad de Chile, and the Faculty of Engineering where I work. The municipality of Valdivia, a local development NGO, and a couple of local companies. And the municipality seemed to have liked the idea, submitted to the Willow Cities contest and included it in a list of funding opportunities to develop proof-of-concept small projects related to smart cities. 
So then Mary Grace took the chance to submit a proper project to the proposal to develop a tool focusing on the disruption and certainties of the service and also related to something to tackle a problem we have in the city which is the illegal dumping microsites. So places where people go and throw garbage without permission. Of course it's in the middle of the city. Okay, so the solution is all about connecting people and let the information flow. So basically we will have the users over here which can access the system through web or Android applications to this information system. They will be able to see the location of the waste collection tracks in real time in a dynamic map. They will be able to select a point in the map and the closest collection route will be shown and the estimated time of arrival to position later in the route will be calculated. The users will be able to access educational information related to the environment and recycling also how to manage household waste in a better way and the users will be able to submit the reports of illegal dumping microsites when they find one with photos and geolocation through a form in the applications. On the other hand the municipality can notify the users about changes or last-minute updates on the service and for future versions we would like to add more features such as waste segregation which is not currently implemented in the service in the city and the smart waste containers but we need to hard work for that to work. At the beginning the project was born from the perspective of the users and the idea was a bit fussy probably very idealistic and most probably not feasible to carry out with the specified resources of people and time. Through this consortium, this is a Smart City Valdivia consortium, we had access to the people in the municipality who actually run the service and we managed to meet with them before the pandemic it was in person as you can see here but later we had to move to online meetings only but we had very good flow information with them and they provided very valuable and fundamental feedback and that actually helped us to make our project achievable and feasible. They also put us in contact with the company which provides the satellite positioning of their tracks and they granted full access I will talk a little bit more about that later. Together with the municipality also we selected a neighborhood in the city which didn't have many complexities in terms of the service compared to other neighborhoods so this particular place is ideal to test the system without any further perturbances outside our own mistakes in the development. So as soon as we had this neighborhood selected we designed and then conducted a survey to establish a baseline in terms of the knowledge of the user with respect to the service also the interest in the proposed solution. Most people only knew the very basics about the collection service basically the day and the time the day only that the track will pass through their house and almost nothing else and they were all very interested in testing the application and also had the resources to do that basically we would need internet access and some device to run the application either a web browser or the mobile app. Okay let's move to the implementation details a little bit on that. 
Okay, so we have two basic ways to access the system: one is through a web interface, the other a mobile Android application. The web interface was developed using the Bootstrap CSS framework in order to build a responsive, mobile-friendly front end. Originally we were writing the logic in JavaScript, but now we have moved to Python code using the Brython interpreter — a very nice solution to run Python code in the browser. The dynamic map showing the trucks is rendered using Leaflet, controlled directly from the Python code. The truck positions, which are provided by a third party, are collected by a component built using FIWARE and then republished to our clients. FIWARE, for those who don't know it, is an open-source platform for "smart anything" — smart agriculture, smart industry, smart cities, smart houses — and it provides very well-defined interoperability interfaces. Actually, we could have developed the whole application using FIWARE, but we are just starting to learn about it, so only one component is built with this framework. The form to report the illegal dumping microsites is handled by the well-known FormMail script, in this case the PHP version by Tectite. And highlighted here is a test version of the dynamic map running with offline data, not the real-time data. Somewhere there — I don't know if you can see it, it's a bit small — you can see the truck; the blue line is the part of the route which the truck has already traversed, and the red line corresponds to the remainder of the route. We have been doing some distance calculations as well. The mobile app, on the other hand, was originally developed using the Ionic and Angular frameworks together; you can see here the main screens of the current version of the app. However, the workflow with Ionic and Angular has not been friendly enough for us — it works for other people, but for us it hasn't — so we are now moving to Python code also for the app, using the Kivy framework, which allows cross-platform development. We can develop and see the results on the desktop and then move to the mobile version and see almost the same thing.
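As a rough illustration of the FIWARE component described above, the sketch below pushes a truck position into an Orion Context Broker over its NGSI-v2 REST API. The broker URL, entity type, attribute names and the use of the upsert option are my assumptions for illustration, not the project's actual configuration.

```python
import requests

ORION = "http://localhost:1026"   # assumed Orion Context Broker endpoint

def upsert_truck_position(truck_id, lat, lon, timestamp):
    """Create or update a GarbageTruck entity with its latest GPS position."""
    entity = {
        "id": f"urn:ngsi-ld:GarbageTruck:{truck_id}",
        "type": "GarbageTruck",
        "location": {
            "type": "geo:json",
            "value": {"type": "Point", "coordinates": [lon, lat]},
        },
        "observedAt": {"type": "DateTime", "value": timestamp},
    }
    # options=upsert creates the entity if it does not exist, updates it otherwise
    r = requests.post(f"{ORION}/v2/entities?options=upsert", json=entity)
    r.raise_for_status()

upsert_truck_position("t-07", -39.8142, -73.2459, "2020-09-01T14:03:00Z")
```

Clients (such as the Leaflet map) could then subscribe to these entities or poll them to show the truck positions in near real time.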
So these are the components that interact with the users. From the point of view of the municipality, which is the service provider, the municipality has to provide two main data streams. One is the service status updates — any disruption, changes in schedules, and so on. We didn't want to add a new tool for the municipality to learn, so we decided to build something unobtrusive for them. They were already using Instagram and Twitter for their communications, so we developed a Python script, running on our server, which uses the official Twitter API to catch the posts from the municipality's account that carry a specific hashtag designated to work with us. It collects those posts and then pushes them to the users, using either the Web Notifications API or the Android notification API, depending on the case; we are testing a couple of open-source solutions to do that right now, which I will also talk about later. The other data that gets into the system is the real-time positioning of the trucks. The trucks all have satellite positioning through the GNSS network and send the information over the mobile network to the provider's proprietary servers, where it is published through a proprietary API. We got access to it thanks to the municipality, which luckily had a contract that forced the provider to grant access for custom developments, and the technical staff from the company have been very helpful in guiding us to access their data. I must say that, unfortunately, their API is not well documented and it is not self-discoverable — it's a REST API, but not self-discoverable — so we needed their help in order to develop the solution. During the pandemic we haven't had much communication with them; they made some upgrades, we haven't upgraded our system, and currently that link is broken. But as soon as we re-establish contact with the company, we hope to bring back the bridge between their API and our programs. We also have access — and this part is fully functional — to the historical data of the trucks' routes, going back to the day the GPS system was installed on the trucks. We will use this historical data to train a machine learning algorithm to estimate the time of arrival at a future position, given the actual position of the truck, in real time. So let's talk a little bit about this, because it's giving us some problems and we want to share it — maybe you can give us suggestions on how to solve it. Let me highlight something here, in case you have tried this before. We have the GPS points, the satellite positions of the truck on a particular day — all the points the truck passed, at a one-minute interval. And a one-minute interval is too long, even for a garbage collection truck; many things can happen in one minute. If you look at the white line with blue borders: if we just connect the points, it seems like the truck has been passing through buildings and houses, and that's not right. So connecting the points will not give the actual route the truck traversed, and we needed to do something else. We couldn't find a ready-made solution for that, so we tried something ourselves, and from the satellite positions we managed to obtain these red lines, which fit nicely to the actual streets. You may be asking how we got them: we used a routing algorithm, but not in the traditional way.
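Relating to the service-status notification flow described earlier in this passage (the script that watches the municipality's Twitter account for a designated hashtag), here is a hedged sketch using the classic Tweepy client for the Twitter v1.1 API. The account name, hashtag, credentials and the notify() stub are placeholders, and the real pipeline may use a different client or API version.

```python
import tweepy

# Credentials and names below are placeholders.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

HASHTAG = "#CiudadLimpiaValdivia"      # assumed hashtag agreed with the municipality
ACCOUNT = "MuniValdivia"               # assumed municipal account

def notify(message):
    # Stub: forward to the Web Push / Android notification backend.
    print("push:", message)

seen = set()
for status in tweepy.Cursor(api.user_timeline, screen_name=ACCOUNT,
                            tweet_mode="extended").items(50):
    text = status.full_text
    if HASHTAG.lower() in text.lower() and status.id not in seen:
        seen.add(status.id)
        notify(text)
```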
We used it in a slightly different way. In this case we use the pyroutelib3 routing library, by Oliver White and Mikołaj Kuranowski, which is based on the A* shortest-path algorithm. We use the car mode, so it takes into consideration the direction of the traffic flow, and it is based on OpenStreetMap data — that's one of the reasons why we chose it. And how did we use the algorithm? Basically, for every pair of consecutive positions of the truck, which hopefully are close enough, we run the routing algorithm to find the street path that would connect those two positions. We then gather all those segments, join them into a multi-line geospatial object, and save the complete route, which we can use to train our machine learning algorithm or to do other analyses. However, this is not yet working entirely right and we need to improve it. I think it's a good starting point, but let me show you what's not working — there are several things. Here the truck went straight ahead and then turned right, but there are only three points in this part: one at this intersection, another point midway, and another at the intersection where it turns right. For some reason — apparently some kind of resolution problem — the middle point was not detected on the correct side of the avenue but on the other side, and because of the traffic flow the routing algorithm decided that the optimal route is this loop around the park. That would be correct if you wanted to get to that point, but it's not what the truck did; the truck just went straight. These kinds of situations we have to improve — we don't have the solution yet, so any suggestions are welcome — and once we have that, we can build the last missing component, which is the prediction of the truck's time of arrival. Let's start wrapping up. What do we have? We have the dataset of truck routes since October 2018, a pretty good amount of data to work with, and we have pre-processed it up to August 2019. Pre-processing basically means normalization of the data, because the provider changed the data format in between — they changed the units of speed and distance, they changed the date strings, the timestamps — so we have to make everything homogeneous before it can be processed automatically. We have our web client mostly up and running, all the static parts, and the report form is working. We also have the first batch of environmental education material, like the picture on the right showing the concepts of recycling, reusing and so on. Some of this information has already been released through social media channels, but as soon as we have the alpha version of our system, we will also use the in-app notifications for educational campaigns. Several components still need some work, but most of them are more than 50% implemented, so we should soon have everything ready for the first trials. As I told you, we are trying to decide which tool to use for notifications — we have two candidates, Notify and AirNotifier, and we have made some tests with them. We are migrating the mobile code base to Python using Kivy, in particular KivyMD (Material Design), and since we now use Material Design in the app, we want to clean up the style of the website and try to match it to the Material Design look of the app.
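A minimal sketch of the route-reconstruction step described above: snapping each pair of consecutive GPS fixes to the OSM street network with pyroutelib3 and concatenating the resulting segments. It assumes the 1.x API of pyroutelib3 (Router / findNode / doRoute / nodeLatLon) and an in-memory list of fixes; the project's real pipeline, error handling and data formats will differ.

```python
from pyroutelib3 import Router

router = Router("car")   # car profile: respects one-way streets, uses OSM data

def snap_route(gps_fixes):
    """gps_fixes: list of (lat, lon) truck positions ordered in time.

    Returns the reconstructed street path as a list of (lat, lon) points.
    """
    path = []
    for (lat1, lon1), (lat2, lon2) in zip(gps_fixes, gps_fixes[1:]):
        start = router.findNode(lat1, lon1)
        end = router.findNode(lat2, lon2)
        status, nodes = router.doRoute(start, end)
        if status != "success":
            continue                     # skip segments the router cannot solve
        segment = [router.nodeLatLon(n) for n in nodes]
        # Avoid duplicating the joint point between consecutive segments.
        path.extend(segment if not path else segment[1:])
    return path
```

As the speaker notes, this breaks down when a fix lands on the wrong side of a divided avenue; snapping fixes to the nearest way before routing, or using a map-matching approach (e.g. pgRouting-based), are the kinds of improvement being considered.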
We also have to improve the security of the forms: they were set up for the testing phase without much protection, we received a lot of spam through them, and that cannot happen when the system goes official, so we need to enable all the security measures. We need to improve the route ground-truth algorithm, as I told you — and I would like to thank Vicky Vergara, because yesterday after her talk about pgRouting we talked a little about this and she made very good suggestions; there are some ideas we can try, and we are actually going to try pgRouting and friends next to see if we can use them to improve this part. That will allow us to start building the route prediction algorithm, which will use a machine learning method not yet determined — I think we will start testing artificial neural networks, and maybe some more classic stochastic alternatives as well. We don't need the route prediction to put together our alpha version, so once we have that version we will carry out live dry-run tests in the neighborhood, and at the same time we would like to do a workshop with other providers besides the municipality — usually small entrepreneurs, small businesses that collect specific kinds of waste such as paper, bottles or electronic components. Maybe they would be interested in at least providing their contact information, so users can find them easily through the same web or mobile app. Just some final thoughts. I want to emphasize that the feedback from the stakeholders was essential to shape the solution in a better way. It happens a lot that, even if you are a user and you are developing a solution as a user, you are not the only user; if you keep only your own ideas, you may be missing part of the general picture. So it is very important to consider everyone who is going to use your solution — in this case the municipality and the neighbors. And of course, trying to develop a solution like this with such a compact team would not be possible without the huge amount of ready-to-use, professional-quality free and open source projects available, and without the vibrant community behind those projects that helps people use them and keeps developing them. In our particular project — and in many others — there is a data processing component that takes a lot of effort, and you have to pay attention to it if you want to build robust solutions. Some final acknowledgments: thanks to the Universidad Austral de Chile and the Innoving 2030 project for the financial support, and to the municipality of Valdivia, which has been great in collaborating with us — they have given their time to provide feedback, they gave us access to the dataset, and we are very happy with their commitment. Agustina Suá also helped us with the design and conduction of the survey, and of course we would like to thank the students who were in the field collecting the answers. These are the credits and licenses of all the materials in the document — I hope I didn't forget anything; if so, it is unintentional. And that's it, thank you very much again for your attention, and if you have comments or questions I think there will be some time now. Thank you very much, Daniel, that was very comprehensive, and it's a very interesting solution that you developed there. I'm looking at our channels for questions — I think the stream has some delay, so maybe the
questions will come in about 30 seconds or so in the meantime I'll just tell everybody that you can contact Daniel probably on this email if you have any any more questions and I'll just if there's nothing coming over or it's gonna just take five minutes four minutes now of a short break so we can keep this schedule running in the same parameters anybody who wants to tune in should tune in at the right time of the presentation so I don't see any questions coming but I'm sure that probably a lot of people will contact you in private yeah sure they can contact me through either venue list I will be around a little bit and by email and I would like to thank and Andreas also I think it's another presenter later because he's suggesting also some solution for the routing problem we have thank you Andreas I'm sure you can have many many good discussions about this and I'm also happy that Vicky contributed and I'm looking forward to see the next page of this information yeah yeah we I guess we will be showing further progress in following conferences there seem to be no other questions from the audience but thank you very much Daniel and looking forward to talk to you more in the back of the thank you have a nice day thank you bye bye bye bye
|
Ciudad Limpia Valdivia is a Web and Mobile application which attempts to tackle some of the aforementioned issues for the Municipality of Valdivia in Chile. It is based on Free and Open Source Software and it is currently in the last stage of prototyping. Its development has incorporated geospatial analyses as well as some tools from computational intelligence and machine learning. In particular, approximately one year of the municipality's waste collection routes captured by Satellite Navigation Systems have been used to study the system’s behaviour for a target neighborhood. Also, surveys were conducted in that neighborhood to evaluate the interest in the application and to assess its potential impact. The waste collection routes dataset will now be used to train a system capable of estimating the arrival of the corresponding waste collection truck to a particular point selected by the user, using machine learning techniques. The “real-time” position of the trucks is shown in the application, thus giving feedback to the user of the current status of the service. Another important feature of the application is the illegal dumping reporting that will be received by the municipality. Ciudad Limpia Valdivia's goal is to be a bridge of communication between the citizens and the municipality. This paper will present the general concept design of the whole application, the survey’s results, the current development status and preliminary analyses of the routes dataset. It will also discuss the next stages in the project as well as future work after the prototyping phase is finished. Additionally, the discussion will address the importance of Free and Open Source Software in this kind of application with a social impact in the community.
|
10.5446/57436 (DOI)
|
Welcome everyone to FOSS4G Day 5. I'm glad you're here with us today in the Humahuaca room. I will be your session host; my name is Arnalie. We have six exciting talks lined up for you today. Kicking it off is Rainier Tasiko, a Climate Reality member from the Philippines who currently works on health information systems of the Philippines. He will talk about climate adaptation from the state of birthing facility accessibility for women in the Philippines. Rainier, take it away. Okay, thank you Arnalie, and a pleasant day to all of you. Let me share my screen. So this is a talk regarding climate adaptation from the state of birthing facility accessibility for women in the Philippines in the year 2020, for FOSS4G 2021. The tagline of my portfolio is: data, where data can save lives. For today's agenda: an introduction and check-in, what natural hazards are, the pillars of the health care system, the importance of OpenStreetMap here — buildings, houses, clinics and so on — and then data, methods and discussion. By the way, I use LibreOffice for my presentation. So, what are natural hazards? Basically — this comes from the Health EDRM framework — natural hazards can be grouped into geophysical, hydrological, meteorological, chemical, biological and even extraterrestrial hazards. The Philippines is located on the typhoon belt and the Pacific Ring of Fire; this excerpt shows tropical cyclone density and tracks from 1884 until 2018, and up to now the Philippines is still experiencing the changes brought by climate change. So what are the natural hazards we are currently experiencing? With the help of our government and other concerned organizations and agencies — years ago there was Typhoon Haiyan, and recently Molave and Goni — I became much more involved during Molave and Goni through the mapping initiative with the Humanitarian OpenStreetMap Team. Thankfully, I feel that I have grown professionally and personally with our community. Here we have Mikko Tamura from MapBeks, who also taught me about the usage of Humanitarian OpenStreetMap, along with other people from the OpenStreetMap Philippines community. With this, we have also mapped unmapped areas of the Philippines, and right now we are contributing much more through OpenStreetMap. If any of you are not yet familiar with OpenStreetMap, let me share my screen to guide you through the Humanitarian OpenStreetMap tools. You log in with your account, and here we can edit or contribute what we know locally — information about our community. There are a lot of users who are currently using this for their research and other activities, especially for planning, and it is much more up to date in the context of our country, here in the Philippines. Here we can continue making buildings, houses, roads, and nodes or points of interest.
So, by the way, I am here now in Los Baños. Anyway, let me close this now; I want to share what we have worked on. We crowdsourced and processed this with the help of some of our experts in the OpenStreetMap community, especially from the OpenStreetMap Philippines community, who helped in processing the data and guided us on the usage of OpenStreetMap and on making the tags here. Basically, we have processed a lot of data from the DOH, and I also plan a lot of changes here, especially on getting the coordinates of these birthing facilities, most of which we have already processed. Here are the provinces of the Philippines, from NCR, Region 1, Region 3, Regions 4-A and 4-B, 5, 6, 7, 8, 9, 11, 12 and the Bangsamoro Autonomous Region in Muslim Mindanao. This one is much more focused on birthing homes; we haven't yet added the data from the hospitals in the health care system with their accreditation levels. By the way, I missed something that I should have shared earlier: the six pillars of health care systems — leadership and governance, financing, health information systems, medicines and technology, health workers, and service delivery. I am already touching on leadership and governance, service delivery, health information systems, and medicines and technology; here I am talking about the logistics of health care service delivery — the availability of and access to services in GIDA areas and birthing homes, from the basic clinics up to the more comprehensive ones, which provide more of these services for the people. Using this data I made this map — sorry, this one is from a project — although I haven't loaded it on a website or into apps and so on; I only use it for prototyping. As I said, the details here are the facility name, ownership, region, province, city or municipality, and barangay — right now we use barangays, whereas other countries use counties and so on — plus the address, contact number, email address, website, the latitude and longitude, and the PhilHealth accreditation for the insurance system. And this is part of the visualization of the processed data: here we can see the availability of services across the Philippines — this is a sample. Then comes the further visualization. As part of my personal projects, I used Datawrapper here, with the values of how many birthing facilities are available in each area — in the municipalities under each region — and that count is represented by the color when you visualize it. You may now be wondering how I did it: we can count the facilities per municipality or city and further process them, if needed, in LibreOffice. With that, we can then visualize
the data that we have processed. As you can see, we can change the colors here and adjust the values. I have uploaded it here, and there are a lot of premises to this. Moving forward: upon uploading the data you can visualize it and then publish it later however you want. I finished this one back in 2020, and I am hoping to integrate it further into the gender country profile that we are building right now. This one is the finished product that I made in the first quarter of 2021 — a visualization for the citizens using the DOH 2020 data, which I haven't really refined further since the availability of data is scarce, so progress here is slow. And with that I am finishing my talk — thank you, I will stop sharing now. Yes, thank you Rainier, and thank you for the good time management too. We have one question from the audience: how many birthing facilities have been mapped as of now, and what data gaps did you observe after gathering and mapping the data? Okay, so we have already mapped about 100, although I haven't published them publicly for public use yet; we hope we can refine them further with the help of our community partners and people in the community. And the gaps that I observed in gathering and mapping the data: the accuracy, and then the availability of the data — whether a given facility is still there, or still accredited by the Department of Health, which is comparable to what other countries call the Ministry of Health. So the main gap is the availability of data. That's all. Thank you for the questions. Thank you for the answer. We don't have any more questions from the audience, but do you have any closing remarks or a closing message for the audience, since we have some more time? For closing remarks, I would like to thank everyone from the community here in the Philippines, and the people behind this project and tool, for inspiring me. Rainier, if you can put your email address in the chat, some of the audience can also reach out to you. We'll have a seven-minute break and we'll be back at 8:30 — see you.
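A small sketch of the aggregation step described in the demo: counting accredited birthing facilities per region and municipality from the processed DOH list and writing a CSV that can be uploaded to Datawrapper for the choropleth. The column names and file names are assumptions; the actual list was prepared in LibreOffice.

```python
import pandas as pd

# Assumed columns: facility_name, region, province, city_municipality, barangay, ...
df = pd.read_csv("doh_birthing_facilities_2020.csv")

counts = (
    df.groupby(["region", "province", "city_municipality"])
      .size()
      .reset_index(name="n_birthing_facilities")
      .sort_values("n_birthing_facilities", ascending=False)
)

# Datawrapper can join this CSV to a Philippine municipality basemap by name or PSGC code.
counts.to_csv("birthing_facilities_per_municipality.csv", index=False)
print(counts.head())
```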
|
As the rapid changes in the landscape of the coastal communities in the Philippine archipelago are undeniably felt brought by the strong typhoons from the Pacific ocean due to the rising global temperature or climate change. This study aims to visualize the accessibility of birthing facilities in the Philippines using the Department of Health’s (DOH) 2020 data that may be used on existing and open frameworks for climate adaptation and disaster adaptation considering that the Philippines ranks 5th among ASEAN nations based on a smart city analysis by the Innovation Cities program. This can be attributed to weak data infrastructure for centralized health systems and the lack of open data and access to information brought about by policy gaps in the National Freedom of Information act on disclosing public information that eventually affected the economy. Furthermore, this study aims to organize activities in various communities in the country on improving city planning and operations can make the Geographical Isolated and Disadvantaged Areas (GIDA) more accessible to government services and make the local government responsive to emerging needs of the population using the Sendai framework where the needs of improvements in maternal mortality in the country have largely stagnated ever since the early 90s.
|
10.5446/57437 (DOI)
|
Hello, Rob. Hello. Yes. Hello. Yeah. So we'll get started with the next talk. I'd like to introduce Isam Al Jawarneh — apologies if I pronounced that correctly. He received a PhD in computer science and engineering from the University of Bologna, Italy, in 2020, and he's now a postdoctoral researcher at the University of Bologna. His research interests cover many aspects of big data management and data science for highly dynamic applications in smart cities and urban informatics. He's also an expert in Azure cloud and edge computing. So thank you, and take it away — I'll add your slides to the screen. Great. Yes. Thank you, Rob. Hello, everybody. I'm Isam Al Jawarneh, a postdoctoral researcher at the University of Bologna, Department of Computer Science, in Italy. Thanks a lot for attending my presentation, and thanks a lot to Microsoft for inviting me to this online edition of the Free and Open Source Software for Geospatial conference. It's an honor for me to speak here today. I am presenting cloud-based geospatial open systems for mitigating climate change: research directions, challenges and future perspectives. At the end of the presentation — and this is really very important — I will provide you with a link to my open source code that you can easily deploy on Microsoft Azure. This is a short outline of my presentation today. I will start with an overview and a motivating scenario, explaining why the joint analysis and processing of georeferenced mobility, climate and meteorological data is pivotal in today's smart city scenarios. I will explain some of the challenges that are hindering this joint analysis with the current open source cloud-based systems in such a multi-domain scenario. After that, I will show a novel architecture that we are working on here at the University of Bologna and have deployed on Microsoft Azure by tweaking some of the open source geospatial systems that are currently available. Then I will conclude the talk with an example of a future architecture that features edge computing — specifically on Azure — in addition to cloud computing. So, imagine being a daily city dweller walking through the different areas of a metropolitan city that is heavily trafficked by several kinds of vehicles, which simply bring more pollution to the air: pollutants like particulate matter and CO2 emissions from those cars. Those emissions include the dangerous particulate matters PM10 and PM2.5, which can easily wreak havoc on the health of any human being. So the city management and planning departments want solutions that allow them to study the relationship between vehicle mobility in the metropolitan city and climate and meteorological data. There are many ways of collecting climate and meteorological data, including in-situ sensors and moving sensors. This data is normally georeferenced, containing the coordinates of the locations where it has been collected or the locations it represents — for example, a value of PM10 at a specific location. At the same time, massive amounts of human and vehicle mobility data are arriving in fast streams at spatial stream processing systems, as I will explain in the next slides. So the task is to be able to join mobility, climatological and meteorological data to exploit a shared view that may help in better urban planning aimed at mitigating climate change.
Namely, the final aim is to protect the population's health in the long run — to protect them from emissions like particulate matter. We want to do interactive visualization, like heat maps showing the areas that are most polluted in real time together with mobility statistics — a joint analysis of climate and meteorological data with mobility data. We want to be able to build mobile applications that help daily lightweight city dwellers — those using their bikes or motorcycles, or simply walking — to avoid street paths with heavy pollution. At least in this scenario, we have some spatial queries that we need to perform: mostly spatial proximity, spatial joins, spatial clustering and spatial statistics, or what is so-called geostatistics. And the ultimate goal is achieving a specific list of quality-of-service goals: we want a system that achieves low latency and, at the same time, high throughput. So this is simply an overview of what we want in order to bring joint meteorological, climate and mobility data analytics into reality. We simply want to join, as you can see in the figure, georeferenced meteorological and climate data with mobility data — vehicle mobility and human mobility — using the geometric coordinates from both datasets to join them into a unified view. Imagine then being able to answer advanced, complex spatial queries, such as selecting the top five neighborhoods where the particulate matter is greater than a specific value and the mobility statistics — say, the count of vehicles in those areas — are greater than a specific value. Without joining the meteorological and mobility data, this kind of query is impossible. We need a little background here. As we said, we need to perform some geospatial queries such as proximity queries and containment queries. A proximity query asks for the set of data that lies within a specific range of a point of interest; it can be solved with a test, or predicate, in geospatial analytics known as point-in-polygon (PIP), and I will show how proximity can be solved with point-in-polygon. The containment, or inclusion, query specifically needs point-in-polygon to be solved: it asks for all points that are contained within the premises of a polygonal geometric shape, regularly or arbitrarily shaped. Normally the ray casting algorithm is used for solving this kind of query — the PIP test required for containment. But ray casting is expensive. Why? Because for every point you want to check, you cast a horizontal ray and count how many times it crosses the polygon boundary: if the count is even, the point is outside; if it is odd, it is inside. Imagine doing this for very big mobility or meteorological data — it is very expensive. So, as I said, containment or inclusion queries can only be solved by PIP, point-in-polygon. But proximity queries can also be solved using distance calculations — geometrical distance calculations such as the Haversine formula, if you are aware of it. It's just a simple formula for calculating the distance between geometries.
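A plain-Python sketch of the even-odd ray casting point-in-polygon test described above; production systems use geometry libraries and spatial indexes, but the underlying logic is the same. The sample square and test points are made up.

```python
def point_in_polygon(x, y, polygon):
    """Even-odd ray casting: cast a horizontal ray from (x, y) and count
    how many polygon edges it crosses; odd = inside, even = outside.

    polygon: list of (x, y) vertices of a simple (non-self-intersecting) ring.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square), point_in_polygon(5, 1, square))  # True False
```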
But apart from that, we can also solve proximity queries using PIP, and this is a simple algorithm for doing so. You can see that even this problem can be transformed into a containment-like query. First, with the range specified, we overlay a circle — or circles, if we have loops or holes, as you can see here — and then overlay a minimum bounding rectangle that encloses the circle. Thereafter, we apply the PIP predicate to find which points really fall within those minimum bounding rectangles. For sure we will have false positives, because the rectangle is not a circle — it just encloses it. So, to discard those false positives, we apply a formula like the Haversine formula, which is expensive; but at least here we have performed a filter before applying the Haversine formula, so we are able to reduce the latency. Okay, so what we need here is a robust, strong, cloud-based geospatial system that is able to run in parallel deployments in the cloud, such as Microsoft Azure HDInsight. This is a partial landscape of some of the current promising open source cloud-based geospatial processing systems. From here I will be selecting Sedona, which was previously known as GeoSpark, and I will also choose GeoMesa. GeoMesa is a very interesting framework as an open source cloud-based geospatial system because it features filter-and-refine; I will explain what filter-and-refine means for spatial join, because spatial join is a very essential operation in geospatial analytics in the cloud. So the desired features we need for our scenario are, at least, a system featuring filter-and-refine — because it is cheaper than other spatial join methods, and our task is joining two georeferenced datasets. We also need SQL-like support; as you can see, GeoMesa has SQL-like support. There is also another interesting open source platform, Spark Magellan — a small library, actually; I have been using it for years. And we need a system that supports approximate spatial query processing in the cloud — spatial approximate query processing is like spatial sampling; I will come to that in the next slides. But what are some of the challenges we have in the scenario described here, the joint processing of climatological and mobility data? Most importantly: how we transfer the geospatial data — any georeferenced data, and I am referring here to vector geospatial data — because it is parameterized into pairs of longitudes and latitudes before being able to move freely throughout the network. Why do we do this? Simply to reduce the transfer time and the upload time to the cloud. But this does not come for free, because the real geometries are lost in this way, and it is very hard and expensive to reconstruct them — because we then need the point-in-polygon test. So imagine you have points like this. This is the geospatial data moving around the network; on its own it means nothing. We have longitudes and latitudes, and we have the embedding area here. The task is to check to which polygon each point in this parameterized set — this parameterized matrix of longitudes and latitudes — belongs.
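A sketch of the proximity-query strategy described at the start of this passage: filter candidate points with a cheap bounding box around the query circle, then refine with the more expensive Haversine distance. The 0.5 km radius, the degree-to-kilometre approximation and the sample points are made up for illustration.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def within_range(points, lat0, lon0, range_km):
    """Filter step: keep points inside the MBR of the query circle.
    Refine step: keep only those whose Haversine distance is within range."""
    dlat = range_km / 111.0                                    # ~111 km per degree of latitude
    dlon = range_km / (111.0 * max(cos(radians(lat0)), 1e-6))  # longitude degrees shrink with latitude
    candidates = [(la, lo) for la, lo in points
                  if abs(la - lat0) <= dlat and abs(lo - lon0) <= dlon]
    return [(la, lo) for la, lo in candidates
            if haversine_km(lat0, lon0, la, lo) <= range_km]

pts = [(-34.6037, -58.3816), (-34.6090, -58.3700), (-34.70, -58.50)]
print(within_range(pts, -34.6037, -58.3816, 0.5))
```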
And this is normally known as overlaying maps: you put a map over another map, or a matrix or array over a map — the embedding area. But doing that is not as straightforward as shown in the pictures: it requires a spatial join, and a spatial join is very expensive because it incorporates, as we said before, the PIP — the point-in-polygon predicate — which is an expensive geometric operation. So what some of the successful open source cloud-based geospatial systems opt for is adopting the filter-and-refine approach for spatial processing. It works simply, as shown in the figure, as follows: first, compute the MBR for every point in the data; then compute the MBRs for the embedding areas — the Italian map you have seen before; then perform an equi-join, which is cheap, to find which points fall within the embedding areas. This is the filtering stage. After the filtering stage you will have false positives, because what you are performing is basically what is called an MBR join — you are joining on the minimum bounding rectangles. So you then use the expensive ray casting algorithm to exclude those false positives, and this is the refinement stage. I will not go into more details of this algorithm; you can find very interesting information about filter-and-refine in the literature that I will provide at the end of the presentation. At the University of Bologna we have designed several architectures for enabling this kind of scenario, the joint processing of meteorological, climate and geospatial mobility data. So, since a few open source systems apply the filter-and-refine approach — like GeoMesa and Spark Magellan — it is then possible, with a simple code tweak, to adapt those systems so that we can join georeferenced mobility, climate and meteorological data. As shown here in the figure — and this is an architecture we have proposed at the University of Bologna — we simply generate minimum bounding rectangles for each dataset independently, the mobility data and the meteorological and climate data, and then we can directly apply any open source system that adopts the filter-and-refine approach, such as GeoMesa, Spark Magellan or even Sedona. The result is a unified view of mobility and meteorological data. But without a system that provides a very fast join operation on georeferenced data, this would be very expensive. Okay, but actually this alone is not enough. What if the mobility data is arriving very fast and is characterized by temporally fluctuating skewness and arrival rate? Then approximate processing is very important — a must — and it is based on the fact that tiny losses in accuracy do not affect the correctness of decision making. That's it: approximate results with rigorous error bounds are acceptable for decision making in scenarios like ours, the joint processing of meteorological and mobility data. Imagine generating, for example, heat maps: the exact distribution of the data is not what matters most — you just want to generate heat maps with reasonable color codes.
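A single-machine sketch of the filter-and-refine idea described above, using Shapely: join on minimum bounding rectangles first (cheap), then refine the surviving candidates with the exact point-in-polygon predicate. The toy neighbourhoods and PM10 readings are invented; in the cloud systems discussed in the talk the same two stages run distributed over partitioned data.

```python
from shapely.geometry import Point, Polygon

# Toy "embedding areas" (e.g., neighbourhoods) and parameterized sensor readings.
neighbourhoods = {
    "A": Polygon([(0, 0), (4, 0), (4, 4), (0, 4)]),
    "B": Polygon([(5, 0), (9, 0), (9, 3), (5, 3)]),
}
readings = [(1.0, 1.0, 62.0), (6.0, 2.5, 41.0), (4.5, 3.5, 80.0)]  # lon, lat, PM10

# Filter stage: precompute MBRs so the cheap rectangle test runs first.
mbrs = {name: poly.bounds for name, poly in neighbourhoods.items()}  # (minx, miny, maxx, maxy)

def join(readings, neighbourhoods, mbrs):
    for lon, lat, pm10 in readings:
        for name, (minx, miny, maxx, maxy) in mbrs.items():
            if minx <= lon <= maxx and miny <= lat <= maxy:          # filter (MBR join)
                if neighbourhoods[name].contains(Point(lon, lat)):   # refine (exact PIP)
                    yield name, pm10

for name, pm10 in join(readings, neighbourhoods, mbrs):
    print(f"PM10 reading {pm10} falls in neighbourhood {name}")
```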
So we have built a very interesting framework for spatial approximate processing by tweaking some of the open source systems — specifically Spark Magellan, which is a library on top of Spark — and we have applied it to both Spark batch and Spark Structured Streaming for performing quality-of-service-aware spatial sampling. We have tested it and deployed it on Microsoft Azure; in the literature that I will provide at the end of this presentation you can find more information about this framework. In conclusion, open source geospatial cloud-based systems are still in their early stages and, in my opinion, need to be significantly improved in several directions — including geospatial approximate query processing — in order to unleash their full capacity for joint analytics of mobility, climate and meteorological data. Why am I saying this? Because even when using the filter-and-refine approach, you still have to tweak those systems to apply them to such scenarios. As a future research perspective, we believe that offloading part of the work — for example, the sampling for approximate query processing — near the data, and by near the data I mean to edge devices, can significantly boost the performance of cloud deployments for the integrated processing of georeferenced mobility and meteorological data. And here is a very interesting framework I am proposing, deployed on Microsoft Azure. Look at this example: we have two samplers hosted in containers deployed on Azure IoT Edge — a mobility sampler and a meteorological sampler — each sampling a portion of the data and sending it to Kafka brokers deployed on Azure, which then use a Kafka–IoT Hub connector to feed the data from Kafka into an Azure IoT Hub. The data is then available for consumption by a Spark streaming cluster deployed on an Azure HDInsight cluster. So by onboarding the sampling onto the Azure edge, we reduce the data upload and the processing required by the cloud. At this link you can find my code, which is open source, for cloud-based spatial approximate processing, with very clear instructions to deploy it on Microsoft Azure. It is deployed on a Kafka HDInsight cluster with a Spark HDInsight cluster, communicating directly in a virtual network, where data streams arriving through a gateway are forwarded into the network. Finally, I would like to say that we are open to collaboration: our group, the Mobile Middleware Research Group at the University of Bologna, is open to research collaboration on any of the areas I have presented today. So please feel free to contact me, Isam Al Jawarneh, or Prof. Luca Foschini from our group. And if you are interested in some of the topics in this presentation, I advise you to have a look at this list of the relevant state of the art for more information.
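A rough sketch of the kind of spatially-stratified sampling that could run at the edge before upload: bucket incoming georeferenced records by grid cell and keep a fixed fraction per cell, so the reduced sample still preserves the spatial distribution. The cell size, sampling ratio and synthetic records are arbitrary choices of mine; the QoS-aware sampler in the cited work is more elaborate.

```python
import random
from collections import defaultdict

def spatial_sample(records, cell_deg=0.01, ratio=0.1, seed=42):
    """records: iterable of dicts with 'lat' and 'lon' keys.
    Returns roughly `ratio` of the records, sampled per grid cell (spatially stratified)."""
    rng = random.Random(seed)
    cells = defaultdict(list)
    for rec in records:
        key = (int(rec["lat"] / cell_deg), int(rec["lon"] / cell_deg))
        cells[key].append(rec)
    sample = []
    for bucket in cells.values():
        k = max(1, int(len(bucket) * ratio))   # keep at least one record per cell
        sample.extend(rng.sample(bucket, k))
    return sample

data = [{"lat": -34.60 + random.random() / 50, "lon": -58.38 + random.random() / 50,
         "pm10": random.randint(10, 90)} for _ in range(1000)]
print(len(spatial_sample(data)), "of", len(data), "records kept")
```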
Thanks a lot to Microsoft for awarding me the AI for Earth Microsoft Azure compute grant, in the form of credits for resource consumption, for my project titled "Supporting highly efficient machine learning applications for reducing the impact of climate change on human health in metropolitan cities". Thanks a lot for attending my talk, and that's it — it's question time if you have questions. Awesome, thank you so much. Yeah, it's really interesting — you know, I was talking earlier about shoring up the foundations of how we can access data and combine different data types. The fact that these sorts of distributed spatial queries and joins are still being worked out speaks to the amount of work we need to do to be able to combine this type of mobility data and climate data, and how important that is. So thank you so much for presenting. I don't see any questions in the chat, so we'll end the session there, and I just want to say thanks again to all our speakers — we'll catch you again in the afternoon session. Thanks so much. Thank you, thank you so much. Bye bye. Bye bye.
|
Current research is focusing extensively on building Cloud based open source solutions for big geospatial data analytics in Cloud computing environments. Massive amounts of geospatially-tagged movement and micro blogging data are collected and analysed regularly. Nevertheless, movement data per se is insufficient for uncovering the possibilities for decision-informing analytics which could help in reducing the undesirable effects of climate change. For instance, answering advanced queries such as 'what are the Top-5 districts in Buenos Aires capital city in Argentina in terms of vehicle mobility data where the index of Particulate Matters PM10 is greater than 50'. Other equivalent queries are required for assisting the strategic decisions regarding health-focused smart city policies. For instance, for insightful analytics that help municipalities in designing future city infrastructures that prioritize the health of citizens. For instance, by lowering the number of vehicles that are allowed entering into highly polluted zones in peak hours of the day. In addition, this information is useful for designing mobile interactive geo-maps for city lightweight dwellers in order to inform them which streets to avoid passing-through during specific hours of a day to avoid being subjected to high-levels of vehicle-caused air-borne pollutants such as PM10. However, answering such a query would require joining real-time mobility and meteorological data. Stock versions of the current Cloud-based open-source geospatial management systems do not include intrinsic solutions for such scenarios. Future research frontiers are expected to focus on designing geospatial Cloud open-source systems which allows integration with other contextual data. In this talk we will show case some of the few available Cloud-based big spatial data management frameworks and how they can be utilized as springboards for further development so that they become mature enough to support the decisions that aim to mitigate the mobility-caused climate change problems. We will walkthrough an example that shows how we could tweak an open-source geospatial framework for answering such multi-domain queries. We will conclude the talk with short discussion of open research frontiers in this direction. For instance, because the arriving data which needs to be ingested is huge , approximate query processing should be considered a priority, thus summarizing the data (for example histograms), could be even before forwarding it through the network to a Cloud-based deployment (for example, injecting samplers on the Edge devices near the source of data).
|
10.5446/57438 (DOI)
|
Hello everybody and welcome back in the Bariloche room. Our next speaker is Cristina Vrînceanu. She's a postgraduate researcher with the University of Nottingham, where she works with remote sensing. But today she's going to speak on behalf of geo-spatial.org, which is the Romanian chapter of OSGeo. Her topic is community mapping of the COVID-19 pandemic in Romania using FOSS. Unfortunately, due to a scheduling conflict, in fact she's chairing another session in parallel, she will not be able to present this live. So I have a pre-recorded audio that I will play for you on this topic. Hello everybody, my name is Cristina. I'm a PhD student with the University of Nottingham. And I'm talking today on behalf of geo-spatial.org and its volunteers. And our presentation is going to be about how we used FOSS to track the COVID-19 evolution in Romania. So first of all, I would like to introduce our team. We are 32 people in this team that created the platform, the COVID-19 platform. Some of us are volunteers from the geo-spatial.org organization, which is an OSGeo chapter. Some of us are just volunteers that wanted to help during the initial stages of the pandemic. And everybody came together after the call that we had in March 2020 when we started the platform. It was very obvious with the situation that was happening back then that we were going to have a pandemic situation, so we were going to go into a lockdown, based on what you could see in Italy or the rest of Europe. So a lot of us wanted to help somehow. And since we didn't know exactly how to do it, because we were not medical experts, we came together to help with the skills that we knew we have. And those are mapping and additional skills. Some of us contributed to communicating, some of us contributed to just collecting the data, creating graphs, technical skills, setting up the platform and the technical background, or just advising with different information about health and so on. So it's just the 32 of us. Some of us are still doing it. Some are contributing less or have departed, but it's been a very collective effort. And to give you some insight into what this effort means and why we started everything and how we navigated all this period, I'm going to give you a quick trip through a timeline of the COVID-19 pandemic in Romania, so the most important steps that we encountered. The first one is when we had the announcement of the first Romanian COVID-19 case. That happened on the 26th of February 2020. It was already after Italy was in a very bad state. So that was our first case. We were expecting it. And we knew at that point that it was going to come to Romania as well, and we would go into restrictions, we would go into a certain state of emergency, alerts and so on. So that is actually what prompted us to think about contributing. And as I said, we are not medical advisors, we are not medically trained. Our training is in mapping. So we started to put together this idea of creating a mapping platform, also because we were looking at what was happening worldwide, all the platforms that popped up mapping the cases and so on. We were looking at platforms that were completely open and platforms that were not showing a lot of data, governments that had different approaches and so on. And we were also thinking of what the approach of the Romanian authorities would be. Up to this point, we could actually locate every single case with a lot of detail.
We knew the names of the people, we knew who they got in contact with. And we could actually do this very nice graph of relationships between the different cases. But obviously that level of detail was lost once the cases were increasing. And indeed the cases did increase, and that prompted the WHO Director-General to declare COVID-19 a global pandemic. And that also prompted the Romanian authorities to declare a state of emergency once the number of cases grew uncontrollably. That was on the 17th of March 2020 for Romania. And it was not only a state of emergency and a lockdown, a national lockdown, but it was also a militarily imposed lockdown. And that was very important, because if up to then we could learn about the cases, we could learn details about their names, we could learn details about where they had been, these case reports started, from all these little details, to become less detailed. Let's say we knew that there was a case, we knew the gender of the person, we knew the age and so on. At some point we knew that they were institutionalized, they were in a hospital and then they were moved to another place. And that was already a little bit blurry. Well, we went into the state of emergency, and on the 19th of March 2020 we went into local secrecy. What local secrecy means is that all this detailed data that we had up to that point, that we were collecting either from the media or from local authorities, so local health authorities, local administration and so on, which was at the county level, we could not collect anymore. The military administration said stop, and we went to the national level, and we would only receive once per day a summary report in which we got only one number at the national level: one number for deaths, one number for cases, one number for recoveries. That created a huge gap, because we couldn't map it anymore, we couldn't have this detail at county level. And besides that, we couldn't locate the cases, we couldn't do that relationship graph anymore. We couldn't follow what was happening to the cases. We didn't know if these were fresh cases from the day before, or cases that got reported one week ago and were basically delayed up to now. And for some of this data, we couldn't find it anymore and we still can't find it. So even up to now we don't know where those people vanished. And besides that, we were forced to put together all these mechanisms to find this data. So we went into looking into a lot of data sources, what we could find from local authorities, what we could find from the media. We had a tool that helped us to scan the media and so on, but it was a lot of manual labor. It was really helpful, the fact that Romania and GDPR have a very strange relationship, so sometimes you would get some information in the media that you would not get otherwise, but anyhow it created a huge gap in the data set. We got back to details on the 2nd of April. The secrecy didn't last too long, but we still got truncated details. Obviously they didn't return to the same level of information as before. And furthermore, all the information that was delivered was extremely unstructured. So besides the fact that we could not find all the information, it was incredibly unstructured, coming from different authorities in different formats. Sometimes they were just written notes and sometimes it was just PDFs or variously structured Excel files and so on. So that posed further issues. On the 15th of May, finally the lockdown ended.
The national lockdown ended. Romania kept a state of alert, which is still ongoing. And from that point onwards, a string of chaotic measures, or measures that you could find everywhere else in the world, started. So basically we had these times in which we got either schools opening, closing, restaurants opening, closing and so on. And the most notable moment was the electoral moment in which we had to go to voting. And that was very interesting from the point of view of the authorities, because they relaxed the measures right before and the restrictions came back afterwards. But you could definitely see a rise in the number of cases. We got to another milestone this year, actually in February, when they published the first open data set, which came from the authorities directly. So if up to that moment we had some platforms that were getting some data from the authorities, we had never got open data from the authorities, and you've seen all this secrecy and so on. The 15th of February was a milestone because we got this first open data set, and we actually got to discuss it a few days later, in March, at the Open Data Day. Because this data set was not perfect. It was full of unstructured data, let's say. It had a lot of shortcomings, and the citizen initiatives that were mapping the pandemic, the NGOs that were involved in this, got together with the government at this first discussion about open data and about this data set specifically. And we expressed our additional needs, what we would like to have in this data set and what it should minimally look like for us to use it in a very streamlined way. This got very political very quickly and it didn't last long. And the health minister, the one that initiated all this open data publishing, got dismissed. And that came with the fact that for a while the data set was not updated anymore. So even the short glimpse that we had into what data the government has disappeared quickly. Coming back to our platform, I want to reiterate that this is a collective effort. It's community volunteering. We have 32 people that brought into this a lot of skills, diverse skills. And it's not only us, the mapping NGO, the geospatial NGO, but also people that had no geospatial knowledge or training, but got together with us and are collaborating on producing this platform and this data. As I said, we're looking at multiple data sources. We looked at local authorities. We looked at national authorities. We have this communiqué, this report that we get from the authorities, but we also look at different other data that we could use to get some more insight. Because of the nature of these data sets, the fact that they're spread among different sources, they also come in different formats and so on. So sometimes it's a lot of unfiltered and unstructured data, and we have to do all the filtering, manually, or automatically where it's possible. Sometimes, for example, in reports you had a PDF on which we were unable to perform any character recognition, so we could not extract the data automatically. We had to do it manually, and so on. And because of all these struggles and all this secrecy from the government, we issued this open data manifesto, which I put here in the presentation. And this open data manifesto is basically what we intended to be an outcry from the citizen initiatives towards the government.
And it had some echoes, as you've seen: they did something in the end, but it's not perfect. We do this collection of data every single day; since last year in March we've been collecting data daily. Daily covering the media, daily collecting data and daily trying to make it usable and to curate it in a way that it can be used by other people. We try to constantly update and improve our platform. That means that it's not only about the data that we have and the way that it is structured and the way that we curate it and the way that we look for additional information, but also the way that the platform integrates all this information. We use open source technologies to build our platform on, and a variety of useful products go into it, from different technologies to different data sets that are additional to the health information that we get. And all our data is freely distributed, it's open access. We share it with the community, we share it with whoever wants to use it. And we also collaborate with some other initiatives to support them, and they support us. Our platform is based on open source technologies: we have a backend that is Node.js, PostgreSQL and PostGIS. In the backend we also collect the data through Google Spreadsheets, and all the hosting for our platform is generously supported by the Sage group from the West University of Timișoara, and that goes through Amazon Web Services. The front end, however, is on multiple technologies, because we had multiple skills. So some of us are more familiar with R and Shiny, some of us are more familiar with OpenLayers, Angular, so basically JavaScript, or Python and Plotly and so on. And we also have some maps that are based on Google Earth Engine with Sentinel data sets, and Carto, which generously supported us through their coronavirus program. As I said, all data is open for access, we distribute it through CSVs, and whoever wants to integrate it also has a dedicated API to do so for each data set. The data that we have inside: we have health data that we collect from authorities, which includes active cases, new cases, testing, deaths, recoveries, vaccines and so on. We also have data that we collect from press releases or from the news, from the media itself, so we had that tool that was developed by Casa Jurnalistului, which translates to the Journalist's House, an independent journalism outlet. They helped us with this tool that helps us to look through a lot of news sources and other media for additional information. We collaborate with Dragoș from graphs.ro, a statistician, to calculate the R rate. We also have the health infrastructure, marginalized communities and their access to health infrastructure, and some European context data. Other data that we include is European mobility, to understand the mobility patterns, points of interest, other mobility data that is national from Apple, Google and Waze, and air quality that we correlate with Sentinel-5P from the Copernicus program and Aerlive, which are ground-based sensors. We looked at voting and elections data, public interest in COVID-19, sea water temperature for the summertime, and basically all the indicators for impact, which are agriculture, tourism, demographics, real estate, automotive industry, economy, communication and media, so we could actively monitor how COVID is impacting all these sectors during the pandemic in Romania.
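As a hedged illustration of how the openly distributed CSV exports mentioned above could be consumed, here is a minimal Python sketch; the URL and column names are hypothetical placeholders, not the platform's actual endpoints.

```python
import pandas as pd

# Hypothetical URL and column names; replace with the actual CSV export
# published on the geo-spatial.org COVID-19 platform.
CSV_URL = "https://example.org/covid19/cases_per_county.csv"

df = pd.read_csv(CSV_URL, parse_dates=["date"])
latest = df[df["date"] == df["date"].max()]           # keep the most recent day
top5 = latest.sort_values("confirmed", ascending=False).head(5)
print(top5[["county", "confirmed"]])
```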
So we could see through the data, when we had different restrictions, what kind of impact they had, or when we had an increased number of cases, what kind of impact that had. We got some media support as well. We got into some national news outlets; sometimes they just reported using our platform, sometimes they embedded it and collaborated with us. We also got into international news aggregators and so on, got some support from the Twitter community, and we even got into research: there are research articles that use our platform to get some insights and help their research. I'm just going to quickly take you through a demo so we can see our platform. So this is the dashboard when you enter. We have the number of confirmed cases, active cases, the new cases in the last 24 hours, recoveries, deaths, county level, locality level, so basically very granular data about the number of cases, county level cases and quarantine areas. All this is summarized in these graphs here. We also have an English version that is not complete, though. We have some maps on different themes. So this is health infrastructure, we have voting presence, European context; these are supported by Carto. We have this map of NO2 concentrations. And we also have a vaccination map and news statistics related to health indicators, health variables, and also environment, public interest and so on. These are all fairly unstructured, just because they come from different people, everybody contributed in a certain way, but we are working to structure them. And finally, we have these impact graphs that measure the impact of COVID-19 on Romanian society in general, so different indicators, from communications to population and so on. To conclude, I'm very thankful that you followed this presentation. If you have additional questions, we are very open to answer them. We've got the contact here. And basically, we're looking forward to hearing your thoughts about what we're doing, if you have any suggestions for improvements and so on. I just want to thank again all my colleagues for doing this work for the last year, and I really appreciate that we got some good appraisal from the public for this platform. Thank you very much. I hope you have a great FOSS4G. Okay, it's really a pity that we don't have the opportunity to ask questions about this interesting presentation, which was actually the last one during this afternoon session. And for those of you who still feel like continuing, FOSS4G goes on even today: there is the social gathering, so I encourage everybody to show up there. Okay, have a continued nice day.
|
In this talk, we aim to present the geo-spatial community efforts of visualizing the geospatial spread of the COVID-19 pandemic in Romania since its beginning in March 2020 until the present time. Geo-spatial.org's COVID-19 app, built entirely on FOSS, delivers correct, complete and updated official information on the virus spread in Romania. Our platform contains several maps and graphs depicting the different dimensions of the pandemic in Romania, ranging from confirmed cases/deaths/healed patients and related statistics, to hospital infrastructure, quarantine zones and pollution/mobility indexes, as well as other impact indicators of this epidemic in our country. Unfortunately, the Romanian authorities have failed in communicating the evolution of COVID-19, resulting in numerous glitches in different reports. Thus, we have been volunteering our time to collect detailed information from the local/national media, compare it to the official reporting, sort it and deliver it in a structured manner. One year into the pandemic, the situation has not improved. With a notable, but too limited, exception, the Romanian authorities have not changed their opaque policy of interacting with the community. Thus, the COVID-19 geo-spatial.org team has continued to support the data mapping efforts and the open data community. The application is built using Node.js, PostgreSQL+PostGIS and R on the backend, and OpenLayers, Angular, charts.js, Plotly and D3.js for the frontend. The source code is on GitHub, MIT licensed. The infrastructure is supported by Sage Group on AWS and by Carto.
|
10.5446/57440 (DOI)
|
Her focus ranges from data analysis to visualization on the web. She has a lot of experience with open geodata, OpenStreetMap, artificial intelligence, PostGIS, PostgreSQL, Mapbox, everything. She loves to share knowledge through workshops and talks. Go ahead, feel free to start when you are ready. Thank you very much for that introduction. I also have my presentation ready. You can share the screen. Yes, so welcome everybody. I'm from the Netherlands, so for me it's good afternoon; for you, maybe it's good morning. As my introduction said, I'm a freelance web developer. I make a lot of maps. I make a lot of applications with maps in them. I notice a lot of things. Oh, my camera is not working. Can you still hear me now? Yes. I don't know what happened. I'll just continue. In my work field, I run into a lot of maps. I also see a lot of applications with maps, a lot of atlases, with a lot of data in them. I've noticed that there is some stuff that works very well, but there's also some stuff that I think we can do better. I work with a lot of web developers. I also work with the clients. I also work with the users sometimes. I notice that it's really hard to communicate with both of them. 'Let's put it on a map', which is a phrase that often confuses me as well, because I notice that non-GIS people use this phrase, 'let's put it on a map', and I notice that there's a lot of stuff behind it. Recently, somebody said, let's put these privacy issues on the map. I was like, how are you going to put that on a map? I realized they don't literally mean to draw something on a map, but they mean to draw attention to a subject. That's also what I wanted to do today. I want to draw attention to the subject of how we put stuff onto the map. I want to tell you how to design this in a cuddly, eye-blinking, digestible, and accessible way. Just to have a little bit of expectation management: I do not have the answer to this. I also think that 20 minutes is way too short to teach you how to do this in a good way, but I do have a lot of ideas. I do have a lot of experience with this, and I have a lot of examples. I also want to take you through some of these examples today. Examples that I came across in my work field, examples that I saw online, and there are going to be some very hands-on tips that you maybe could use when you make a web map. There are a lot of maps out there these days. I think also with the open source software available, with all the open data available, everybody is able to make web maps. I see web developers these days that don't do anything with GIS making web-based applications. There are a lot of maps out there. When I talk about this today in this presentation, I already used a lot of terms like a geovisualization, a geo dashboard, an interactive map, a map-based application, a cartographic interface, and I think these are all correct. I really want to talk about a digital interactive map-based application where the map is the main focus of the application. When we look at what you need to make such a map-based application, I came across this really great framework for amazing maps by Small Multiples, and they really define it in a nice way. They said you have the right context, you put relevant data on top of it, then we have some useful controls and we have intuitive interaction. I really loved it at first glance, but I think they still missed something. I define it like this. If you want to make a map-based application, there are three main components to it.
We have to do the cartography; maybe in this framework they call it the right context. We have to design a background map or a topographic map. We have the interaction design: everything that comes with the interaction on the map. What they miss, I think, is the front-end design. Next to a map we often also have a header. We have some buttons, we have some layers that we want to turn on. Maybe we have a search bar. Maybe there are some extra panels showing some extra information. We really need to think about how to design this as well. In the next slides, I'm going to talk about these three steps. The first two are going to be a bit short, and I want to pay a little bit more attention to the last one, because I really think that there is a gap in the interaction design. We heard already a lot at FOSS4G, I think, about vector tiles. I think with the coming of vector tiles, there is this whole new kind of cartography going on. It's so easy to style. These days, there is so much possible. We are able these days to make a unique map. We are able to customize our map to our applications, to the company that we are making the maps for. So why not do this? I once had a project where I was working with a front-end designer, and they literally just pasted an image of Google Maps into the design. I tried to find the file on my computer, but I couldn't find the design anymore. They literally just pasted in Google Maps, like, this is where the map is going to be. That was it. They designed the whole application, but they forgot about the map. Cartography is very important, and I think we can do so much fun stuff with that. For example, I really love this one, Stamen Design's Facebook map. They have this whole blog post on how they made the Facebook map, the cartography, and how they integrated the colors of Facebook into the map. As they say, the map sits more seamlessly inside the larger product experience. The colors they chose for the land, for the water, for the place labels really give the map a Facebook aesthetic. I think that's super important. This is how the map looks. Also, the blue has this Facebook kind of blue. Making your cartography fit with your brand identity is something really nice that we can do these days. Next to just making the map integrate with your application, we also have to look at the map style itself. One other tip that I really wanted to tell you today is: leave out all the clutter. Just visualize on the map what you need. For example, this application shows you municipality data, but they plot it on top of a topographic map. For showing summaries of municipalities, we do not need all the topographic labels. Maybe we could just not show a background map at all, leave out all the labels, like for example this application, which also shows municipality values, but they don't even show you a background map, they just show you the values. That was really, really short on cartography, because I think we all agree on how important that can be. Let's have a look at front-end design. Actually, I just have one tip for the front-end design: hire a front-end designer. This is from my own experience, because I developed an application last year, and I started out on my own and I thought, I can do this, and I can develop this web application that looks really nice, different, easy to use. This is what I designed. This is how the application looked.
Then we got a front-end designer in our team, and he really, really quickly turned everything around, and he made it into this. This is how the application looks today. It looks much more calm, and just to point out some stuff: for example, the header at the top, he really reduced that, so the header became really small. He said he wanted to give more space to the map, so there's actually more space for the map than in the old application. Then also he put the search bar in the left panel, which was here, to group together all the map functionalities that you use. You want to look for a location, you want to turn on a layer: all grouped together. Also, he reduced the amount of text in the application. For example, the bar on the right is where you could configure your map look. You can turn on labels, you can choose topographic labels, or you can choose administrative labels. You can turn on boundaries, and he just turned it into this really small bar at the bottom with less text. I also love that he made this transparent bar at the bottom, so you can still see it's part of the map, like you can configure the map. Putting stuff together which goes together, drawing attention to the stuff that is really important, these are really good lessons that I learned from him, and I would never have been able to do that on my own. Hire a front-end designer, but also don't let them do everything themselves, because the front-end designer would never have been able to design the map in the brand identity colors that it has now. So we came to my last part, but I think it's the biggest part, and I have some great examples again: interaction design. So what is interaction design? Interaction design is designing how a user is going to use your application, what they have to do to manipulate your application. We want to design this in a way that the tasks that the user has to do are easy, and they can do them with minimum effort. So it must be really easy to access, really easy to understand, and also don't make your user get frustrated. You want to give them a pleasurable experience. Maps have also become increasingly interactive, and I love that Robert E. Roth gave this definition of cartographic interaction, because I do believe that interaction on websites is slightly different than cartographic interaction, and cartographic interaction is everything that has to do with the map, everything that is going on on the map. I worked with web developers, and they don't know anything about this. They don't get the GIS part. And then I worked with GIS people, and they don't know anything about web design and interaction design. So there's a gap, and I think we should really put this on the map and draw some attention to it. So, cartographic interaction. And then there's a distinction between direct interaction and indirect interaction. The direct interaction is what Robert E. Roth describes as the panning and the zooming, and this is already well covered by the mapping libraries out there, so I'm not going to talk about that. It's all there, and it works fine. It's more about indirect interaction: so when I push a button, what happens with the map? When I use the search bar, what's going to happen with the map? So everything that the user can do to manipulate the map. And my first example, it might sound really obvious, my first example is action-reaction. If a user clicks on something, they want to see a reaction.
So I click on a button, I want to see something on the map, and this might sound really, really logical, but I've seen so many applications where this goes wrong. So for example, in this application, as you can see on the left-hand side, I'm pressing some layers, but nothing is happening on the map. And this is where your user could get frustrated. They could think, oh, I'm doing something wrong, the application is broken, and you don't want a user to feel like that. What is actually going on? For us as GIS people, maybe it makes a lot of sense: these layers are not available at this zoom level. Because it's such detailed data, it's only available when you zoom in. So as you can see in the movie, if I zoom in, then you can see the data. But a user, and especially people that don't know that much about GIS, might not even know this, and they get a frustrating experience using your application. So let's prevent this from happening. And I have some examples on how we maybe could prevent this from happening. So I really love this solution, and this is a map made by GeoGap, some friends of mine work there, and they made this panel at the top that literally says: zoom in to see the parcels. So they have an application where you can view all the parcels on the map, which are only available when you zoom in. And then they also added this progress bar, so that you can even see how much further you have to zoom in until you see the parcels. And whenever you see them, it disappears, because then you don't need it anymore. I think this is a really nice solution to give your user an indication of what they should do. So another solution, and this is one that I made myself, is that when you click on a layer which is only available at a certain zoom level, and you're not looking at that zoom level, the map automatically zooms in to that zoom level, and the data will show up. So that if a user zooms out, they consciously zoomed out, so they know why the data is disappearing again. And I also found another example, and this is an application that is not always working really well, but what they did well in one way or another is that when the layer is not visible at a certain zoom level, they also make it impossible to turn on the layer. I think it could be a solution, but maybe it's still not the most beautiful solution, because there's still something that's not working. So a user might still get the idea that the application might be broken, or that they did something wrong. So yeah, I hope that you can give me maybe more ideas on how we could solve this, because I think this is really a cartographic interaction problem that we should solve with smart solutions.
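To make the pattern concrete, here is a library-agnostic sketch, written in Python purely for illustration, of the decision logic behind the "zoom in to see the parcels" hint and the auto-zoom alternative; the names and thresholds are made up, and a real web map would implement this with its mapping library's zoom events.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    min_zoom: float  # the layer only renders at or above this zoom level

def on_layer_enabled(layer: Layer, current_zoom: float, auto_zoom: bool = False):
    if current_zoom >= layer.min_zoom:
        return {"action": "show_layer"}
    if auto_zoom:
        # Zoom in automatically so the user immediately sees a reaction.
        return {"action": "zoom_to", "zoom": layer.min_zoom}
    # Otherwise hint the user, with a progress value for a progress bar.
    return {
        "action": "show_hint",
        "message": f"Zoom in to see {layer.name}",
        "progress": round(current_zoom / layer.min_zoom, 2),
    }

print(on_layer_enabled(Layer("parcels", min_zoom=14), current_zoom=9))
```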
Another thing that I come across sometimes, and what really, really frustrates me, is what's going on with all these layers, and maybe you already saw it. There are tons of applications out there in which I can turn on all the layers at the same time. For example, here there is a lot of data about energy use, about the environment, and I can just click, click, click, click, and the map becomes this one big mess of data. And it gets especially confusing when you don't see anything happening on the map and you continue clicking, and as you can see, the legend is also just getting so big. Also another application where they even have fully covering data layers, so the data layers always cover the whole surface of the Netherlands, so it doesn't make any visual sense to show all the layers at the same time. And again, you have a bunch of legends on the right-hand side, and as a user you might get frustrated, you might get confused, and we don't want that to happen. So in my opinion, I would say: just show one layer at a time. So in my application, if a user clicks on another layer, the other layer turns off, and the legend changes as well, so you can really, really see that you're looking at a different data layer. And I often also get the question from users, hey, but I want to be able to compare layers, why can't I turn them on, why can't I turn on more than one layer? And I always reply to them, like, okay, if you want to compare two layers, which two do you want to compare, what is your purpose, and how can we solve it? And I never actually got an answer. So everybody asks for the functionality to turn on multiple layers, but nobody can tell me why they want to do it, or what they want to achieve by doing it. So another solution might be, if they want to know two or three attributes at the same time, for example of a building or a municipality, we could add a pop-up, or we could add a panel showing multiple attributes; plotting them visually on top of the map is not going to give them any more understanding of the data. So for me, it's showing one layer at a time. And that also solves this problem of the endlessly growing legend, because then we can also share one legend. And that got me started thinking about legends, and actually I almost thought, let's do a talk only about legends, because there is so much you can say about the legends out there. So I just picked a few examples of how we maybe can improve a legend on a map. And again, it's an interaction thing as well. I found this application where you have to press 'show the complete legend' before you can see the whole legend. And good user interaction design is about minimizing effort, or reducing the amount of clicks that a user has to do. So I think this is really a pity, because as you can see, there's a lot of space on the right-hand side. Maybe we could even put the legend on the map and show the complete legend. Especially when the legend is really important to understand your data, I don't get why we should hide it. And also with everything which is possible these days online, I think that using the interactivity of the web page can also be part of your legend. So why not make it a little bit more interactive? So I really thought this was a smart solution. It's about earthquakes, volcanoes and emissions, and they show just the three different symbols. And then when you mouse over them, you don't even have to click, it's just a mouse-over action, you get a little bit more information on what it means and what the size means. And I think this is a great way to solve your design: let the legend not take up so much space, but make it informative as well. And let's just take it a little bit further. This is the Opportunity Atlas, and I really, really love what they did in the top corner. So the legend is also a graph of the distribution of the data, and the black dot even represents the country that you clicked on.
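As a hedged sketch of that idea, the snippet below, assuming matplotlib and using made-up values, renders a legend that doubles as a distribution graph with a marker for the selected feature; in an actual web map the same thing would typically be drawn client-side, for example with D3, as mentioned later in the Q&A.

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up indicator values; in a real application these come from the data layer.
values = np.random.lognormal(mean=2.0, sigma=0.6, size=200)
selected = values[42]  # the value of the feature the user clicked on

fig, ax = plt.subplots(figsize=(4, 1.5))
ax.hist(values, bins=30, color="#4e79a7")   # the distribution itself is the legend
ax.axvline(selected, color="black", lw=2)   # marker for the selected feature
ax.set_yticks([])                           # keep it small, legend-sized
ax.set_xlabel("indicator value")
fig.tight_layout()
fig.savefig("legend_distribution.png", dpi=150)
```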
I do think it's a pity that they show a standard legend at the bottom, and that you really have to open up the distribution graph. But I really love the way that a legend can become so much more. So the legend itself can be used to explain the map, but it's also an explanation in itself. And this really makes a map, a web map, really interactive, really able to be manipulated and understood better. And why not even drop the separate legend and make the legend part of the application. And I really love this one. It's a map about refugees. And what they did here is they didn't just show two blocks of red and blue with the names behind them, but they integrated the legend into the application, with this bar at the top, which is at the same time also a toggle. So you can toggle from the origin country to the asylum country, and then the data changes. So the legend is choosing your layers. It's a toggle. It's so much more than just a legend, and everybody will understand this when you open up this application. It just makes so much sense. So why not even not make a legend at all, but try to integrate the legend into the design of the whole application. Yeah, and that was all of my examples. So in summary, there is so much possible these days when it comes to developing web maps and applications online. So really use it. There's no excuse anymore to say, oh, but I use the standard software and I cannot do this. Everything is possible to make a better user experience, a better web map for your users. And really think about how we can integrate the cartography with the front-end design and the interaction design as well. We really need to integrate all these things to come to one application that feels like an integrated product, that people like to use and really understand how they can use it. And work together with all these web developers that already have great knowledge about how to develop web applications. Work together with designers on how they design the applications, but always, always be part of it and keep in mind that there is this GIS way of thinking that we need, and that we always should think about the map and how the map is interacted with. So thank you so much for listening to my talk. If you have any comments, if you have any other ideas or any other great examples, just let me know. I'm curious about what you think about this. Hello, and thank you for your presentation, Niene. It's been really interesting and useful for all of us. We were praying that the maps we created wouldn't appear there. And yes, we have a lot of questions and a lot of interaction. I don't know if we can answer all of them, but we can start. Do you have information where I can read more on this topic? Oh, yeah. I would definitely advise you to read literature by Robert E. Roth, because he has some papers about cartographic interaction. Perfect. And he also did some great presentations about that. Perfect. Thank you. What can I do when I have a lot of layers to show? I always have that problem. Well, I don't think it's bad that you have a lot of layers, but just make sure that the user can just view one at a time. Unless there is a really good reason to show two at the same time, but then also make sure that if you show these two layers at the same time, the visualizations of the layers are adapted to each other. And I think that's the problem.
When you have like 50 or 100 layers, you cannot make sure that any combination of these layers together on a map will make visual sense. So then just don't do it. Don't show all the layers at the same time. Just make sure the user can click one layer at a time. Perfect. Thank you. And there is another one. Does the platform have online analysis, or just cartography, meaning the analysis is already finished? In my application, there is no analysis in the application itself. It's done beforehand. You can look a bit further into the data. I'm not sure which platform they mean, but I hope this is an answer. Yes, maybe. Okay. And here's another one. How about using the centroids of the polygons and showing those when you zoom out, and then turning the centroids off when the polygons become visible? Yeah, it's a great example of showing data at different zoom levels. That's also a really good point when we think about cartographic interaction: it's also about different zoom levels. Maybe you want to show different abstraction levels of your data. So when you're zoomed further out, you don't even show the polygons until the user zooms in. Yeah. Perfect. And how do I become a better web map application designer? What is your background? Do you know a training course that you could suggest? I don't know. I don't know any training yet. No, maybe you can do it. You can do that. Yeah, I studied GIS, but it's just my personal interest in how to make web maps, and I've just been busy with it for the last few years, just thinking about what it means to make a better application with a map, because there is nothing out there, and also through working with web developers that don't know anything about maps. I just noticed, like, oh, there's something that I should add to this, because they don't think about it. I really have to teach them, like, a map works like this. So don't forget that we have to zoom in, or that if you click, you center the map, and yeah. So I don't know, just through experience. I hope there will be some good trainings and courses on this, because I think it's really important, especially when a lot of non-GIS people these days start making maps. You can really see they forget about that. Okay, thank you. And what do you think of printing a map from a website to PDF? Well, yeah, I don't know. What do you want to do with it? Yeah. It's good. Yeah, of course you can do it if you want to put that on your wall. I don't know, you are free to do it. You're free. Here's another one. Love the idea of the zoom level bar. Is it easy, open source and available to use? I don't know if they have it open source. I do think you can make it pretty easily, because a lot of mapping libraries have these 'on zoom' events or 'get zoom' functions, so I think it would be really easy to build this component. Yeah, and it depends on which framework you're using, I think, also. Yes, that depends. Okay. And what tools do you recommend for creating such an interactive legend integrated into the application? I really love D3. So D3 is a data-driven documents JavaScript library, and I love using that for making these legends and combining them with the map. So I think you could do it with that, but I think you could do it with plain JavaScript as well. Okay. For me it's D3, everything is D3. Okay. Thank you. And the last one, because we are going to start the next talk. Do you have any favorite books or other resources on geo-data visualization, cartography and map user experience?
I have a lot of books about information visualization and graphic visualization, stuff like that, but not for UX in cartography yet. Okay. Okay. I don't think there's one book on this topic. So there are some about cartography, some about web development, some about interaction design, some about data visualization, and you read them all and you have to combine it yourself. Okay.
|
Too cluttered visualizations, confusing jargon, scary technology and communicating with the geo-data illiterate. For many people, GIS technology is hard to understand. So how do we design and build a map application showing a huge amount of geo-data accompanied by the elaborate functionality to discover it? Adding more and more buttons, layers, panels, pop-ups, legends, draw tools, scale-bars and GIS terms makes an application confusing, scary and technically hard to understand for the crowd. These days we have an incredible amount of (open-source) geo-spatial data, remote sensing data and insights, plus the tools to share them with the world! But how do we communicate with an audience that does not understand our jargon, like "CSV" or "polygon"? Let alone know that the term "heat map" means something completely different than what a cartographer means. I often find myself confused and tangled up in conversations where, as a GIS and cartographic professional, you speak a different language than your clients. Also, working together with other web developers and data designers shows me how we have to educate and guide them into our GIS world. Join the discussion about how to make user friendly, eye-blinking digestible, (non-technologically scary) geo-data visualization for the web! I will show you some examples and share my opinion to get you started on thinking about what it means to share our geo-spatial insights with the world. We will follow different perspectives of the geo-data illiterate and get inspired by data visualization techniques, web design and interaction design.
|
10.5446/57441 (DOI)
|
Hi, all. Welcome, everyone. Welcome to the Friday session in the Salta room. Today we will have a lot of presentations. Each presentation is divided in two parts: the speaker will talk for about 20 to 25 minutes and then we will have time for questions. The first speaker will be Benoît De Mezzo. He will talk about 3D urban data in QGIS. Benoît De Mezzo works for Oslandia on complex GIS architectures and on the development of additional functionality for QGIS. He's a software architect and has a PhD in image analysis. Benoît, go ahead. Feel free to start when you're ready. Great. Hello everybody. We'll talk about current work in progress on 3D urban data in QGIS. The next question is... Sorry, I got some echo in my ears, so I can't speak like that. I'll be back. Can you mute your Venueless and keep only the stream there? Can you repeat? Because I got an echo. Can you mute your Venueless and keep only your stream? Close your Venueless. Yes, close your Venueless; if not, we will have problems. Only the stream there. It's currently closed. It's working, wait. Back again. I'm so sorry. We'll talk about... Benoît, can you close your Venueless? Venueless? What's my... Yes, the browser tab that you have open there. Please close that. That's perfect. Now you are on... Okay. Sorry. I'm going to start again. So I want to talk about 3D in QGIS. I would like to talk about the need for 3D in QGIS, because GIS systems are mainly made for 2D maps. But sometimes we have some complex data from 3D datasets and we would like to display them to enrich the 2D maps, or maybe to have some kind of simulation, or whatever we can do to enrich maps and visualization. Complex datasets means we have to handle data from buildings, for example: the inside of buildings, the outside of buildings with interiors, maps, textures. And we also have to handle data like drill holes, as we had to do in another project where we had to represent drill holes in QGIS. These complex datasets may have some specific needs, because they are huge and heavy. They cannot be loaded at once in QGIS, so we have to introduce some smart data loading system. But by the way, they also have to be pickable and stylable. You have to be able to select an object with some attributes and change the style and the color or whatever you want. In case we have a too big dataset, the dataset may not be usable like that, so you may need to slice it into smaller parts, and we will project that slice to the view to inspect it and have a better understanding of the dataset. But there are so many formats and kinds of objects we can display in 3D: we can display point clouds, we can display meshes, we can display previously modelled objects. And we have to be able to handle all these formats. But each format has its own problems, its own difficulties to represent for this specific usage. For example, if you are looking at a point cloud representation, you have a lot of data but you cannot really use it, because you cannot pick an object, you cannot change a texture, you have no faces, you cannot do many things. An evolution of the dataset is using a mesh dataset, where here we have a wall-only mesh, a textured one. You can pretty well see all the buildings, but you cannot pick buildings, you cannot retrieve attributes of the buildings. The next stage is having fully modelled objects of this dataset, but in this case you have to have texture data.
It's a big work to go from a point cloud to this kind of dataset. But you have to handle all these kinds of datasets; in truth, you would like to handle complex datasets. So to handle all this kind of data, we need some format, some specification. The format has to be able to manage level-of-detail information: information for a given distance, then you go closer and you have some finer information, and then you go closer again and you get even finer information. We have also looked at IFC and the BIM format, but this is future work; we are still looking at how to do it. And now we will talk about 3D Tiles, the main format we are focused on and trying to integrate into QGIS. But what is 3D Tiles? 3D Tiles is a format specification made by Cesium for their own use, which means this is a format made for web applications. And as it is for web applications, if you want to display a big dataset, you cannot do it at once, you cannot load all the data at once. So that's what the 3D Tiles dataset specification is for. It's a hierarchical dataset: the whole dataset is described in JSON, and datasets can contain sub-datasets. So all the descriptions of the datasets are JSON, as I already said. But all the mesh data, the texture data, the 3D data are then defined in binary files in the glTF format. Also, as Cesium is only working in a web application on the Earth globe, they use a specific, not dedicated, but specific projection. This projection is Earth-centred, in metres. And to describe a point on the Earth in this projection, you need to have explicitly the three coordinates, X, Y and Z. But what about 3D and QGIS? QGIS has had 3D since version 3.0, around 2018. And QGIS was designed for 2D maps; it's not a well-developed viewer for 3D data. In fact, if you want to see your data and manipulate 3D objects, you have to go through the dedicated 3D view. If you want to see your 3D data in the main view, QGIS will project the Z away, in 2D, in a top-down view. But even in that second case, we only get some partial editing of meshes in 2D. All the work that can be done in QGIS in 3D is based on Qt5. Qt5 has a specific 3D engine, and all the work is done by Qt parts. But there are some drawbacks in using Qt in QGIS, because, as I said previously, QGIS is mainly made to use 2D coordinates, say longitude and latitude, plus elevation. In many parts of the code, the Z coordinate can get dropped, because they only consider longitude and latitude. In this kind of geocentric projection, that does not work: you cannot just remove the third component of the coordinate and be happy with that; you get a non-working point. So yes, there are some changes to be done in the core of QGIS. Also, there are many improvements that have been done in Qt6, and we have some drawbacks in using Qt5, the 3D provided by Qt5. But for now, the migration from Qt5 to Qt6 in QGIS is not in the roadmap, so we have to continue to work with the existing drawbacks. On top of Qt, you have all the software layers provided by QGIS, and there are many things that have already been done in QGIS for 3D. In the 3D view, you have the ability to do a selection of 3D objects. You can change the style; we have a view of 3D buildings with conditional styles. And you also have the ability to apply some textures to 3D objects. So there are many, many things that have already been done.
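Coming back to the 3D Tiles layout described a moment ago, here is a hedged Python sketch of how a client might walk a tileset.json hierarchy and decide which tile contents to fetch. It only illustrates the lazy-loading idea behind the format (a simplified, REPLACE-style refinement with a fixed error threshold); it is not the QGIS implementation, and bounding-volume culling and screen-space-error computation are left out.

```python
import json
from pathlib import Path

def tiles_to_load(tile: dict, max_error: float) -> list:
    """Collect content URIs for tiles that are detailed enough for the view."""
    children = tile.get("children", [])
    if tile.get("geometricError", 0.0) > max_error and children:
        # Too coarse for the current view: refine into the children instead.
        uris = []
        for child in children:
            uris.extend(tiles_to_load(child, max_error))
        return uris
    content = tile.get("content")
    if not content:
        return []
    # "uri" in 3D Tiles 1.0; some older tilesets use "url".
    return [content.get("uri") or content.get("url", "")]

tileset = json.loads(Path("tileset.json").read_text())
print(tiles_to_load(tileset["root"], max_error=16.0))
```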
And the main point that may interest us for the 3D Tiles support is that there is already an LOD system defined in QGIS to handle 3D datasets. So what do we need to do to integrate 3D Tiles in QGIS? To do that, we have to provide some way to load the dataset, the hierarchical dataset, and to map it onto the QGIS LOD system. We have to be able to properly handle the glTF files, the tile files, because this is where the mesh data and the texture data are stored in the 3D Tiles specification. As in QGIS we are not working on a globe but on a flat ground, we have to do some reprojection work: the data from 3D Tiles, which are defined in the EPSG:4978 projection, have to go to another projection, say EPSG:3857, which is still metric. Also, we have to bring some support to be able to use the styling and selection of objects that already exists in QGIS with the 3D Tiles support. So what have we done? We have created a QGIS enhancement proposal to propose to the community the integration of 3D Tiles into QGIS. We have built a 3D Tiles dataset loading with lazy support: we only load the tiles when they are needed. We can read remote files; you will see that in the demonstration. We have worked on glTF support, but we have some drawbacks; it's a very basic glTF support. And we have worked on fixing the reprojection from the geocentric system of 3D Tiles. Let me be more precise about what we've done about reading the glTF files. Currently, glTF files are supported by Qt 3D, so that's fine. There's a specificity in Qt 3D: it has a scene loader which can read complex 3D files, and it can read glTF files. It can extract from glTF files textures, meshes, materials, shaders. So that's great. To do that, it uses a specific library, which exists elsewhere. With this library we can load many, many file formats. But this library is not completely mapped in the Qt 3D scene loader. So, for example, we cannot load the binary glTF file format, which is used in 3D Tiles; that format is not supported. So we have to use another library to load the glTF files from 3D Tiles and save them in a format which can be loaded by the Qt 3D scene loader. Also, there are some other limitations. Currently, the scene loader generates entities from the glTF 3D entities, but it cannot read all the meshes. Multiple meshes can be defined in a glTF file; in the common case we have only one mesh per file, but it's not always the case, and when you have multiple meshes, only one mesh is used. Also, even if it seems the material and texture can be properly extracted, we cannot use them for now. So in the demonstration, we cannot show the texture extraction. Now I would like to speak about the difficulties of the reprojection of a mesh defined in the EPSG:4978 coordinate system onto the flat ground. For example, here is a dragon from the Cesium samples. If we put the mesh directly extracted from the glTF into QGIS, we see that this dragon is not well oriented. We have to fix the roll, the pitch and the yaw according to its position on the globe. After some computation, according to the position of the object on the globe, we compute a correction to apply to it, and then we can place it properly on the flat ground.
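One way to compute such a correction, sketched here with numpy and pyproj under the assumption that the model's offsets are expressed in the ECEF (EPSG:4978) frame around its anchor point, is to rotate them into a local East-North-Up frame at that point; this is only an illustration of the idea, not the actual QGIS code.

```python
import numpy as np
from pyproj import Transformer

ecef_to_geodetic = Transformer.from_crs("EPSG:4978", "EPSG:4979", always_xy=True)

def enu_rotation(x: float, y: float, z: float) -> np.ndarray:
    """Rotation matrix taking ECEF offsets at (x, y, z) into East-North-Up."""
    lon, lat, _ = ecef_to_geodetic.transform(x, y, z)
    lam, phi = np.radians(lon), np.radians(lat)
    return np.array([
        [-np.sin(lam),                np.cos(lam),               0.0],
        [-np.sin(phi) * np.cos(lam), -np.sin(phi) * np.sin(lam), np.cos(phi)],
        [ np.cos(phi) * np.cos(lam),  np.cos(phi) * np.sin(lam), np.sin(phi)],
    ])

# Illustrative anchor point only (roughly somewhere in western Europe).
R = enu_rotation(4201000.0, 168000.0, 4780100.0)
print(R @ np.array([10.0, 0.0, 0.0]))  # one model-space offset, now in the local frame
```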
Here's the explanation: if we take this dragon and open it in Blender, we can see that the dragon is zero-centred. It has no yaw, pitch or roll rotation; it's just flat on the ground and zero-centred. This dragon is fine, and we can use it directly in QGIS with the support we have provided. Here's the bad example. It comes from another tileset. We can see that it's not zero-centred; it already has a rotation and a translation baked in. This kind of object cannot be well placed in QGIS; it cannot be put flat, and we cannot compute the correction to put it flat on the ground. So what we need is objects centred on zero, and the rotation and translation must be defined in the tileset definition: we have a transform matrix there, and everything must be defined in it. So now, a quick demonstration of what we have already succeeded to do: the dragon from Cesium, a Japanese house, and some data from NASA. So here's the dragon. As you can see, the colour is due to the tile depth and not from the texture, as we cannot display textures. The closer we get, the more details we can see, and when we go back, the fine objects are unloaded to show the coarser object. The Japanese house is a more complex sample; we can see that when we get closer, we load the finer tiles. When we go to the roof of the house, we can see that the closer we get, the finer the tiles we load. And if we go outside the bounding box of the roof, we go back to the coarser version. If we go closer to the roof, we get the finer version. And I'll quickly play the last video, and we will see that we are able to load some kind of landscape provided by NASA. The closer we get, the finer it gets, and we see that the tiles are loaded properly. Okay, so thank you for your attention, and now it's question time, if you have some. Yes, thank you for your presentation, Benoît. It's been really interesting for all of us. And yes, we have a lot of questions from the audience, and I will put the first one here: are there ways to visualize networks using the 3D visualizer filters in QGIS? What do you mean by networks? You can currently visualize 3D data, so if you have 3D data of a building, you can go inside the data; for a cooling system, you will not see the outside of the building if you go inside the building. Okay, perfect. Here is another question: can other projections be used? No, because the 3D Tiles specification clearly defines that only the EPSG:4978 projection can be used. In the demonstrations of the Japanese house and the NASA data, we use QGIS with the native 3D Tiles projection, the EPSG:4978; only in the demonstration with the dragon do we use the other projection, where we reproject from EPSG:4978 to EPSG:3857. Sorry. Perfect. So we have a lot of questions; I don't know if all of them can be answered, but there is another one: what plugins do you recommend to work with 3D in QGIS? Which plugins are we talking about? Because 3D is already working out of the box with QGIS, so you don't need plugins. And the current work I've presented is not merged into the trunk; it's not available yet. Okay. And there is another one, about serving the tiles. It's a very good question. I don't know; we have not reached that step. But indeed, it would be nice to be able to serve these tiles. But this is not the purpose of QGIS; there are already systems which are able to provide such tiles. It's not the main purpose, but it could be a future feature. Perfect. Benoît, thank you so much for your presentation. You're welcome. So if anyone needs to contact you, you can do it through... I'll be on the chat if you like. Okay.
Through Benoît's chat, then. Perfect. So thank you so much. We are going to the next presentation.
|
3D data become more and more common these days. Sensors like drones or "streetview" cars produce large amounts of images used to create 3D models through photogrammetry. 3D editing software also eases the production of 3D models, especially for BIM. AI systems are beginning to be able to generate 3D objects of the world. It is now frequent for municipalities to acquire a full 3D dataset of all urban objects. These datasets pose a number of issues: how to visualize them efficiently? How to integrate these 3D data with other GIS data? How to make them available on the web? How to manage this data with my favorite GIS software? We worked on these issues in order to be able to manage urban data efficiently in QGIS. Based on the emerging 3D capabilities of QGIS, we explored the 3D Tiles format and implemented it in QGIS, so as to be able to visualize large amounts of urban data. In this presentation, we will present our work on urban data visualization, the state of the art in QGIS 3D visualization, and future work planned in this area. We will showcase the features with a real-world example.
|
10.5446/57442 (DOI)
|
And I'm a software engineer at Azavea, which is a geospatial software and services firm in Philadelphia. And I'm going to be talking about a dataset and some tools that we've created for detection and segmentation of clouds in optical satellite imagery. So this dataset is made up of Sentinel-2 tiles that contain clouds, and we've asked some professional labelers to label those clouds, and we've released that as a dataset. According to NASA, some 70% of the Earth's surface is covered by clouds at any given time. And in an optical context, obviously those clouds can reduce the usefulness of imagery. So being able to detect and/or suppress those clouds in an automatic way is something that is of value, and that's the motivation for this dataset. This dataset allows you to train machine learning models and/or test machine learning models to do just that task. So we've released this dataset to the public, and I'll be talking a little bit more about that later. But we've also developed some tools around it, including some models that we've developed, and we also have a tool for collection of cloudless mosaics at large scale that we've developed as well. So those will be some of the topics that I touch on today. So yes, the dataset that we have created consists of 32 unique Sentinel-2 tiles. And within our dataset, we have both L1C, that is the top-of-atmosphere product, and L2A, which is the surface reflectance or atmospherically corrected version of each tile. So we have 32 unique tiles, but there are 64 total raster files within our dataset. And those tiles cover about 25 unique locations, and various biomes are represented, like tundra and equatorial forest, cities, et cetera, et cetera. And we also have representation from all four seasons within our dataset. And these labels were created by our partners at the company CloudFactory, and they were able to create these labels using a tool called Groundwork, which is a fine Azavea product that we've developed. Just to talk a little bit more about the dataset, it is composed of rasters in GeoTIFF format, and the labels are vector labels. In particular, they are basically GeoJSON, but they're stored in the STAC format, the SpatioTemporal Asset Catalog format, which is a format that Azavea is excited about and a number of our products use. It's meant to facilitate easy interchange of geospatial data of all kinds. And all of these data are available in a freely available requester-pays AWS bucket. So okay. Okay. So here we're looking at the Groundwork interface that the labelers used to produce the labels. And I'm showing you this to show you both the interface as well as the labels themselves, sort of on top of and contrasted with the actual imagery. So this is one of the scenes from our dataset, and as you can see here, it's a pretty cloudy scene. And I can toggle these labels on. See here. Yes. And so these are what the labels look like that were produced by our human labelers. This is an entire Sentinel-2 tile, so this is actually quite large; I guess it's something on the order of 10,000 by 10,000 pixels. So you may not be able to get the scale of this tile from this view, but this is quite a large image, and it's been painstakingly hand labeled by our human labelers. So a lot of data there. And we have 32 of these. So here's another one. I'm just showing you this to kind of contrast it to the previous one. You'll notice that over here in this tile, we've got these puffy clouds.
Not sure what the technical name for them is, but that can be contrasted to this scene that contains these kind of wispy thin clouds. So just trying to convey to you the diversity of clouds that appear in the dataset here. Okay. And here's a third scene. And again, just trying to show you the diversity of the scenery here. This one again contains puffy clouds. It also contains like some highly reflective sand that's light in color. So if you would like to access this dataset, probably the easiest way is to clone this GitHub repository or clone or visit this GitHub repository that's listed here. Because there's a file within the repository called catalogs.json. And basically that contains the locations on S3. That gives the locations of where the imagery is and where the labels are. So let's take a look at this catalogs.json file. I'm not going to go into great depth about this. I just want to give you an idea of what we're talking about here. So yes, it's basically just a collection of dictionaries that kind of relate particular imagery with the corresponding labels. And once again, this is a stack, a spatio-temporal asset catalog. I've got some links to some resources at the end of this presentation that you can access if you want to learn more about this format. So as I mentioned, we've got L1C as well as L2A versions of all this imagery. This catalogs.json file only lists the L1C versions of these. But the L2A ones, you just simply replace capital L1C with capital L2A. And that's where the L2A versions of these images are. OK, so I can talk a little bit about the cloud model that we've built using this dataset. The meat of that is contained in a file called pipeline.py. And even if you're not necessarily interested in using our models, you might want to inspect this file because it gives you a concrete example of how to interpret stack labels as well as the correlation or the correspondence, I guess, between stack labels and the L1C and also the L2A imagery. So it may be worth your time to give this a look. And so now talking a little bit more about our models, we built our models using Raster Vision, which is another fine Azavia product. And this is our framework for geospatial machine learning. So if you've not heard of it, I recommend that you give it a try. So yes, we've produced models of two different architectures using our dataset. And let me talk about that now. So the first of these architectures is an FPN with a ResNet-18 backbone. So this is kind of a fairly traditional deep learning type model. And I mentioned that because I would like to contrast it with our other model type, which is Cheap Lab. Cheap Lab is a model architecture that we developed at Azavia. It is not a deep learning architecture. It is actually pretty shallow and pretty small. It is kind of designed to mimic the behavior of various normalized indices. Like you might recall that many of the indices come in the basic structure of band A minus band B over band A plus band B, that kind of shape. See what Cheap Lab is, is an attempt to learn that kind of family of functions. It's trying to learn that family of functions, except learn it in a way that's optimal with respect to the data that are given rather than being somehow prescribed or somehow developed from knowledge. The idea is to be able to train more generalization of those kinds of indices from data in an automatic way. 
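The learned normalized-index idea behind CheapLab, as described just above, can be sketched in a few lines of PyTorch. To be clear, this is not Azavea's CheapLab code, only an illustration of learning band-ratio style features of the form (A - B) / (A + B) where the band mixtures A and B are fit to the training data; the layer sizes, names and shapes below are made up for the example.

```python
# Illustrative sketch only (not the actual CheapLab implementation):
# learn generalized (A - B) / (A + B) indices, where A and B are learned
# linear mixtures of the input bands, then classify per pixel.
import torch
import torch.nn as nn

class LearnedNDI(nn.Module):
    def __init__(self, in_bands: int, n_indices: int = 8, n_classes: int = 2):
        super().__init__()
        # 1x1 convolutions act as per-pixel linear band mixtures
        self.mix_a = nn.Conv2d(in_bands, n_indices, kernel_size=1)
        self.mix_b = nn.Conv2d(in_bands, n_indices, kernel_size=1)
        self.head = nn.Conv2d(n_indices, n_classes, kernel_size=1)

    def forward(self, x):                # x: (N, bands, H, W) reflectances
        a = self.mix_a(x)
        b = self.mix_b(x)
        ndi = (a - b) / (a + b + 1e-6)   # generalized normalized-difference indices
        return self.head(ndi)            # per-pixel logits, e.g. cloud vs. background

# Example: 13 Sentinel-2 bands in, cloud/background logits out.
model = LearnedNDI(in_bands=13)
logits = model(torch.rand(1, 13, 256, 256))
```

Because the whole model is a handful of 1x1 convolutions, its parameter count stays tiny, which is consistent with the small on-disk size and CPU-friendly inference mentioned in the talk.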
So just to compare results, we have this traditional deep learning architecture, the FPN in the first row, L1C, and the second row, the results on L2A imagery, both quite similar. And both with recall precision and F1 scores in the low 90s. And that can be contrasted with Cheap Lab, where the recall is kind of in the low 80s or upper 70s. The precision is kind of in the mid to high 80s, and the F1 scores are in the low 80s. So Cheap Lab, at least in terms of these metrics, doesn't perform as well as the deep architecture. But the thing that is nice about it is that it's quite small. So just in terms of on disk size of the trained models, I think Cheap Lab models are typically around 40K, and that can be compared to these FPN Resnate 18 models that I think if memory serves are around 80 megabytes or so. So just the on disk size, I think, should give you an idea of the difference in the number of parameters. So a Cheap Lab is actually able to be used on CPU instances. So if you want to do some inference at large scale using inexpensive instances, that's sort of the use case for Cheap Lab. I'll walk through some of the results that we have, and then I'll talk a little bit more about that last topic a little bit later. Okay. So just this is just showing some imagery. So this particular image doesn't exist within our dataset. So this is not part of the cloud model dataset. Neither this tile nor this location is present there. And this is just showing kind of example results of the cloud detection here. I'll go into more detail about this in just a minute. Okay. So here is that tile that I just showed you, except this time it's in QGIS. So I've got some of the results loaded up here, and I can kind of compare and contrast the results for the deep model versus the Cheap Lab model, as well as an ensemble of both. So I'm not going to be scientific about this. I'm just going to kind of zoom around and show various things. So I can show the results for the ResNet 18. And you'll notice here that it generally does pretty good, except in this particular case, it does miss, it seems to miss some clouds here in the middle of the screen. It does do a nice job of appropriately not mislabeling some of these lighter colored buildings that are down here at the bottom. So those are the FPN results. Let me take a look at the Cheap Lab results. So you'll notice here that Cheap Lab is able to detect some of the clouds that the FPN missed. You'll see that go back and forth, like there'll be instances where the FPN will see clouds that Cheap Lab doesn't, and vice versa. So they both seem to have their respective strengths, as well as their respective weaknesses. So although the objective numbers that I showed you a couple slides ago seem to clearly show the FPN is much better, subjectively it's less clear. But one thing that's nice is that one doesn't have to choose one or the other. You can use an ensemble of the two. And we did actually seem to find that ensembling multiple FPNs and multiple Cheap Labs with each other produces nice results. Much of the time you do subjectively seem to hold onto the strengths of both while not necessarily inheriting the weaknesses of both. So that's nice. So that was my Greenwich Tile. Let me show you another one. Just trying to give you an idea of how things work on different kinds of clouds here. So this one is a little bit more interesting because we have some clouds over water and some wispy clouds as well. Here's the FPN results. You see some of these clouds being missed here. 
The Cheap Lab results show that those clouds are being picked up by Cheap Lab. And if you look at the ensemble, you'll notice that, yeah, I mean, it does a reasonable job of finding, I guess, maybe the inner parts of these clouds. If you're wanting to make sure that these clouds are removed, you can take some buffer around these areas of detection and maybe be reasonably confident that you'll be getting everything. Or another strategy might be to take the ore of the results of the two architecture types. So there's strategies for dealing with these, but that's just kind of a flavor of what you have there. So what are the applications of some of these models? Well, as I mentioned, Cloudless Mosaic is one application. And we have been able to make use of Cheap Lab to produce Cloudless Mosaics at pretty large scale, like we've done entire continents before. And we've had a reasonable degree of success with that. Yes, the tool that we have for that is called Cloudbuster. So basically what it does is it runs, it makes use of a Cheap Lab model, again, able to do that on inexpensive CPU instances. And it uses that to basically infer clouds in a number of tiles over the same location and then basically composite those together, the Cloudless portions, to produce Cloudfree Mosaics, or at least Mosaics with fewer clouds. So here's an example. And one thing that Cloudbuster is able to do is it produces a, so it adds an extra band within the result that it produces that basically allows you to track the origin of the pixels that it used to produce a particular image. So in this particular case, this is showing you the source pixels that were used to produce this composite image. And okay, so here are a couple more resources that you can access if you're interested. If you're able to obtain a version of this slide deck, you'll be able to click on these links. These are just a couple of links to some writing that I've done about the topics that I presented. So in particular, we have a blog entry here on our Cloud model, and we also have a blog entry here specifically on the Cloud dataset, walking through how to obtain that and how to use it. So thank you. Okay, thank you very much. James, we will welcome him now to the stream. Hello, James. How are you? I was a very, very nice and clear video presentation, so thank you for that. We have some questions for you. First, can users download the Cloudmasks to use them in their workflows? Yeah, certainly. There is, you can go to GitHub and inspect the file catalogs.json. In the chat, I've also provided a link to the slides which provide a link to that file. And within that, you'll find where those live on S3, and you can simply download them. They're in a Request or Pays bucket, so you're free to get both the imagery and the labels. That's great. Also, there are specific use cases where the L1C product is more appropriate than the L2A. That's an interesting question. I don't have any firm examples to hand, but one plausible example I can come up with is maybe if you're wanting to transfer a Cloud model to a different imagery type, it may be the case that L1C is more useful for you than L2A in some contexts. Okay, thank you. Have you also labeled Cloud Shadows? No, we haven't. That perhaps is a bit of an oversight. So it's possible that we may take a second bite at this apple and perhaps produce some labeled Cloud Shadows. We're also actively doing research into how to detect those shadows automatically. 
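To make the ensembling and the cloud-free compositing described earlier in the talk a bit more concrete, here is a small numpy sketch. It combines two per-pixel cloud probability maps either by averaging them or by taking the conservative OR of their thresholded masks, then builds a composite by picking, for each pixel, the scene with the lowest cloud score while recording which scene the pixel came from, analogous to the provenance band the speaker says Cloudbuster writes. This is an illustration only, not the Raster Vision or Cloudbuster implementation, and the array shapes are assumptions.

```python
# Illustration only, not the Cloudbuster code.
import numpy as np

def ensemble_masks(prob_a, prob_b, threshold=0.5, conservative=True):
    """Combine two per-pixel cloud probability maps into one boolean cloud mask."""
    if conservative:
        # OR of the two thresholded masks: flag a pixel if either model sees cloud.
        return (prob_a >= threshold) | (prob_b >= threshold)
    return (prob_a + prob_b) / 2.0 >= threshold

def composite_cloudless(scenes, cloud_probs):
    """scenes: (T, H, W, bands); cloud_probs: (T, H, W).
    Returns (mosaic, provenance) where provenance[i, j] is the index of the
    scene each output pixel was taken from, i.e. a pixel-origin band."""
    provenance = np.argmin(cloud_probs, axis=0)        # least cloudy scene per pixel
    mosaic = np.take_along_axis(
        scenes, provenance[None, :, :, None], axis=0)[0]
    return mosaic, provenance
```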
So yes, that's definitely a topic that we're interested in, but I don't have anything to share with you at this time. Okay, thank you. Can the model detect different cloud types, for example thin versus thick clouds? And how does the prediction compare against the cloud mask provided with the Sentinel-2 data? Okay, I'll answer the first part first, or maybe the second part first, I'm not sure. So in terms of cloud types, yes, I think you saw in my prepared video that I tried to show that we were able to detect both puffy and wispy clouds over land and water. What I showed you was kind of heuristic and unscientific, but generally speaking, the answer is yes. Now when you ask how those compare to the cloud masks, that's actually a bit of a complicated and interesting question, because where a cloud begins and ends is kind of a subjective question. So you have a bit of subjectivity in the labels in terms of interpreting their correctness, et cetera. Generally speaking in the labels, an area covered by a large thin cloud is typically labeled as one big cloud. That's typically how it looks in the labels, but the definition of what is a large thin cloud versus just mist, and where that line is, is subjective. So it'll be up to you a little bit. Okay, great. How well did your cloud mask perform in ice and snow regions, as far as labeling accuracy? Okay. So in terms of how well the labelers did, let's see. That again is a tough question to answer, because just by visual inspection it's difficult for me to tell the difference between an area of permafrost and a cloud in some contexts. So I'll say that I suspect that they did a fine job, but I have no objective proof of that statement. Okay, thank you for that. Does the Cloudbuster model guarantee image recency when reconstructing cloud-free mosaics? Indeed. Yeah. Cloudbuster accepts a number of parameters, so you can control the window of time over which you're willing to accept imagery. You can narrow it down and be more certain about when pixels were born, or you can make it wider if you're just wanting to be certain that you are able to cover an area. So that's all controllable. Great. Another question is, do you use all bands of Sentinel-2 to train the model? Indeed, we do. For the models that I showed you, that's the case. But of course, if you'd like to take a subset, you can take our data, and I think that you can modify our code to do that, or of course you can use your own training code as well. So that's all open to you. That's cool. Did you consider augmenting your data with other multispectral imagery? Well, not multispectral per se, but we are interested in SAR. There were a couple of talks before mine that discussed that topic. That's a logical choice. So in terms of, I guess, compatibility with other multispectral imagery: yes, I think we don't have plans to produce a data set, but in terms of being able to transfer models, that seems like something that should be plausible. In fact, internally we've had some success doing that, not with cloud models, but in other similar contexts, where we've transferred Sentinel models to other imagery types. Okay, great. I don't see any more questions for now. Thank you very much, James, for your nice presentation and all the questions and answers. I don't know if you have some final comment you want to share with us? No, I'd just like to thank you for your hospitality, and everybody have a nice conference.
Okay, thank you very much, James. Okay, see you around next days. Okay.
|
Some 70% of Earth's surface is covered by clouds at any given time, according to NASA. The existence of clouds in optical satellite imagery limits its usefulness, blocking features of interest on the ground. We have developed a machine learning-based approach for detection and segmentation of clouds, and for production of cloud-free mosaics. Our efforts include development of an open dataset consisting of Sentinel-2 imagery and labeled clouds, creation of a lightweight ML architecture for cloud segmentation, and creation of open source tools for inexpensive production of cloud-free mosaics at continent scale. Our dataset and methods are applicable to many types of optical satellite imagery, not just Sentinel-2.
|
10.5446/57443 (DOI)
|
Welcome back to FOSS4G 2021. We will be starting now the evening sessions with Harris, who will be presenting a look at you on the edge with Visual Field. If you want to present yourself, Harris, I can proceed to the video. Thank you very much, Jose. Please go ahead and play my video. It's pre-recorded and I'll have to take questions after the video. So, yeah, thank you. Perfect. So, let's start. Introduction to Visual Field. I'm glad you're still with us, Buenos Aires. I hope to be online now to take your questions. Thank you. Buenos días, Argentina. I'm coming to you from southeastern Australia, home to the Yuin people. My name is Harris and I'm a 30-year IT professional. I'm delighted to be able to present here at FOSS4G 2021. And believe it or not, it's actually 3am in Australia right now, so my talk is pre-recorded, but I hope to be online after to take your questions. The climate's going to need all the help it can get, geo people. I'm going to briefly introduce an application I've developed, I'll have a quick demonstration, and I'll end with some ways that you can get involved. But first, I have a question for you. Is there an elephant in the room? I mean, right where you're watching this, is there an elephant with you? You might have seen a lot of elephants in this conference so far, and I'm sure those elephants have really great features. I don't dispute that for one second, but that's not what I'm talking about. I mean, 77% of you are probably looking, right now, at something that supports what I'm talking about. That is, of course, WebSQL. Did you know that most browsers have an in-built relational database that supports SQL syntax, and it's embedded with zero installation? It's fast to load and index, it's scalable, it can handle big data, and it's supported in 77% of browsers. So why aren't we using WebSQL? Because of the lack of standards support. But that was 10 years ago. So why aren't we using it now? The web ecosystem has continued to evolve since 2010. We now have CORS HTTP headers and the availability of large, open relational datasets, amongst other things. So now I'd like to introduce Visual Field. The core of it was developed in 2019, and it makes heavy use of WebSQL. It's essentially a versatile, single HTML file, 23,000 lines of code, that works with delimited datasets. You can load, manipulate and visualize your data. There's no HTML or JavaScript programming, and it's an open source application built on other open source components. You can ingest datasets from files or URLs, import and export CSV, import and export workflow metadata, import and export your database dumps so you can transfer them to another instance, export SQL dumps, and import media. Visual Field is intended to be loaded in a versatile way: you can use it as an online application, offline, as a standalone file, or as a progressive web application. It's got a series of workflows and sequences to support workflow automation, and you can manipulate your datasets using SQL, custom functions and directly editable content, and you can save your workflows as metadata. Visual Field is built on a whole spectrum of open source JavaScript and CSS libraries, so there's a wide variety of visualizations available. Table visualizations are scalable for big data. Chart visualizations are scalable up to 10,000 data points for pie, line, bar and scatter charts, with up to four series. Maps are scalable up to 120,000 features, with six custom or Leaflet styles. Heat maps are scalable for big datasets.
Network visualizations support both logical and physical, scalable with an explore and walk with the network mode. You can perform shortest path searches, and you can save and reload your network graphs. Slidechase are scalable for media, and all visualizations can be incorporated into sequence workflows and combined with drill-down URL links. I'm going to give a demonstration of visual field, and I'm just going to do that by going to visualfield.org, and I'm going to run visual field online here. But before I do that, I'm just going to say that the example I'm going to show today is not going to be a perfect example, because I will lead into the next part of my talk about that. If you want to see a more seamless example, just come and look at some of the examples on the website here. So we're just going to load visual field there online. Now, as part of saying that, I don't mean any disrespect to the publishers or the custodians of these datasets, but the datasets I'm going to use here, coming from data.gov.ar. And the first example I want to look at is this dataset here, which I'm not exactly sure what it represents, because my Spanish is no good, but I'm just going to right-click on this CSV link here. I'm going to copy that link address, go back to visual field, and we'll see if we can import that as URLs. Just by pasting the URL there, and I will put in a table name, I'll just call that table 1, and just go ahead and try to import that. And you notice we get this error here. Now, this error is typically indicative that that dataset doesn't have CORE's HTTP headers. But we'll just work with this in a manual way. We can continue with this. So we'll just go ahead and download that CSV, and we'll just go back to visual field, and just take that CSV that we manually downloaded instead of the URL. We've got it here, and we'll just upload it there. And there's our dataset there. We've got our provinces, and notice this field here. We've got some sort of percentage value here. So the first thing we'll do is just see if we can show that as a chart visualization. So we'll go to chart visualization, select the table that we've just created, which is that table there. We'll call this our chart as a description, and for the title, we'll just put our chart, and the x-axis column will put the province name, and we'll order by the province name, and the chart type will select a bar chart with a fill, and for the y column, we'll select that percentage value there. We'll just leave it like that. We'll just save that, and go ahead and run that. And there's our chart there with our provinces and that percentage value shown there. But what we really want to do is show these provinces in some sort of map. So what I have found on data.gov.ar, there's another dataset here where we have some province centroids as CSVs. So I'm just going to download this dataset. Just click on there, and there it's downloaded. Go back to visual field, and we'll just import that provinces dataset. Take a look at it. So here we have some sort of centroid latitude and centroid longitude in the province name. So now I'm going to combine those two datasets by way of creating a view. Now to assist me in not having to type this, I've prepared some earlier. So I'm just going to run this, and let's take a look at this. Here I'm going to create a view, which is a simple cross join between those two datasets. And I'm also going to manually assemble by string concatenation a WKT geometry. 
Now I could have used a custom function, but I'll just manually assemble that string concatenation. I'll select another field as tooltip, and there is a simple cross join between those two datasets. So then I'm going to go to map visualization, and we'll select that view that we've just created, map view. And we'll call this our map, and the title, we'll just call this our map. And the geometry column will be that centroid WKT, and the tooltip I will select as that column value as tooltip. And we'll just save that and go ahead and run that. So there's our simple map that we've got, and if I click on those centroids there, you get the pop-up. If you hop over there, you get the tooltip of the field that we've just set up. But now let's look at getting some styling set up on this map. So to do the styling, there's two things I'm going to do. I'll go back to SQL, and I'll just drop that view to start with. Just drop that, and I'm going to recreate that view with a little bit more complexity to it. So in this case, I've got the WKT, the tooltip, but I'm also going to include a case expression where I'm going to translate that percentage value to some sort of HTML color. So then let's go back to our map visualization. And again to save me time, I prepared one earlier, so I've got this one called our map2. Now let's just have a look at this. Over here in the custom styling, I'm setting the fill color to a column value of PCT, which is that percentage that I just set up as part of the case expression. I've then got some other values here, radius, radius hover of 10, fill opacity of 0.9, weight of 0.5 to change the stroke color. So these custom styles are just leaflet path styles. So I've got that set up there, so let's just run that and have a look at that. So there's our map with a bit of styling on there. The fill color is based on the percentage value that we had before. And so that looks pretty good, but when we zoom into here, what has happened to Buenos Aires? We don't seem to have that on the map. So maybe there's something wrong with our join. So let's just go back and look at our tables. So we'll just browse the tables here, and let's have a look at this first data set that we loaded by browsing that. So there's Buenos Aires there. And we noticed the province ID, which is what I was joining on, has a value of 6. So let's take a look at our provinces. Let's just browse that table. So if I sort by name, there's Buenos Aires there, and we look at the ID. And in this case, it's 0.6. So perhaps our two data sets aren't quite congruent. So let's go back to our SQL, and we'll just go ahead and drop that view. And I've got a third view saved here. If I just take a look at that, here I'm going to do a bit of cleansing in that view. So I'm going to cast the join conditions to integers. So we'll do a bit of cleansing in our SQL. So the remaining part of the view is the same, but we've just done a bit of cleansing in our join condition there. So then let's go back to our map visualization and try running that again now. And so that looks a bit better. We zoom in there. We can see we've got a Buenos Aires now. So I'm glad you're still with us, Buenos Aires. So that's the end of the demonstration. So who could use Visual Field? Well, teachers, students, data analysts, GIS analysts, web developers, report writers, data scientists, field workers, emergency workers, anyone really with even a bit of SQL knowledge is helpful. But please be patient. It takes a different way of looking at things. 
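The SQL patterns used in the demonstration above, a cross-join view, a WKT geometry assembled by string concatenation, CAST-based cleansing of mismatched join keys, and a CASE expression that maps a value to a fill colour, are plain SQL and can be sketched outside the browser as well. Below is a minimal illustration using Python's built-in sqlite3 in place of WebSQL (whose implementations were SQLite-backed); the table names, column names and sample values are hypothetical stand-ins for the Argentine datasets used in the demo.

```python
# Illustration of the SQL patterns shown in the demo, using sqlite3 instead of WebSQL.
# Table and column names are made up for the example.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE indicators (province_id TEXT, pct REAL);              -- e.g. '6', 41.5
CREATE TABLE provinces  (id TEXT, name TEXT, lat REAL, lon REAL);  -- e.g. '06'

-- Join with CAST-based cleansing (the '6' vs '06' problem from the demo),
-- build a WKT point by string concatenation, and map pct to a fill colour.
CREATE VIEW map_view AS
SELECT 'POINT(' || p.lon || ' ' || p.lat || ')'  AS wkt,
       p.name || ': ' || i.pct || '%'            AS tooltip,
       CASE WHEN i.pct >= 50 THEN '#d73027'
            WHEN i.pct >= 25 THEN '#fee08b'
            ELSE '#1a9850' END                   AS fill_color
FROM indicators i
JOIN provinces  p ON CAST(i.province_id AS INTEGER) = CAST(p.id AS INTEGER);
""")

con.execute("INSERT INTO indicators VALUES ('6', 41.5)")
con.execute("INSERT INTO provinces VALUES ('06', 'Buenos Aires', -36.68, -60.56)")
print(con.execute("SELECT * FROM map_view").fetchall())
```

Without the CASTs, '6' and '06' do not match as text and Buenos Aires silently drops out of the join, which is exactly the behaviour seen in the demonstration.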
Some people have taken a look and they don't come back. So just be patient with some of the advanced topics like browsing of objects, sequences, custom functions, drill-down and metadata. There are various built-in pages and tutorials to help out in this regard. So what can you do? For teachers, students, GIS analysts, data analysts, field workers and emergency workers: take a look, try it out, be patient. For web developers and report writers: explore, try to make sense of how the metadata is passed in the URL, and try to bring your workflows to life in a web context. For authors of web libraries and frameworks, for example charts and maps: get ready to make your frameworks scalable and responsive for big data in the browser. At the moment, Leaflet is scalable up to about 120,000 features and Chart.js is scalable up to about 10,000 rows; we need to get ready for larger data sets. For publishers and custodians of open data sets: please provide your relational data sets as CSV. There are standards that exist for CSV, so please continue to provide that, and include your spatial data in CSV as WKT. If you're providing spatial data sets as CSV with just some object reference to a proprietary data format, you need to ask the question, really: is that an open data set? I'd encourage you to use WKT in your CSV data sets. Ensure your data sets are congruent, and ensure that your data sets are served with CORS HTTP headers. For publishers of web map tile images: please include CORS headers on your tile images, so that tile images can be stored in an offline tile cache. For browser developers, or those who have any input into web standards: can we either formally keep WebSQL or formalize another standards-based SQL engine in the browser? And can we have some rich functionality with that SQL engine, whether that's some sort of spatial functionality or interoperability with JavaScript functions? Can we have a native CSV parser as well? So my message, really, what I want to get across, is that I predict one of the next evolutions of the web ecosystem will be in part about big data in the browser. And my message is not so much about Visual Field itself, but that there's a real need for relational big data storage in the browser. Then you can empower your end users as consumers of your and other data sets, and you can perform spatial analysis for those who may have nothing more than a low-end commodity phone and an unreliable internet connection. And there you have it. That's my introduction to Visual Field. I'm glad you're still with us, Buenos Aires. I hope to be online now to take your questions. Thank you. Well, that was an amazing talk. Thank you very much, Harris. Yeah, thank you for playing that. There was a slight interruption, but that was very, very good. So thank you. Happy to take any questions now. So thank you. Let's see. So far, no questions. So please, if you want to ask something, don't hesitate to post it in the questions tab. Meanwhile, do you want to maybe add a comment about your video or something? Oh, yeah, what I would say is that there's a lot of flux and dynamics going on in relation to WebSQL. I'm not so much campaigning for that, but it's on the cusp of being deprecated from all browsers; there are a lot of changes happening. So I feel there's a real need for a relational data store in the browser for where the web ecosystem is currently heading.
So if you find this useful, you need to get out there and campaign with the browser developers to keep this functionality or to have it replaced with some standards based functionality to really go forward with this. Yeah, that would be a message I'd like to get out there. Definitely. Well, we have a question. You mentioned the format should be CSV. Any thoughts about WebAssembly part of the data to support more formats? Well, CSV is a it's a ubiquitous format for consuming relational style data. So it lends itself very well to being imported into tables. I'm not sure how that comes into play with WebAssembly. That's a programming language where CSV is like just a plain delimited data set. And there's a lot of CSV published in a lot of data providers these days. Yeah, it lends itself to being imported into relational data structures. I see. Thank you. And well, so far we don't have any more questions. Let's wait for a minute or so. I've no we can call it here. If it's okay with you. Okay. Yeah. Thank you very much for the opportunity to present here. Thank you very much for your talk. Sorry, we will be seeing you around in Foss4G then. Yeah, thank you. Thanks. Take care. Yeah, thank you. And then we will be proceeding to set up the next talk. Well, we are very deep before so then it will be starting a half past. See you soon.
|
Visual Field is essentially a single HTML file - with some internal links to various CDN CSS and Javascript component resources. Visual Field can be loaded as a web accessible resource, offline web resource, installed as a PWA, or as a stand alone file. It's an Open Source Application that builds upon other Open Source components and allows you to import, process and visualize your data, or other remote CORS enabled datasets. It has a potential broad audience and broad set of use cases. Visual Field isn't about solving all GIS problems but it is about empowering both the end users and designers of your data driven visualizations and workflows alike. This presentation will briefly introduce the capabilities of Visual Field, run a quick demonstration, and then lead in to a general discussion on what you can do going forward should a standards based, text and spatially enabled, SQL engine become available in the browser. My name is Harris Hudson and I am the author of Visual Field. There is both a lot of set precedence and also a lot of continuing ongoing change in the web ecosystem - and should the WebSQL database (which is at the very core of Visual Field) still have broad browser support come September 2021 - I would be delighted to present in FOSS4G 2021 BA. Whether you are technical or non-technical, I hope you might find this presentation useful in some way.
|
10.5446/57419 (DOI)
|
It's Sanghee. I think a lot of people, or at least a few of you, may remember him. He chaired FOSS4G 2015 in Seoul, and in his normal life he's CEO and president of Gaia3D, and a really cool guy, which I really can confirm. It's really a pity that he can't join, but in Korea it's 4.30 in the morning, and I said, it's okay, I'm going to present your video. That makes me a little bit sad after we had so many questions in the talk before, so we won't be able to answer any questions. Maybe he wakes up and comes in, but I'll just start the video, and enjoy the talk from Sanghee. Good morning and good afternoon, everyone. My name is Sanghee Shin and I am from South Korea. First, I would like to ask for your understanding about my recorded presentation. Here in Korea it is now 4.30 a.m., early in the morning, so I had to record this presentation earlier for this talk. Today I'll be talking about EIA, Environmental Impact Assessment, data visualization technology using FOSS4G. Okay, let's get started. However, before starting my presentation, I would like to turn off this video. I'm not sure how many of you are already familiar with the EIA, the Environmental Impact Assessment process. There are several EIA definitions; here I have two definitions, from Korea and the United Kingdom. According to the Korea Environment Corporation, which is a state-run public company in Korea, EIA is designed to review, predict, and assess the potential environmental impact of a project before the plan is finalized, in order to come up with measures to prevent or minimize any possible adverse impact that the project could have on the environment. Also, the Digital EIA project partners from the UK gave a similar definition, like below. The core concept is that we need to examine the adverse impact on the environment before a development project; that is EIA. EIA is adopted in many countries all around the world and regulated by each country's regulations. Usually, EIA has several key stages like below: screening, scoping and baselining, assessment and prediction, submission and consultation, and finally decision and monitoring. Actually, each stage has many steps to be done; there are lots of things to be done. If you have much interest in the EIA key stages, you can see my slides later. This talk is about a research project on how to innovate the EIA process with cutting-edge digital technology. I'm going to talk a little bit about the problems of the current EIA process. One of the main problems is that current EIA statements are quite huge in terms of pages and contents, with much unnecessary data and information. On the left side, you can see a real case of an EIA statement from Korea: it has more than 1,800 pages excluding appendixes. On the right side, you can see the UK case, where they usually produce statements more than 4,000 pages long. It's ridiculous. Producing this kind of huge statement is quite a burden. And the current EIA statements are quite paper-document-oriented, so they are quite static, without any interactivity. It means all the assessment and prediction data are stuck and embedded only in documents. On the left side of this slide, you can see the noise prediction map with a 2D view and vertical profiles. However, it doesn't give much information to the people who live in high buildings, because the 2D map shows the noise predictions at ground level, and the vertical profile shows the noise propagation at a certain point. If your house does not fall on that point, you can't tell how noisy your house may be.
On the right side, the air pollutant distribution map also shows the predicted data only at ground level. You know, an air mass is three-dimensional. So with these kinds of problems in the current EIA, we may ask ourselves these kinds of questions. What if we could see the assessments and predictions in an interactive way? What if we could change the development plan and see the result in real time? What if we could interact with other stakeholders through one single EIA platform? What if we could visualize BIM and 3D GIS data in a web browser? These days, building information modeling is getting more and more popular, so if we can integrate that BIM data into the EIA process, that would be cool. So these are the motivations of this research. We needed to change document-oriented EIA statements into dynamic and multimedia-based statements. We needed to make EIA statements more accessible, readable and understandable to non-experts; current statements are so full of jargon, technical terms and scientific numbers that you can be suffocated. And if we can increase citizen participation throughout the whole EIA process, that will increase transparency around the EIA. Also, if we adopt cutting-edge digital technology in EIA, we may enjoy many benefits like real-time modeling, real-time simulation, and easy communication. This research project is a national R&D project funded by the Ministry of Environment, South Korea. It is a five-year-long project, and many organizations are involved, including national research institutes, universities, and companies like my company, Gaia3D. Under the parent R&D project, called the Decision Support System Development Project for EIA, my company is carrying out mainly the visualization part. The overall research structure is like this, and my company, together with another company, is in charge of the visualization side of the development. As I already mentioned, my company will be responsible mainly for the visualization platform. We are currently working on BIM/GIS data visualization and EIA data visualization. This talk is part of the research outcome over the last 16 months; actually, this research project started last June, and over the last 16 months we have carried out several research activities. This is a conceptual system design for the EIA data visualization system, designed by Mike Lee. It will be a multi-layer-based system, including a data layer, a processing and business layer, an application layer, and a service layer. We hope that by mixing and combining components in each layer, this system can easily create new services. This is also a conceptual component breakdown of how to implement some of the analytical functions for EIA by combining and chaining atomic processes using the OGC WPS standard; web processing may reduce development time and cost, and we expect increased flexibility. This is the proposed system architecture for our research project. We plan to make use of numerous open source projects like PostGIS, GeoServer and Mago3D, among others, and we hope we can release all the outcomes as an open source project later. Actually, when we submitted our application to the Ministry of Environment Korea for this research project, we already proposed that we will use open source and share all the outcomes as open source. We are still waiting for the Ministry of Environment's final decision. So why does visualization matter? This famous picture shows the importance of visualization of data.
The 13 data sets in this picture have the same x mean value, the same y mean value, the same standard deviation, and the same correlation, but totally different spatial patterns. Sometimes descriptive statistics lose the trends and patterns of data like this. And EIA statements are full of data; however, those data don't say much to ordinary people as just numbers. Usually there are analyzed data only. Effective data understanding includes not only data analytics, but also visual storytelling, data visualization, and real-time interaction. So the importance of visualization can be summarized like this. First, we can instantly grasp what a big data set looks like. Second, non-experts can understand the meaning of the data more easily. Third, we can catch the trends and patterns of the data. Fourth, we can easily share the meaning of the data with other people. Fifth, we can increase data usage. Here is a simple case where interactive noise visualization helps people understand the noise propagation of a moving train. This research tries to implement this kind of visualization and simulation for the stakeholders, to increase understanding around the EIA. So from now on, I would like to share some of our research outcomes. More and more developers are trying to use BIM over their development phases, and if we can integrate that BIM data with the EIA process, that will help us understand the big picture of the project in closer to real time. Now you can see the detailed BIM data of a power plant, and we can also see the phases of the construction like this. We can also visualize the wind and compare the situation before and after construction, like this. Now you are seeing the wind data: the left side is the pre-construction wind data, and on the right side you can see the post-construction wind data. This shows how the development plan can change the wind flows and directions. Actually, current EIA statements show this kind of wind change as just wind loads or wind speeds only, and that does not convey the real implication of the wind change to non-experts. Also, we tested air pollution visualization as a heat map with iso-lines. Let's take a look at this. You can see the air pollution measurement points, and you can read the values at those points, PM2.5 and PM10, and then how their distribution looks over the surface. Also, using the SensorThings API, we can read the time series values and display them as a graph. This is the data collection point with the iso-lines; we used GDAL for the interpolation, to create this kind of iso-line. This is the hydrology visualization case, and you can see a very interesting test of ours. Let's imagine what would happen if non-experts could participate in this kind of simulation and give feedback to developers and the government; this is a trial at bringing real-time simulation into the EIA process. Users can see how the water flows, and then the users can move the locations of the banks and see what will happen in real time. Now you can see the buildings, and the users can move the buildings around the stream and see how that kind of development plan affects the water flow. It's a very interesting case. This is another interesting visualization case, with oil spill or water pollutant flows. Let's take a look at it. Also, we can move the bank like this, and now you can see the red stream. That is the oil spill.
Let's imagine that that is the oil spill or water pollutant flows and then by moving the object or some object over the source of the oil or water pollutant, you can see how the water pollutant flows and how they affect the water quality and other things. By collecting the moving data locations, we can put the kind of coordinate to that object and then we can simulate how oil spill flows and how water pollutant flows and other things. Okay, let's wrap up now. As you know, current EIA statements are quite paper document oriented. So simply to say, very old-fashioned and current EIA statements are hard to navigate, hard to read, and hard to understand with full of jargon, with full of technical terms, and with full of scientific numbers which make public inaccessible to the real meaning of statements. And current EIAs are falling behind other industries that harnessing digital technology to drive productivity. So we need to innovate in terms of EIA process. So this project, this research project is an attempt to increase transparency, accessibility, trust, to interactions among stakeholders around the EIA processes. Also, it is expected to give a new tool to stakeholders for understanding EIA process by developing new ways of EIA data visualizations. So actually, as you saw by a presentation, we've tested many things and we tried many things and we got several very quick outcomes, but still long way to go. And this is just an initial stage of this research. Okay, thank you very much for your attention. And I would like to also thanks to the Ministry of Environment Korea for funding this wonderful research project. And if you have any questions or inquiries, feel free to mail me. My email address is below, shshinatpiasri.com. So if you have any questions and inquiries, just mail me. Thank you so much and have a nice post 40. Thank you so much. Thank you, Sanji. This was really an impressive, impressive talk and very cool examples. And I'm quite sure we had a big flood in Germany. I was really surprised what you already did in that case. That really could be interesting for them because the flood came to places where it never been before and nobody thought about that. And
|
This research is about the development of an EIA decision support system that effectively integrates and visualizes the results of the EIA review process and related information such as BIM/GIS, modeling data, and sensor data. The final goal is to improve the EIA process so that not only experts but also non-experts can participate in the EIA process and easily understand the EIA statements using innovative technologies such as 3D GIS and Easy Finger real-time simulation. The final system will be developed and opened as open source. This research is a 5-year-long project funded by the Ministry of Environment, South Korea. This talk will focus on the 1st year's research outcome and future plans.
|
10.5446/57420 (DOI)
|
Welcome back to our session, the academic session on Thursday. Our next speaker is Jessica Moll, I think I'm pronouncing that correctly, from Oak Ridge National Laboratory. Jessica is a research scientist in the Human Geography group and she specializes in spatial data analysis, modeling and data fusion. She uses these techniques to combine and analyze large volumes of data to facilitate high-resolution population distribution modeling for LandScan USA and LandScan HD. Welcome, Jessica. I'm going to add your presentation to the stream and I'm going to let you present. I think we're all muted. No, no, no. Okay. So we're good now. You can hear? Yeah. Okay, thanks. Hi. Like Christina said, I am Jessica Moll. I am at Oak Ridge National Laboratory, which is in Tennessee, which, anytime I go to an international conference, everybody is like, oh, Jack Daniels. So yes, that Tennessee. But I wanted to talk to everybody about our vector analytical framework that we use for population modeling. It is something that we have been using here for our gridded population modeling, and this presentation is about LandScan USA; that's what we developed this framework for and where we're using it extensively right now. We've been using it for a few years now. So I'm going to give you a background on what LandScan USA is. It is a gridded daytime and nighttime population estimate. It's a three arc second resolution raster data set; there's a daytime raster and a nighttime raster. You can see in San Francisco on the right hand side, there is a big difference in that population day and night in the downtown area. So these gridded data sets are used by the US government. This is a foundation-level data set that is funded by the government for planning and response, so resiliency, emergency readiness and response, and recovery efforts as well. And we build this with another data set that's an ORNL-produced, FEMA-funded building footprint layer called USA Structures. We also use the HIFLD critical infrastructure layers for some of our component models, like schools, prisons, things like that. And we also have commercial parcel data that is being used here. Those are the big data sets that we're using. And the map kind of in the middle of the screen here is a dot-density population in the building, so we have that underneath these gridded raster data sets. We've started modeling at the building level, and we can start making these cool maps of dot-density population. So we're pretty excited about it and wanted to share the guts of what we've been doing here. So just some background on disaggregation, or areal interpolation. I've seen a couple of talks here at the meeting about dasymetric modeling, so it's something that is definitely being used outside of our group. But the concept is essentially taking the source zone known population; a lot of times this is an aggregate number from something like a census in the country. But we have a source zone in this example that is workers: we know that there are 90 workers in the census tract, and we're trying to map those down to the grid cells on the right hand side. And you could use something like just straight proportional allocation to those building footprint areas, and you'd get the top right model. Or you could do something like a multi-class dasymetric method where you include something like a parcel land use classification where it says, hey, this building is commercial.
And then you as the experts say, we only want to put workers in commercial buildings, not residential, and then you get something like the bottom right, where no population ends up in the residential. So that's the basic output of the models that we're running with this method. We have been doing this even before my time here at Oak Ridge, and we still use this in some other population models, just not with LandScan USA anymore. We had a raster processing workflow, and that's always been the faster way to do it than a vector method, but that's becoming less true, and that's kind of where we got to when we decided to build this framework. So the base element that we have is these building footprints that are extracted from roughly one meter imagery. We take that and aggregate it to a five meter resolution, and you've got this one-to-25 value raster. And then any other data sets you want to use, so parcels, the schools that I talked about, census boundaries, you have to grid all of those, rasterize those on whatever value it is. And then you get this big stack of rasters that are all at that same resolution and are aligned. And it's really difficult to think of those little slices as coming together as a building. You're not really modeling at a building level; you're modeling at that cell level. So this thinking about modeling at the building level was where we were. Just to show you the limitations of rasterizing something like the parcel data: it's a very rich data source with tons of attributes, land use being a primary, super useful one, and there are tons of other variables there that are going to be super useful. But if you rasterize that, you've got to rasterize it for every variable that you want to use; you would have to have a separate raster layer to use in that raster model processing. And when you rasterize something like this one, you can't necessarily link back from that raster to all those other attributes. You also get some parcels that are really small: how do you pick which one gets rasterized? There are lots of problems with this method. So just to show you some of the data that's moving underneath us this whole time as well: we've got this NAIP-based imagery, a US government standardized data set of imagery for every state. We have a convolutional neural network building extraction output from that, and we have a subsequent method here with that USA Structures data set that is regularized. You can see some of the ones on the left hand side are oozed together and you get some of those buildings coming together. So we're seeing improvements in these input data sets happening underneath us, and we really wanted to be able to build our population modeling framework so that we're getting better as well. Just to give you an understanding of the impact and the scale that we're processing here: the result of this framework for the United States is 270 million plus rows of building parts. We've taken those 123 million-odd USA Structures buildings and run them through the framework with 152 million parcel polygons and 11 million census blocks, and it ends up being 65 million unique grid cells that are also embedded; it's actually a whole lot more than that at that three arc second resolution that gets generated. But all of this is stored and calculated inside of PostGIS.
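As a toy illustration of the multi-class dasymetric allocation described at the start of the talk, distributing a source-zone total to building parts in proportion to area but only into the land uses the modeler allows, here is a small pandas sketch. The data, table layout and the commercial-only rule are made-up assumptions for the example; this is not the LandScan USA model itself.

```python
# Toy multi-class dasymetric allocation; all data and rules here are illustrative only.
import pandas as pd

# Hypothetical building parts with their source zone (census tract), land use and area.
parts = pd.DataFrame({
    "part_id":  [1, 2, 3, 4],
    "tract_id": ["A", "A", "A", "A"],
    "land_use": ["commercial", "commercial", "residential", "industrial"],
    "area_m2":  [400.0, 100.0, 300.0, 200.0],
})
tracts = pd.DataFrame({"tract_id": ["A"], "workers": [90]})

# Expert rule from the talk: daytime workers only go into commercial parts.
eligible = parts[parts["land_use"] == "commercial"].copy()

# Allocate each tract's total proportionally to eligible footprint area.
eligible["share"] = eligible["area_m2"] / eligible.groupby("tract_id")["area_m2"].transform("sum")
alloc = eligible.merge(tracts, on="tract_id")
alloc["workers_est"] = alloc["share"] * alloc["workers"]
print(alloc[["part_id", "land_use", "workers_est"]])   # 72 and 18 of the 90 workers
```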
In terms of impact, and how we've seen this be super useful: it has changed how we can think about and prioritize what we want to do in our models, because we move the heavier computation all the way to the front of what we're doing. You take all of the data that you have and you run them through; now you have building parts and you have no loss of information at that point. So every polygon that we have is uniquely and evenly informed, so you can always link back to the original data sets, and any question that we want to ask of this data, it's really easy to get to an answer. Whereas in that raster-based workflow, the heaviest computation is right near the very end, and we were dealing with these five meter rasters that might be huge. I can remember working on Texas in that five meter raster, and it was a lot of data; you can cube those up, but it's still quite a bit. So if you want to iterate, if you want to do something different with one of your other data sets, you've got to re-grid all of those things. So in this vector framework, those 270 million rows become the basis of all of our other workflows. Things that we do with that are handling overlap, and I'll show an example of that later, and interpreting any confounding land use information, which is something that just becomes so easy to do when you have that data table. You've basically taken away a lot of the spatial parts of it: you're now in a statistical analysis framework, so 270 million rows is not terribly slow to compute. Some other things we do are imputing nulls, and then doing zonal stats is super easy in this, because we've embedded all of the information that you need to get any summary that you need. So I want to walk through the processing scenario that we have kind of landed on. Everything's already in Postgres. We were doing that before, and that's kind of how we got to saying, well, everything's pretty fast, indexes are great, let's process in Postgres as well. So in the scenario that I'm going to show you, we've got census blocks, we've got parcels, we have grid lines, and you could really use any other data sets that you want. I mentioned schools, so we have some school boundary polygons, or college boundaries; prisons is another one. Any of these things that you want to throw in and that you might want to aggregate back to later, you can throw in at this point. So the very first step, you throw all those things in. Step two is you union all of the polygons in each of those layers and dump the lines, so interior rings and exterior rings. And we figured out through these different iterations that throwing ST_Subdivide in at this point is really helpful; I think that we split it at something like 10 vertices. Postgres and PostGIS are going to churn through rows really quickly, so if you can make your geometry simpler, it's going to be beneficial. So moving along to the next step: I don't know if you have used ST_Split or anything like that, but if you have multiple polygons and you split another polygon, you're going to get duplicate records on that building. So if you took a building and split it by two of these parcels, parcel A and parcel B, you would get back each side of that split line multiple times, so you get duplicates. To get around this issue, that's why we built that blade layer first, and then we linked up all of the blades that intersect the building and we union all of those. So then you get this nice cookie-cutter action, and it's set up to be just a row for each building geometry with a blade geometry that's confined to just that building, and then it's really easy to burn through a bunch of rows of that. So in step four, you have a table now. That building one in this example has been split into 10 rows, and building two didn't get split by anything, so it doesn't automatically show up in that table, and we join it back in here; it's a building part, it's just the whole part. And that happens quite frequently, especially if you don't have something like the grid lines in there; when you run this without all of those extra grid lines, which we only need because our output is in that gridded row and column, that three arc second grid, we burn that in here. But a lot of times you do have buildings that are fully within all of your other zone data. So in this step, we actually create a point within; it's either the centroid or an ST_PointOnSurface. And then we join back the census data, the parcel data, the building ID, and then we have the row and column that are also generated off of that. So then you have this table that has all of the geometries that you need. Sometimes you might just need a point; sometimes you might need that full geometry to calculate area, for example, and that's something that we will use. And then you have the parcel information that you can join back to now easily in Postgres to grab any other information on that data set.
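Here is a simplified sketch of the blade-and-split pattern just described, written as PostGIS SQL run from Python with psycopg2. The table and column names are hypothetical, and SRIDs, indexes and the grid-line layer are omitted for brevity; it is meant to show the shape of steps two through four, not the production LandScan workflow.

```python
# Simplified sketch of the blade/split steps described above; not the production code.
# Assumes hypothetical tables: buildings(bldg_id, geom), parcels(geom), blocks(geom).
import psycopg2

SQL = """
-- Step 2: union each zone layer, take its boundaries (exterior and interior rings),
-- and subdivide the resulting lines so each blade piece has few vertices.
CREATE TABLE blades AS
SELECT ST_Subdivide(d.geom, 10) AS geom
FROM (
    SELECT (ST_Dump(ST_Boundary(ST_Union(geom)))).geom FROM parcels
    UNION ALL
    SELECT (ST_Dump(ST_Boundary(ST_Union(geom)))).geom FROM blocks
) AS d(geom);

-- Step 3: for each building, union only the blades that intersect it and split once,
-- which avoids the duplicate rows produced by splitting against many blades in turn.
-- Buildings touched by no blade are skipped here and joined back whole afterwards.
CREATE TABLE building_parts AS
SELECT b.bldg_id,
       (ST_Dump(ST_Split(b.geom, s.blade))).geom AS geom
FROM buildings b
JOIN LATERAL (
    SELECT ST_Union(bl.geom) AS blade
    FROM blades bl
    WHERE ST_Intersects(bl.geom, b.geom)
) AS s ON s.blade IS NOT NULL;

-- Step 4: attach zone attributes through a point guaranteed to fall inside each part.
ALTER TABLE building_parts ADD COLUMN pt geometry;
UPDATE building_parts SET pt = ST_PointOnSurface(geom);
"""

with psycopg2.connect("dbname=landscan_demo") as conn, conn.cursor() as cur:
    cur.execute(SQL)
```

In the real workflow the grid lines would be added as a third blade source, and the point-on-surface would then be used to join back the census block, parcel and grid row/column attributes described in the talk.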
And it's set up to be just a row for each building geometry, a blade geometry that's confined to just that. And then it's really easy to burn through a bunch of rows of that. So step four, you have a table now. So that building one in this example has been split into 10 rows and building two didn't get split by anything. So it doesn't automatically show up in that table. So we join it back in here. So then all of the, you know, it's a building part. It's just the whole part. And that happens quite frequently, especially if you don't have something like the grid lines in there. When you run this without all of those extra grid lines, which we only need because our output is in that gridded row column, that, you know, three arc second grid. So we burn that in here. But a lot of times you do have buildings that are fully within all of your other zone data. So in this step, we actually create a point within. It's either the centroid or an ST point on surface. And then we join back the census data, the parcel data, the building ID. And then we have the row and column that are also generated off of that. So then you have this table that has all of the geometries that you need. You know, sometimes you might just need a point. Sometimes you might need that full geometry to calculate area, for example. That's something that we will use. And then you have the parcel information that you can join back now easily and post for us to grab any other information on that data set. Apologies on the coughing here. So this is a real world example of a shopping center in East Tennessee. And that building is split into four when you just split it with parcels. So this is a Walmart, which is a big box discount store in the United States. And then it splits into 29 different building parts when you include the grid blade. And the highlighted in yellow record is down on that left-hand side. So you can see that we're storing these things like the unique ID, the building ID, census block, the parcel ID. So we actually use an array here so you can have multiple values. And that is helpful when we have that overlap that I was talking about, the land uses. You know, when you're getting those from, you know, maybe multiple parcels or from another data set, even you could aggregate into that. We've got the area calculated for each of these parts and the point and polygons geometry columns because you can store both of those in Postgres in the same table, which is super great, in the grid row and column. So is this overkill? You know, it might seem like it. You know, a lot of spatial analysis, you would just do a centroid of the building and join it. And you know, you can look at this example. So if you made a centroid of this building and joined it to the parcels, you know, it intersects four parcels. You know, so which one of these land uses are you going to get? Which one of the parcels are you going to assign this building to? So, you know, there are obviously some cases where, where that centroid is insufficient. You know, there are plenty of places, you know, most, most buildings are going to be single family residential. They are going to fall within, fully within a parcel. It's going to be sufficient to do a centroid on those types of things. But you can't really measure that unless you've split everything up. So, and as an example, to try to quantify kind of how much of that happens, we calculated the building parts to building ratio. And this is excluding that grid line. 
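Continuing the same hypothetical schema, steps four and five might be sketched as below; again the names are illustrative assumptions and the production workflow is more involved than this.

-- Step 4 (sketch): buildings untouched by any blade come back in as whole parts.
INSERT INTO building_parts (building_id, geom)
SELECT b.building_id, b.geom
FROM buildings b
WHERE NOT EXISTS (SELECT 1 FROM building_parts p WHERE p.building_id = b.building_id);

-- Step 5 (sketch): a point within each part picks up the zone attributes.
UPDATE building_parts
SET pt      = ST_PointOnSurface(geom),
    area_m2 = ST_Area(geom::geography);

UPDATE building_parts p
SET block_geoid = cb.geoid
FROM census_blocks cb
WHERE ST_Intersects(p.pt, cb.geom);

UPDATE building_parts p
SET parcel_ids = sub.ids
FROM (SELECT bp.part_id, array_agg(pa.parcel_id) AS ids
      FROM building_parts bp
      JOIN parcels pa ON ST_Intersects(bp.pt, pa.geom)
      GROUP BY bp.part_id) sub
WHERE p.part_id = sub.part_id;
-- grid_row / grid_col would be derived the same way from the 3 arc-second grid.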
So this would, this would be the four buildings, building parts to the one building ratio for this graph. A lot of places like in New York, you know, as this chart goes up on the, on the bottom axis, you see the Queens County, New York was three something. And I have an example of that one on the next slide, but it's very clear that in, you know, at the county scale, even a lot of these are, and would be really insufficient to use a centroid. So, just to bring that home, the 38 buildings in this tract in Queens, New York get split into 492 parts, just when you're splitting by parcels. So there's, you know, 10 or 15 different land uses there. A lot of them are getting, would not be getting mapped if you did just a central join here. And then if you're rasterizing this, you're going to get some distortion in the relative distributions of land use. You're going to lose some of those land uses when you do that in raster. And this vector and a local frame framework just lets us keep all of that information. And you don't have to make any of those decisions. So in terms of processing time, it's not exactly linear. You know, you have a lot of, you know, Postgres' query engine is going to, you know, put a lot of emphasis into rows, and the weights have gotten a ton better on the PostGIS side, you know, actually helping the query engine cost some of those queries a lot better now. You know, so that's another thing about having it written in SQL as Postgres and PostGIS are getting better. Our code doesn't have to change, and we're getting speed increases underneath. But you know, states with larger extents or more population, which is seen in more blocks and more parcels, those places are going to be computationally more expensive than small, small, sparsely populated places. But there is a fairly good linear relationship. So and like I talked about earlier, you know, any of this heavy processing we do is up front and not at the end. And that's one of the biggest advantages. So you know, our machine here is pretty big with 128 cores, and that just lets us, you know, spin out and throw several states at it at once. You know, tons of RAM, so you're not falling over to slower mediums. You know, we've run this in a lot more constrained environment as well. And it just, you know, you can throw less at it. So it's not like it takes all of that to run one thing. So we have those building outlines, the parcel data, and the census data sets that are the bulk data sets there. And then the three arc second grid line blade is pretty big as well. So just to talk about some of the applications that I talked about. So things that we can get into and dig into in the problem space so much better because we have the data the way that we do. So when you think about overlap in a data set, like a parcel data set, sometimes this is going to be real. So you have a condo, a condominium in the United States is, you know, a small building that's owned by the people, but then the land is owned by somebody else. So you get some occasions where the parcels overlap and that's a real occurrence. Sometimes you might have two parcels that overlap each other on the edge, just because the topology isn't good. And that would be like a misalignment error. So if you're trying to fix or understand overlap just in isolation of that layer, you know, you're going to deal with all of those cases and try to figure out what that overlap means in your model. And sometimes it might not matter at all to your model. 
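As a rough illustration of how a summary like the parts-to-buildings ratio can be pulled straight out of that table, assuming the hypothetical columns above and a block GEOID whose first five characters are the county FIPS code:

-- Building parts per building, by county (grid blades excluded upstream)
SELECT left(block_geoid, 5) AS county_fips,
       count(*)::numeric / count(DISTINCT building_id) AS parts_per_building
FROM building_parts
GROUP BY county_fips
ORDER BY parts_per_building DESC;

Because the heavy geometry work was already done up front, queries like this run as plain aggregations over the table.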
You know, if the two edges of your polygons are overlapping, you know, now you're trying to think about that overlap, but they're not running into the building at all. You know, so there are lots of occasions where having all of that information together keeps you from having to deal with problems. But so in Washington, D.C., 16% of the building area had multiple parcels that were overlapping it. So this is a fairly big problem in some places. In the state of Indiana, much more rural agricultural state, there is very little overlap. Oregon for some reason seems to have a lot. But because we have no information loss, we can iteratively go through and test and make different decisions. So one decision we might make in that condo example is, you know, if the land use on the condo is residential, but then the land around it is a much bigger polygon and it's commercial because a property management company owns it, you know, we can implement a rule that says, hey, give us the land use of the smallest parcel, you know, and, or we could say, give us the land use that is residential. And you can implement those different decisions and iterate through, see what happens when you populate places with those different rules. And it's so much easier to work through some of those, come up with the best solution instead of what's your intuitive answer. And now let's generate that. So, you know, it took a minute or so to calculate over those 270 million records at the tract at the county, at the state, you know, you can really start quantifying these things very easily. Another example is something that seems easy to think about, but isn't, especially when you think about it and how you would do it in a raster framework, you know, we've we solved this problem in a raster modeling environment. It's much more difficult to implement, think about, explain. In this case is where we have a college, we have another college that also occupies part of that campus. So, one of these buildings, the darker green, maybe it's a little hard to see, is, you know, sharing college students. We don't think it's dense, we don't think it's populated twice, you know, it's not as, it's not like it's twice as dense. So with this framework, it's really easy to say, hey, this building shows up in multiple colleges, let's reduce the densities for each of those colleges, so that we're not overpopulating this building. So just in conclusion, we're using this for population modeling, but we really think that that vector framework is extensible to other problems. You know, it's really been cool to have all our data in Postgres, in PostGIS, and then also processing there. We think that, you know, this type of thing is a similar processing workflow, but not at the scale that most people are used to dealing with. And I think that having your data in kind of this more traditional data science structure brings in a lot of new, better techniques for that. So with that, I will take any questions that you have. Thank you, Jessica. Very nice presentation, very interesting data set that you're building there. I just go towards the questions, see if our audience has any. Might remind you that you can probably contact Jessica on her email that I put on the banner. If you have any additional questions that come to the answer during this session. We still have a couple of minutes to take some, so please go to the questions tab and try to, okay. So we have our first question. Is the pipeline open source? 
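A hedged sketch of the "smallest parcel wins" rule against the same hypothetical table, assuming parcels carry a land_use column; swapping the ORDER BY for a CASE expression that prefers residential would implement the alternative rule the speaker mentions.

-- One land use per building part: when parcels overlap, keep the smallest parcel's value
SELECT DISTINCT ON (p.part_id)
       p.part_id,
       pa.land_use
FROM building_parts p
JOIN parcels pa ON pa.parcel_id = ANY (p.parcel_ids)
ORDER BY p.part_id, ST_Area(pa.geom);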
We haven't released the code yet, but that's something that we're interested in doing. It's all written in SQL though, it's pretty straightforward SQL code. But we have to go through an internal review process for releasing code, which is fun. So we can work through doing that. But the method at least is hopefully interpretable pretty well. But no, not right now. The data set, the final polygon data set isn't, but the raster data set is open and can be accessed, and all of those links are in the paper that accompanies this as well. So the building footprint data is also open. And there are citations there in that paper as well. Thank you, Jessica. We have another question for you. Have you ever come across a case of disaggregating population on segments of street networks? Or do you have any suggestions based on that? I have a colleague at Oak Ridge, James Gaboardi, who I believe has done a lot of that. So he's actually just joined us at the lab recently. So I have not done that directly, but that is something that I'm familiar with. And I think it's something that could be done in this framework as well. How is your tool different than the US EPA's dasymetric mapping tool? I'm not sure, I'm not directly familiar with that. I mean, the dasymetric part is not really any different. I mean, the weights and things are not what we're talking about here per se. This is just about creating that data set to do that analysis. There are lots of variables that are unique to each and every dasymetric mapping technique that gets used. Because all of that is heavily dependent on whatever ancillary data you have. So I think that that's the best answer I have for that. Thank you, Jessica. If there are no questions left, we have to proceed to the next speaker. Jessica, thank you for coming. Thank you for presenting, and I hope you have a nice, wonderful FOSS4G. Yeah, thank you. Thanks everyone. Bye. Thank you. Bye-bye. Bye-bye.
|
This approach has several shortcomings due to limitations of raster data formats. When compared to vectors, the other common geospatial data format, rasters are less precise, hold less information, and are less conducive to smaller area constructs, such as building outlines and parcels, and are less accessible to the broader scientific community because of the special handling required. Given these shortcomings, we propose a vector analytical framework for population modeling. The framework is designed to combine all of the lines defining the input layers so that fields enclosed by those lines (i.e. polygons) are uniformly attributable to each of the input layers. This richer data stack allows for the development of models with more complex logic that are straightforward to implement and explain, as well as increasing the accessibility of modeled estimates and intermediate layers to a broader audience.
|
10.5446/57422 (DOI)
|
Hello, hello. Hi, Andrea. Nice to see you, although we are a little bit distanced, but I'm sure you prepared a wonderful talk. I'm not sure whether I really have to present you to the audience, but Andrea is one of the core developers of the GeoServer project and everything around this. And I'm quite sure that you're going to show a really interesting talk to us. So it's your stage. Thank you. So yeah, I'm going to talk about adding quality assurance to open source projects, drawing from my experience adding it to GeoTools, GeoWebCache, and GeoServer. First, a quick shout out to my company, GeoSolutions. We have offices in Italy and in the United States, customers worldwide. We are a strongly technical organization with 25 engineers out of 30 collaborators. We support a number of open source projects, including GeoServer, MapStore, GeoNode, and GeoNetwork, and provide deployment support, customized solutions, professional training, bug fixing, new features, and whatnot. We also believe strongly in open source. That's why we are here. We believe strongly in open standards, and that's why we are an OGC member, and we support the standards critical to GeoInt. Now, let's get into the presentation. Some definitions. What is quality assurance? Wikipedia says it's a way of preventing mistakes and defects in a manufactured product. What does it mean for software? Well, it means doing a good design, testing, having uniform code, reviewing code changes, and more, much more. Why do we want to use automatic quality assurance? Why did I push on it? Because quality assurance is a lot of work, and it's a lot of tedious work. So we need to automate as much as possible and spare humans for the more complex tasks, leaving all the minutiae to robots, which can do thousands of simple checks in minutes, and also reduce the personal conflicts. Because when you go and start pointing at formatting issues, leftover variables, methods that do nothing to people, they start pointing the finger back at you and saying, oh, the QA guy is mean to me. If instead it's a robot doing that, well, at the very least, they are not pointing the finger at you. Now, how is it done in open source projects? With continuous integration, with testing on different platforms, with formatting checks and static analysis checks. And I'm going to cover them one by one. Now, how did we get here? How did we get to the point that automated quality assurance was a need rather than a plus? GeoTools, GeoServer, and GeoWebCache are three large interconnected Java projects. Each one of them is split into many sub-modules. In the beginning, there were a few tens of modules on each product, but they grew over time. Each module was maintained by one developer, at most a few developers. So each one was like a king of his own little castle, and there was little overlap. The QA level that we required back at the time, 15 years ago, 20 years ago, it was, yeah, it compiles, the tests pass, good to go. Let's go. Release. Over time, things changed. The business model on which these projects thrive is maintenance and future development contracts, which means that in terms of maintenance, people providing maintenance have to provide services for any module, not just the ones that they are very familiar with. So it broadens the scope a lot. Right now, today, I basically have to be aware of more or less all the modules in GeoTools and GeoServer, which is well over 200. And it's starting to stretch me very, very thin.
Also, features, we sell development of features. So we get more and more and more modules. However, the number of developers is more or less the same. So there is more work for each developer and more code sharing. We assisted over the years to an operating system polarization. At first, 15 years ago, we had several active daily developers spread on different operating systems. So we had a few on Linux, a few on Windows, a few on OSX. But over the years, slowly but steadily, all the active daily developers gravitated towards Linux, leaving almost no one on Windows and OSX. Having projects that lived this long, we are talking around 20 years, means that there is a wear and tear in the development group itself. The options are you either make it your day job, or you allocate a total amount of time, and don't go beyond it. Otherwise, you burn out. However, we are losing bits and pieces along the way, because changes in job, changes in family life, and so on, exhausting in general. I recommend you to read when you have a minute, the few that tire the open source code. What happens when a developer leaves is that the remaining developer have to maintain stuff they did not write or are familiar with, which poses certain problems. Companies are being successful at selling services around the GeoTools and GeoServer, which means the business grew, and more developers were hired into working into the project. And this comes with different allegiances, because the maintainers are typically tied to the projects, the old school developers that started the project. But new hires are tied to the company. So the focus is different. You go from the project is important of the maintainers to I need to finish this task to get to my next task, which is the focus of the new hire. Also, they got lower experience in the project and its ways. Also, GitHub has been also, at the same time, great and a disaster. It's great because it's really easy to fork, change, and make a poor request. It's a disaster because it caused a lot of drive-thru contributions. That is, people that come in once do a performance change and disappear. The problem is that it means that they don't maintain the change of the new feature that they donated over time. It falls back again on the core developers. And believe me, we are pulling the short end of the straw, because if you have read the Mythical Man Month, you know that maintenance of code is actually more expensive than the creation of it. To sum up, we have a small amount of developers that are doing lots of work. They need a code that's easy to pick up and understand for the common maintenance and support services, well-tested code, assurance that changes are not breaking on platform they don't own, on data sources they don't have, and skip all the small checks in the poor request reviews and concentrate on the big items. And that's where Automated QA shines. So the first step has been continuous integration. At the beginning, we had Jenkins. And we have had Jenkins for a lot of time. Jenkins has been building the code, assuring it compiles and is passing tests for a lot of time. And at the beginning, at the times where we were using SVN, it was working great. However, and yeah, it builds multiple platforms. It cascades the build. So if you make a change in GeoTools, it also builds a web creation GeoServer. It tests for GIS, Geopack, and MySQL, and so on, and sends a notification mails on failure. 
Now, this is not enough in 2021 open source, because the builds and notifications, they all happen after the merge. Drive-through contributors at that point are gone. You either have to go and chase them or fix yourself whatever issue might have happened, which means that the maintainer, which has also a job and a family, has to stop whatever it was doing to pay attention to somebody else's work because it's breaking the build and you cannot have a broken build. So enter GitHub pull request checks. They run on the changes of the pull request and maintainer won't merge the pull request until all the builds pass. It's a great point to run as many checks as possible to make sure that there are no surprises. So we do that. We build on different operating systems to reestablish this support for multiple operating systems in a stronger way. We test on different Java versions. We do integration test over various data sources. You can see SQL server, MySQL, Oracle, Post-GIS, GeoPackage, and so on. And we also do automatic QA test. So this helps a lot. When all this battery of test is green, there's a very good chance that the change is not introducing anything bad. However, there is still the pull request review to do once you have everything green. And there can be lots of noise. And this picture is kind of a metaphor of what you might see when looking at a pull request. There might be lots of many issues that might be hiding a big issue. I'm telling you that there is a big problem in this image, but it's very difficult to see at the very least. I don't see it. Maybe you are better than me. So what is the problem? Well, for once the typical pull request review UI goes straight into detail. So it sets you and your mind into analytical and detail oriented review because you see changes line by line. And you are very much affected with anything that changes the lines of code. And it gets harder to see the bigger picture. The number one killer is code formatting issues. People reformatting code during pull request to suit their preferences or just the automatic settings of their IDE, it changes so much code that it's really difficult to find out what actually changed the substantial changes. And this is both frustrating and time consuming. We kept on telling people, please do not reformat. Please do not reformat. It didn't help. So in the end, we adopted an automatic code formatter that has one valid formatting. There is only one way the Java files we handle can be formatted. Everything else is flagged as an error. We use it for both Java and for the Maven build files. Now, why is this good? There is a code formatter for Python called black that gives a very good argument about why it's good to have only one way to express the code. You have speed and determinism. You will save time and mental energy for more important matters, the ones that are not about placing the code in the right way. Blacken code looks the same regardless of the project you're reading. So uniformity, which means it's great for sharing code. We don't have personal styles spread across different modules, which, again, is important for all these support contracts where you have to go and look into code that somebody else has had to write. But more importantly, it makes code reviewed faster by producing the smallest possible diffs. When you have only one way to format the files, the only lines that are changed are the ones that have substantial changes and not cosmetic ones. So you can focus on what matters. 
Another very nasty source of noise is dead code. People, developers, they write code maybe with distractions, maybe under pressure. For a variety of reasons, they go back and forth. And they leave in the code, commented out sections, unused variables, unused methods, unreachable code. And when the reviewer looks at them, he or she tries to understand what they do. And with Newsflash, they don't do anything. They are just extra cognitive load for the review today and for the understanding of the code tomorrow. But tools can find them and flag them for you. Also, when you get a poor question that has a substantial number of dead code, typically it means that something went wrong during the development. And there was a lot of back and forth. Maybe the developer was in over his head. Maybe he was rushed. Maybe he was distracted. Whatever the reason, it typically implies that there is other issues that, in the poor question, extra attention has to be paid during the review. Other noise is all the little obvious bugs that there can be that you have to concentrate to verify whether or not a division by zero is even possible or not when tools can do these verifications for you. Null pointers and closed resources in proper synchronization and so on. There are tools dedicated to do exactly that. Why waste a reviewer time when a machine can do it for you? So we use a number of tools, including error prone spot bugs and PMD. They all take different approaches at validating the code. And they tend to find different problems in the code. All these is enforced by automatic poor request checks. And we got failures and error messes telling us, oh, look, that method is not used. Oh, look, this double check at the lock-in is not properly implemented. Oh, look, there's a potential for a division by zero and so on. And they can also be run locally. So if you wanted to run them before submitting a poor request, you can. Although, personally, I never do that. After a while, after keep on getting the same suggestions over and over, you learn. And you don't end up doing the same mistakes anymore. So it's also a teaching experience. Now, let's say that I added all the automated QA to that poor request. And I started by eliminating the common bugs. And then I eliminated the dead code. And then I eliminated the changes due to reformatting. And oh, a bear appears. What's the bear? The bear is that synchronization issue that can cause a deadlock. That's a recursion that goes in Stack Overflow. That runaway data structure that makes the program go out of memory. That big issue that a human is best suited to catch if they only can focus their attention on it. But they can if you remove from the table all the little things, all the details, all the noise that the automatic QA tool is designed exactly to review. So the level of the review has gone up by applying automatic QA tools. Now, there's a catch. It's very easy to start with an empty project and a battery of these tools and just apply them from day one. And you are always in a good shape. What if you have a 20 years old project and you want to apply these tools? That can be really painful. It took me the good part of a couple of years to apply all these tools to the projects. Because every time I apply one of them, I have to make sure every check passes. So I basically have to fix the whole code base, which is millions of lines of code. So we did it little by little. The first step was reformatting the code base. 
We had months of discussions about which tools to use, how to configure it, blah, blah. And once we agreed, reformatting was a matter of calling a command and then setting it up automated in the poor quest checks. Then we had an organically growing set of poor quest checks, adding more coverage for more OSs, more Java versions, more integration tests for databases. And I would like to thank both Brad and Bart, which have been both maintaining and adding the automated builds for the poor quest checks. And yeah, we have been doing all the work needed to make the checks pass whenever we were adding one. In terms of static analysis, I started with Ropro and SpotBucks, did a full sweep of the code base. Thankfully, these tools did not find that many changes. It only took a few months per tool to clean up the code base. A few months solar time. I was working on it. In the weekends, a few hours per weekend. And then we started adding also PMD. PMD, thankfully, it does a lot of checks, but thankfully, it's configurable. So I basically started adding checks one by one and fixing them and then adding them in poor quest checks and then start over with another verification, start over, and so on and so on. And after well over a year, I got to the point where we are. Since the tools are useful and they can also catch modern programming patterns and so on and usage of old APIs and so on, we started doing also that and set up PMD so that catches usage of old Java API, usage of old Java API, and Java API, usage of old Java syntax, and so on so that we cannot use it anymore in the code base. And that also made it at the same time more modern, but at the same time also more uniform. And thankfully, tools like IntelliJ can perform automatically most of these refactorings on the entire code base for you. So it wasn't like with PMD checks where I had to do everything by hand. Final thoughts. Having these tools and having the code base compliant with these tools gave us common formatting, not that code, not trivial bugs. So when you jump in on a bit of code that you have never seen, understanding gets easier, fixing issues gets easier, and developing a new feature gets easier. Because it's easier to get started from a cleaner base. This is something that I actually found out by myself a few years ago when trying to contribute to QGIS. They already had automatic formatting and a few other checks. And the first impression that I had when I was coding in C++ was, whoa, the code is so clean. And it actually, on one side, made me want to preserve that cleanness, and on the other side, it was just easier to understand. All these tools are actually helpers if you try to leverage them, because it means that you don't need to run virtual machines to test code on other operating system, or run locally all the integration tests, and so on. You can just do your code change the best way you can. Throw the thing at GitHub, and all the integration tests will check and find problems for you. So these are the tools that are helping me to provide better results, and not cops that are there to catch me. The social effect in general is good. Maybe some of you have heard of the broken window theory, which can be applied to software as well. So when you reach onto a code base and you start making changes, you get a feeler about how it's structured, how clean it is, how well it's working, and you basically try to adapt to that level. 
So if you start seeing broken windows, you don't mind breaking a few others, because there are so many which are already broken. So neglected code begets more neglected code. Instead, clean code brings people to contribute quality code. There is also a social effect on reviews, because the QA tools being automated, they are fair. They give the same treatment to everyone. I'm receiving, when I do a request, from this tool exactly the same treatment as the random contributor receives. So we can't say that, oh, the maintainer is being peaking nasty with me. Those checks are uniform for everyone. And we also limit to the frustrating ping pong between review and contributor. Back when we didn't have these tools, you do a first request, and you cannot see the changes, because there are two formatting changes. You ask them to be removed. OK, now you see the changes, and you start noticing that code, and you start noticing obvious bugs. And you ask for changes. And then when those are gone, maybe you start stepping back and concentrating on the bigger picture, and you find bigger problems, and the contributor says, well, damn it, it's the third time you review this poor request. Are you mad at me or something? No, I'm not. It's just that I'm going through steps in trying to figure out where the problems are. However, this is not all roses, because there is peculiar or particular developers trying to contribute, and they say, oh, no, I can't live with this formatting, or I can't work without the library, or I don't care about these checks. The fact is, when a project is large, as G2s and GOServer are, they are group effort. And we cannot afford to have everyone push their favorite direction. We need to compromise. Just to give you an idea, it's not that I particularly like the Google Java format output. In some cases, it's actually ugly, and I have to adjust the code, maybe split it into more methods and add more support variables to make it more readable once it's formatted. So I'm not in love with it, but it's the same formatting for everybody. Others come in and say, oh, what about all these checks? It's too hard to contribute. I don't care about PND. I don't care about writing tests. I don't care about having to run Maven on my local machine. However, you found code that was cleaner because of all these tests and all these checks, and now it's your turn to keep it that way. It helps to have a clear checklist on the pull request where you communicate clearly what is needed for a merge, and we do. And it's, again, important to be fair. This checklist has to be applied to the core developer and the casual contributor the same way. Everyone has to respect it as is. And if you really find someone that is getting mad at you because of the rules of contribution, because of the review, don't worry. Just install a stalemate and leave the pull request there, waiting for the contributor to decide that they really want the change to be merged. And if that doesn't happen, the stalemate will close the PR for you. And no more discussions. There is another approach that you can take, and this is the last slide, get maintenance funding. If most of the problem that we have here is that we don't have enough developers and we don't have enough developer time dedicated to maintenance and pull request management, review, and so on, if you can find a way to allocate some money to that activity, then you can hire people to do it during their working hours, which is something that has just happened with GIDL. 
And that also adds a human touch to the whole process, because I agree that automatic QA tools can be a bit dry. However, this requires your community to donate to the project. And in G-server and G-tools, we have the appropriate donate buttons, but I can tell you they are not used much. And this is it. Thank you. We cannot hear you, Til. You're on mute. Yeah, sorry, I was muted. Thank you very much for your talk. Really, really interesting stuff. And for me, really interesting to see how you manage that in the back end. I think a lot of people have the same problems. And of course, we have some questions. And the first one is, when you reformatted the code, what happened to the open pull request? Did they conflict? No, the pull request has to be provided in a formatted state already. And if the contributor has run the Maven build at least once before sending the pull request, they will get the right formatting. So when you build with Maven, it automatically happens, the formatted runs. So at that point, you just have to submit the pull request and it's done. If you don't build, so the thing is, when I see a pull request which is miss formatted, I already know that the contributor didn't run the Maven build. And they didn't try to run the test and so on. And that's bad. OK. There's another question. I think that's an easy one. Can you give an idea about the percentage of your time you spent on GeoTools, GeoWebCache, code review in the past and now? And another question came up. Do the tools you use automatically apply formatting to code in pull request? Or do you ask the contributor to format the code before sending the pull request? When you build with Maven, it happens locally. The formatting happens automatically. And then when you send the pull request, it's already in the right formatting. We don't actually have a way to change the code in the pull request in a dynamic way. It has to come in already formatted. But as I said, if you build with Maven before sending a pull request, it's already formatting as it should be. So it's really easy. One just has to build with Maven, which should be done anyways to run the integration tests. OK. So and we have the last question. How does this approach fit into the agile development environment where uniform code approach is not necessary present? So we have a specific situation for open source projects. But when I talk to customers and they apply agile and Scrum and so on, they typically have a sonar running on the code base and sonar flags, all sorts of violations. And they typically then allocate every few sprints a story to reduce the code depth as reported by sonar. So that's how they typically do it. That said, most of the people I talk to in interviews and the customers, they also have some sort of either formatting check with maybe check style or some automated formatting tool. OK. Thank you very much, Andrea. A lot of information for all our brains. And thank you very much again for your talk. And yeah, we are directly on with the next talk here. And luckily, the presenter appeared. Hi, Matteo. Hi, Tia. How are you? Yeah, I'm fine. I'm happy that you're here. Yeah, OK. Maybe you share your screen. Yeah, sure. Let me just.
|
Working in large open source projects, can be challenging, especially trying to keep everyone on the same page, and generating code that has enough similarities to allow shared maintenance. The advent of platforms like GitHub also made it easier for one time contributors to participate, generating in the process a fair amoumt of “review stress” in the project maintainers. The presentations covers automated QA tools as a way to make code more uniform, avoid introduction of some types of technical debt, and reduce review efforts on pull requests, while also raising the level of the review.
|
10.5446/57423 (DOI)
|
I'll just mention that there's a little bit of link snafu, my bad, but the speaker should be here shortly and then we'll start the talk. Looks like JP has joined. Yes. Hello. Hello. Hey, Don. So next up is JP Miller, Associate Director, Center for Geospatial Solutions, the Lincoln Institute of Land and Policy. And if you just want to share your slides, I'll bring those up. Here we go. Great. Take it away. All right. Thank you very much. So hello and thank you for welcoming me here today. I will be presenting by myself today, my colleague Jeff Allenby and his wife, welcome the baby girl to the world this week. And so I am going to be discussing how to address the last mile delivery challenge in environmental data. I wanted to briefly touch on where I work, which is the Center for Geospatial Solutions. And at our core, what we want to do is help our partners ask the right questions about where in the landscape should I be working and what should I be doing there. And we want to make sure they have access to the right data and the right analyses to be able to drive actionable insights. Ultimately, we view success as having technology fade into the background as much as possible for our partners. I'm going to spare the phosphor G crowd fully defining GIS. But I think it's important to note that technology and GIS at its core is a suite of different tools that allow you to organize, understand, and communicate information about the world around you. Everything has an aspect of place. This is a really wonderful quote from the American philosopher Edward Casey that I think really illustrates that point. And he says, to be at all, to exist in any way is to be somewhere. And to be somewhere is to be in some kind of place. Places as requisite as the air we breathe, the ground on which we stand, the bodies we have. We're surrounded by places. We walk over and through them. We live in places, relate to others in them, die in them. Nothing we do is on places, really. And if you spend time outside, you know that every place isn't the same. At CGS, we want our partners to better understand the landscape around them. It's not really a replacement for just that gut feeling and knowledge after working decades in the field. But what we're trying to do is help provide an understanding of what you haven't seen or where you haven't been yet. So you can really evaluate everything across the board. By improving this understanding, we hope to accelerate progress in several key areas that threaten the quality of life for all people. And some of those areas are resilient climates, thriving habitats, clean and abundant water, food security, and decision making and resources. But to circle back to the point of this presentation, I want to first kind of define what we mean by last mile of delivery. We're all becoming increasingly familiar with the complexity of modern supply chains, especially during the pandemic. If you tried to buy furniture, cars, or computer chips, you likely ran into problems. And so here we can see a highly simplified supply chain for a commodity that many of us have probably consumed today, which is coffee. The raw material, coffee beans, are grown on farms around the world, from Vietnam to Ethiopia and Ecuador. And they're harvested and transported to facilities where the manufacturing step occurs. This includes roasting, packaging. 
If you like bad coffee and pre-grinding, packaged coffee is then sent to distribution centers where it can be shipped to a variety of places before it reaches their consumer. It could be in a coffee store, like Starbucks, could be in your grocery store, or it could end up on your porch from online delivery. This last step of the process is referred to as the last mile. It's typically the key to customer satisfaction and the most expensive and time consuming part of the logistics process. Consequently, a great deal of time and attention has been directed at improving this step of the supply chain. We would argue that the last mile deserves similar attention for environmental data. In fact, we see a lot of similarities between the challenges the last mile delivery poses to supply chains and the difficulty of obtaining actionable insights from environmental data. As we move along this supply chain process, we basically have four steps for the environmental data supply chain. First is the world. It exists and has a variety of environmental phenomena that occur. Data is the raw and unprocessed facts that we capture on these phenomena, according to some agreed upon standard. We have information, which is data that's been processed and aggregated and organized into a more human friendly format. Commonly it's shown as data visualizations, reports, or dashboards. These are all common ways to present information. But lastly is insight. This is gained by analyzing data and information in order to understand the context of a particular situation and draw conclusions. And those conclusions lead to action. And that is the last mile of environmental data. In our example here, we can see how you go from satellite sensors recording data to issuing a hurricane evacuation order. And we begin with warm waters from the Sahara, creating a cluster of thunderstorms off the West African coast. In the U.S., the National Oceanic Atmospheric Administration, NOAA, has satellites that measure infrared and visible radiation from the atmosphere and Earth's surface in real time. They detect initial measurements about wind at various levels in the atmosphere, sea surface temperatures, and cloud properties. This imagery is processed and used to determine where a storm is located, knowing what direction it's moving, and estimating its strength. Based on that information, the National Weather Service advises local jurisdictions to issue evacuation orders. Now I picked this as an example because hurricane track forecasts are really one of the great science successes of the past few decades. In 1990, the average three-day forecast was off by around 500 kilometers, or 300 miles. Today, we average about 13 kilometers, about an eight-mile difference in predicted versus actual rainfall location. A five-day track forecast today is as accurate as a three-day one was in 2001, just 19 years ago, or 20 years ago. This is tremendous progress. However, we still see devastation in local communities and have difficulty making evacuation orders and executing them, despite our immense scientific progress and prediction. We still cannot evacuate major cities properly, and when we do evacuate, we do not have a adequate emergency shelter facilities for those who flee or cannot leave. We know that when communities and individuals try to recover, our aid is not equitable. A recent study found that on average, white families gain net wealth following a disaster, while minority communities lose net wealth, further perpetuating the racial wealth gap. 
And lastly, while we've gotten better predictions, we know that climate change will continue to make these storms more intense. And that the accompanying sea level rise is going to put more communities at risk. Recently, Hurricane Ida came up from a tropical storm to a category four hurricane in three days and with its wind speeds doubling in the last 36 hours. Wormer waters from climate change fuel rapid intensification, and this can catch communities off-guard, especially given our confidence in other aspects of forecasting. Many people caught in these storms say they could ride out a category one, but would have fled a category four or five. Despite our impressive and important progress, we are still not getting enough people out of harm's way and are leaving the most vulnerable communities behind. Quite simply, there's not been enough focus on building capacity for this last mile of transforming data and information into insights and action. There's a lot of great work going on at other levels of data collection and information creation, and I don't want to minimize their importance. Scientists at NASA's Jet Propulsion Lab are using machine learning to develop models to improve the accuracy of detecting these rapid intensification events. We need this, but this information alone is not going to evacuate people effectively, provide shelter for those who cannot flee, or equitably help build back these communities in a way that adapts the rising sea levels. This is where we at the Center for Geospatial Solutions are working, and we encourage others to join. I want to go over some of the really important environmental data sets that are out there, and just to repeat this, data in this context is the raw and unprocessed facts that we're capturing on environmental phenomena. So one of them is satellite imagery. There's a lot of really good satellite imagery out there. One of them that's particularly useful is Sentinel-2, it's a high resolution European satellite that provides multi-spectral imaging and supports monitoring of vegetation, soil and water cover, as well as the observation of inland waterways and coastal areas. It samples 13 different spectral bands, and it's a different wavelength of light, and using different combinations of that, we can learn a lot about the Earth's surface. Another valuable way to collect data is with light art. It's a remote sensing technology that uses light in the form of pulsed lasers that measures ranges to the Earth. These light pulses, combined with other data recorded by airborne systems, generate precise 3D information about the shape of the Earth and its surface characteristics. It can be flown from airplanes or drones. On the left, we can see a cross-section of a light art point cloud superimposed on the corresponding landscape. Return ranges from the light art can be classified and helped distinguish between things like trees and buildings and the Earth beneath it. On the right, we can see this from a kind of a three-dimensional perspective. I also want to talk about a great scientific open-source project using our program in the main language, R-OpenSci. R-OpenSci is a great resource for accessing scientific data sources directly. The project was founded in 2011 and provides a rich ecosystem of pure UDR packages and tools that we can use to access data. There's hundreds of packages and datasets available. These are a few of my favorite. R-Noah provides access to no datasets like climate, CIS, historical observation, tides. 
The rebird package gives you access to the eBird database for bird observations. nasapower accesses NASA meteorology, surface solar energy, and climatology data. The USGS's dataRetrieval package provides access to USGS hydrologic time series data. MODISTools gives you an API to access MODIS's high frequency, low-resolution time series data. osmdata gives you access to OpenStreetMap's rich crowdsourced data. GSODR provides global weather data, and pathviewr gives you access to work with animal movement data. There's a ton of different options for data out there. Now I want to move on to some of the information sources. I want to again mention that in our context, information means data that has been processed, aggregated, and organized into a more human-friendly format. Turning that raw data into information. Science generates a lot of data, but collating and transforming that data into action can be really difficult. When you add to that the sheer volume of data available, things get even more difficult, especially if all that data isn't available to everybody who might need it. AI for Earth takes a step toward rectifying that by offering services to organizations that may not be able to access them otherwise, and by centralizing data from government research agencies around the globe in the Azure cloud. Microsoft's Planetary Computer gives computing resources to scientists working on environmental issues. This powerful technology can help us access raw data and transform it into valuable information. On the right we can see an example of taking some raw data and transforming it into land cover for the whole U.S. at a higher resolution and frequency. There's a bunch of great data sets that are hosted on here. A couple of the ones that I wanted to cover are the Daymet collection, which is gridded estimates of weather parameters for North America, Hawaii, and Puerto Rico, with daily, monthly, and annual summaries. The GOES-R cloud and moisture imagery product provides 16 reflective and emissive bands at high temporal cadence over the Western Hemisphere. This is the data that's really being used for that hurricane prediction that I talked about earlier. The JRC Global Surface Water dataset contains global surface water distribution and dynamics from 1984 to 2020. The USGS's 3DEP program provides high quality topographic data for a wide range of three dimensional representations across the U.S., both natural and constructed features. I also want to highlight some other important individual contributors out there. One of them is OpenET. ET refers to evapotranspiration, which is an important part of the water cycle that's difficult to model. OpenET uses publicly available data from multiple satellites and weather stations to bring together an ensemble of well-established methods to calculate evapotranspiration on a single platform. Reliable, trusted, and widely available ET data at the field scale can be used for a lot of different purposes. You can expand irrigation practices that maximize the crop per drop and reduce costs for fertilizer and water. You can develop more accurate water budgets and innovative management programs that ensure adequate supplies of water for agriculture, people, and ecosystems. It also supports trading programs that can protect the financial viability of farms during droughts while ensuring the water is also available for other beneficial uses. Another great information source is the Map of Biodiversity Importance, or MoBI.
That provides a portfolio of maps that identify areas critical to sustaining the United States' rich biodiversity. The project was a collaboration between The Nature Conservancy and Microsoft's AI for Earth program that allowed NatureServe to create a comprehensive data set of models for over 2,200 at-risk species in the contiguous U.S. And the brighter colors here indicate where land and water protection will most benefit the least protected yet most threatened biodiversity. Another one is the Forest Service Big Map program. The U.S. Forest Service conducts the Forest Inventory and Analysis program on the national forests around the U.S. And almost a decade ago, they took a spectro-temporal approach using moderate spatial resolution imagery from MODIS to produce a continent-wide raster data set of numerous forest attributes. Today, with ready access to imagery from a variety of sensors, as well as powerful cloud computing environments, the Forest Service is shifting its focus to do the same thing, but with dense time series of Landsat imagery. The Big Data Mapping and Analytics Platform, aka Big Map, uses Amazon Web Services, Jupyter Notebooks, and GIS to better show changes to vegetation over time, whether the vegetation is greening up or dying off in some sense. And it helps better detect different types of forests, their tree composition, and the overall forest structure. The last one I wanted to highlight here is the Nature Conservancy's Resilient Land Mapping Tool. Resilient sites are really areas of land with high microclimatic diversity and low levels of human modification. And they provide species with connected, diverse climatic conditions that they're going to need to persist and adapt to changing regional climates. A site's resilience score estimates its capacity to maintain species diversity and ecological function as the climate changes. This is really a platform approach. And the nice thing is you can do pretty much everything you need from a website. You still need some specialized knowledge, but this is really getting closer to making last mile delivery possible without specialized software. But I want to talk about last mile delivery and how we're working to address some of the challenges there of being able to turn that information into actionable insights and actual action. So just to refresh on that environmental data supply chain: we have the world, where environmental phenomena occur; raw data that detect those phenomena and are collected to some technical standard; that raw data transformed into information that's more human accessible, which might be through data visualizations; and then lastly that information is used to actually take action and understand what's going on. And so we're talking about that last part of the process. And so one of the challenges that we're working to address is what I would call data fluency. And what data fluency means is really data awareness and access. And as simple as that might seem, the vast majority of organizations don't have the data they need to be able to tackle their most pressing challenges. In some cases that data might already exist and they just don't know about it. There's the classic saying, if a tree falls in a forest, does anyone hear? If you put out a data set and nobody knows about it, is it really going to create actionable insight for anyone?
I've already covered a lot of just a couple of the amazing free and open data sets that are out there. And it's difficult to keep up with the universe of environmental data. So we use our knowledge and work with partners to make them aware of what data and information is out there. In other cases, the issue might just be access to the data and information. The access barrier can be technological or resources or knowledge based. And we work to overcome each of these potential barriers with our partners. And one area that we're looking at is connecting GIS students with small land trusts who simply lack the resources and ability to use GIS for some of their most basic needs. We're also working to develop a new program called Adopt a Land Trust, which will be a platform to meet 80% of the common needs for land trusts to help them meet their 30 by 30 conservation goals. Another challenge is just analysis capabilities internally. We want to make sure that partners can do the analysis correctly. You can't just give a 16 year old a Ferrari and expect everything to work out. Transforming raw data into valuable insights is a difficult task. Our partners can rely on us to properly perform complex analysis, such as using drones to collect high resolution multi-spectral data, classifying it using machine learning algorithms, and translate that to them. We also recently created the most accurate and up-to-date map of the Colorado River Basin in existence with collaboration with the Babbot Center for Land and Water Policy, who's also at the Lincoln Institute. The Colorado River is one of the world's most geographically, historically, politically, culturally complex waterways. As a result, creating an accurate map of the basin, the vast area of land that drains to the river and its tributaries, is not a simple undertaking. People frequently pointed out the flaws in available maps and suggested that addressing them could contribute to more effective water management. No one seemed of the capacity to fix these problematic maps. We decided to embark on a mapping project of our own. The final product is a physical and political map of the entire Colorado River Basin. It includes the locations of 30 federally recognized tribal nations, man-made features like dams, reservoirs, trans-basin diversions, and canals, federally protected areas, and natural waterways with indication of year-rounder, intermittent stream flow. We're making the map freely available to hope that it will become a widely used resource, both within the basin and beyond. But we're not just putting this data out there. We're also working with the numerous local communities in the American Southwest to see how they can better manage their water resources and reduce water consumption to the benefit of the entire Colorado River Basin. Another challenge is this big data management. There's a lot of difficulty in working with big data, and as much as people don't want to admit it, Excel has limits. Additionally, more data isn't automatically better. It can be difficult to ascertain what's useful information in these datasets. Another way to put that is that the noise, which is the random unwanted variation and fluctuation, can be amplified, which makes it more difficult to detect the signal, which is the meaningful information. 
As Nate Silver put it, one of the pervasive risks that we face in the information age is that even if the amount of knowledge in the world is increasing, the gap between what we know and what we think we know may be widening. For 95% of environmental organizations, it's beyond their internal capacity to handle big data and pull out valuable nuggets of insight. We're also working to streamline workflows. To help with these issues, we're creating models and workflows that incorporate big data and make it much more accessible. We also work with partners to conduct organizational surveys to identify how data and information currently moves throughout the organization, and then collaboratively work with them to create a data model that really suits their needs and help them migrate to it. We help them with each step of that process. At our core, what we're trying to do is turn data into actionable information and empower our partners to be able to make more informed decisions about where to work and what to do. We typically see organizations fall into three different buckets. One would be dipping your toes in. Maybe you're doing some online maps and apps. You're using a little bit of the available information out there. There are also people we'd say are more up to their waist. They're building things. They have some internal capacity. They maybe use some consultants. Then there's those of us who are diving in deep, using AI, machine learning, anonymized mobility data and Earth observations. People at this conference are the people who are diving in deep. You're creating products, software, and data. You have to design for all three of these categories if you want to be successful. Just because you're an expert in deep doesn't mean that everybody else is. Maybe around 10% of organizations are actually in deep in the environmental arena, and maybe 5% of the design effort is actually serving 60% of the organizations. We need to design for accessibility with all three. That's something that we wanted to cover here today. At this time, we'll take any of your questions. Awesome. Thank you, Dippy. Any questions from the audience? Very cool. Thank you. Thanks again for the presentation. Have a great rest of your conference and we'll announce the next speaker in a few minutes. Okay. Thanks.
|
Partners throughout the environmental world often struggle with integrating the right data, information, and analysis tools into their workflows. This happens for a variety of reasons, but often can be reduced to the fact that they are under-resourced to make use of new opportunities. There are increasingly sophisticated tools and datasets that are being made available, however the uptake of these capabilities, especially to inform local-scale decision making, is lagging within the environmental sector. To overcome this challenge, there is a need for partners to play an integrating role in the environmental world; focusing on ensuring local partners are aware of the emerging resources that now exist to address their challenges, have access to the software and technology needed to seamlessly incorporate it in their workflows, and understand their unique challenges without relying on a "one size fits all" approach. This session will focus on lessons learned from the work of the Center for Geospatial Analysis working with partners of all sizes to help identify pathways that deliver real improvements to decision making by integrating data and technology. Additionally, the session will highlight remaining large scale "questions" that deserve additional focus from the FOSS4G community about how new efforts can be focused on solving these challenges.
|
10.5446/57424 (DOI)
|
So I will welcome them to the stream. Hello, how are you? Nice to see you connected. Are you going to share your screen, Martha? OK, great. Martha is a machine learning engineer at Development Seed. Caleb is a data scientist on Microsoft's AI for Good team. Hi, thanks so much for having us, FOSS4G. Again, my name is Martha, and I'm also presenting here this morning with Caleb about land use land cover mapping. So before we dive in, a high level overview of why land cover mapping matters. Land cover maps are essential for conservation, climate research, and for environmental planning. There are so many organizations and researchers doing really impactful work with land use land cover mapping. For example, there's a nonprofit called American Forests; one part of their work is to understand how trees are distributed across urban areas in the United States and assign each urban area a tree equity score. Because unfortunately, trees are not evenly distributed throughout urban areas. And sometimes there will be a higher concentration of trees in wealthier areas and a lower concentration of trees in areas that are not so wealthy. And this disparity between tree coverage has huge implications for the residents' well-being and happiness as well as the impacts of climate change and urban heat. And the first step to understanding where trees are is to have a really accurate land use land cover map. Additionally, it's really important to have land use land cover maps on demand and as quickly as possible. Because sometimes it can take years to create these really high resolution land cover maps, especially if field work is involved. But a lot of these applications and use cases need land cover maps as soon as possible, and conditions on the ground are changing. So being able to update is really important, especially with the availability of high resolution imagery at such a high temporal frequency. So we would like to introduce a mapping tool called PEARL. We have this video of PEARL playing in the background as I talk. And what we're seeing here in a few seconds is live inference running with a nine-class model over an area in Boulder, Colorado, which I'm from. And we see here the tiles are starting to fill in. I didn't edit this video, so this is real time. This area is about, I think, six kilometers. And we see that we're running inference in just a few seconds, which is really exciting. It just kind of nicely fills in, and the model is doing a reasonable job. But this initial inference is just the starting point. We'll discuss this later in more detail. But you can also retrain your models in PEARL. So if you want to make edits, you can, and then retrain the model and inference will run again, as we see in this next video. So for example, users can also add new classes with PEARL. We have two starter models available in PEARL, a four-class and a nine-class model, which we'll talk about in more detail later on in this presentation. But we understand that for a lot of land use land cover mapping, the classes that matter are really specific to different use cases, and we can't capture all of those. So we want users to have the flexibility to capture classes that are important to them and their applications. So we see here in this example, we can retrain a model and add a new class to capture more of the sand on the beach. And you can add a new class by using the polygon tool or drawing freehand within the UI. And you can correct predictions the model got incorrect or add new classes.
We're just adding more impervious surface for the roads and the buildings here. Then you can toggle retrain and new inferences pop up. And you can see that there's been a little bit of improvement, especially along the shoreline. We're now able to capture some of the sand. So at a high level, what can users do with PEARL? PEARL allows anyone to harness the power of machine learning to create a land cover map in the browser. To create this map, you don't need to be an expert in machine learning, nor do you need to be an expert in land use land cover mapping. You can just go and make the map. And a lot of these tricky decisions and architectures have been abstracted away from the user. But there's still transparency in all the choices we made, especially with model metadata cards, which we'll talk about later on. Additionally, sometimes it's really challenging to access imagery. So within the browser, PEARL gives users access to a cloud-free NAIP mosaic, which is high-resolution 4-band imagery (roughly one meter). One limitation of NAIP is that it's only available in the United States. The 4 bands are red, green, blue, and NIR. And NAIP is a product of the US Department of Agriculture. For the rest of this presentation, we'll go through a little bit about the back end infrastructure of PEARL, the models that are implemented in PEARL today, inference and retraining, how users can validate and QA their maps, and future work we have planned for PEARL. So diving into the infrastructure, this is how the back end and the front end interact with each other. In this diagram, the client pool represents the UI, which users see. The REST API load balancer allows the app to appropriately scale based on the number of users that are present. We also have an auto-scaling group for the GPUs, because inference is so fast, because it's running on a GPU behind the scenes, and it can scale, with some limitations. We don't have infinite GPUs, but it scales based on the number of users that are accessing the site at a given time. We use web sockets to help communicate between the front end and the back end. And we're also persisting model inferences from the user, as well as model checkpoints, in a database, so users can map and then leave PEARL and come back in a few minutes or a few hours and pull up their map exactly as it was before without having to redo previous work, which is really exciting. Diving a little bit more into how our imagery is served, we use an open source package called TiTiler to create and serve a cloud-free, four-band NAIP mosaic. For the models available in PEARL, right now we have two models available in the tool, but more are being developed and will be released in the next month or two. The first model we have available is a four-class model that was trained on the east coast of the United States. And then we also have a nine-class model that was also trained over the east coast of the United States. And we have information about these models available in the UI, indicating their label sources, a performance metric for all the classes (the F1 score), the years the imagery was from, as well as the imagery resolution. And this image is static, but in the UI, users can hover over each of these bars to see per-class F1 performance over the holdout test set, as well as the class distribution. Diving a little bit more into the model training data, it's really tricky to get high-quality land use land cover training data.
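To make the imagery-serving piece a little more concrete: a TiTiler-backed mosaic is normally consumed as plain XYZ tile requests over HTTP. Here is a minimal sketch of a client doing that in Python; the endpoint URL, path template and tile coordinates are placeholders, not PEARL's actual API.

```python
# Minimal sketch of pulling tiles from a TiTiler-style mosaic endpoint.
# The host, path and tile coordinates below are hypothetical placeholders.
import requests

TILE_URL = "https://example-tiler.org/mosaic/tiles/{z}/{x}/{y}.png"  # assumed endpoint

def fetch_tile(z: int, x: int, y: int) -> bytes:
    """Fetch a single XYZ tile as raw bytes."""
    resp = requests.get(TILE_URL.format(z=z, x=x, y=y), timeout=30)
    resp.raise_for_status()
    return resp.content

if __name__ == "__main__":
    tile = fetch_tile(z=12, x=851, y=1550)  # arbitrary tile over the eastern US
    with open("tile_12_851_1550.png", "wb") as f:
        f.write(tile)
    print(f"Downloaded {len(tile)} bytes")
```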
On the training data side, we're really excited to benefit from the work the University of Vermont Spatial Analysis Lab, the Chesapeake Conservancy, as well as the US Forest Service have done curating some of this really high-quality land cover data. They work with LiDAR as well as the NAIP imagery, and that's been really beneficial for us to train our models with. For the starter models, we have four classes and nine classes. And both of those were derived from the NLCD label class data set, which is, I think, 13 to 17 classes; it depends on what region you're working with. And as we'll notice, the four-class model is an aggregation of the classes available in the nine-class model. So it just depends on what's best for the user. Sometimes if the user has really specific class requirements, it might be easier for them to run that initial inference with the four-class model and do retraining and add classes in a way that works with their use case better. In terms of model architectures, we've experimented with a few different ones, including FCNs, U-Nets, and DeepLabV3. There are definitely pros and cons to each of these. The models we have today in the tool are FCNs, but future models are going to be more DeepLabV3-based. And our modeling was powered by open source tooling, including PyTorch, SMP (segmentation_models_pytorch), and a package Caleb and his colleagues have worked on called TorchGeo. Some of the modeling challenges that we faced are regional generalizability. So how well can this model that's trained on the East Coast do if we want to run inference on the West Coast or in the Southeast United States? We certainly can run inference, but there's definitely a little bit of a drop in performance. But that's where retraining comes in and can hopefully help address that. In the future, we hope to provide region-specific models to cover the whole US well. Another big issue we faced is class imbalance. This is an example of a seven-class model that's currently in progress. And we see here that trees, as well as grass and shrub, are the most prevalent classes, while bare soil and water make up only about 1% of the data set. In some areas, bare soil makes up less than 1% of the data set; in other areas, water is a fairly common class. It just kind of depends on what geography we're working in. But to try to address this, we have used focal loss during training, which has helped. And we're also exploring other data augmentation techniques to address this and get better performance across all classes despite the imbalance. So again, we're really excited about running inference with PEARL because it runs on demand in the browser. We see here again, this is just a little bit sped up. And all the infrastructure that PEARL is running on is backed by Azure and Microsoft's Planetary Computer. I'll hand it over to Caleb for him to discuss retraining. Yeah, thanks Martha. So a key feature of PEARL is that it allows users to interactively fine tune the model that sits behind this infrastructure that Martha was talking about. So after you run inference with your model, you can interactively look at the predictions and find where the model's making mistakes and then add training points through points or polygons. You can even upload your own GeoJSON file that has the label masks in it. And those points are then combined with the imagery to fine tune the last layer of the underlying convolutional neural network that's making the predictions. And this is an iterative process.
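Before going further into retraining, here is a hedged sketch of the kind of training setup just described: a segmentation_models_pytorch network taking 4-band chips, with a simple multiclass focal loss to soften the class imbalance. The encoder, class count, learning rate and the random stand-in data are assumptions, not PEARL's actual configuration.

```python
# Sketch of a land cover segmentation training step (4-band NAIP-like chips in,
# N classes out). Hyperparameters and data here are illustrative assumptions.
import torch
import torch.nn.functional as F
import segmentation_models_pytorch as smp

NUM_CLASSES = 9
model = smp.Unet(encoder_name="resnet18", in_channels=4, classes=NUM_CLASSES)

def focal_loss(logits, target, gamma=2.0):
    """Simple multiclass focal loss to down-weight easy, over-represented classes."""
    ce = F.cross_entropy(logits, target, reduction="none")  # per-pixel cross entropy
    pt = torch.exp(-ce)                                      # confidence in the true class
    return ((1 - pt) ** gamma * ce).mean()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random data standing in for image chips and labels.
x = torch.rand(2, 4, 256, 256)                    # batch of 4-band chips
y = torch.randint(0, NUM_CLASSES, (2, 256, 256))  # per-pixel class labels
logits = model(x)
loss = focal_loss(logits, y)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```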
So after you've looked at the predictions, corrected the errors, and fine tuned, it might be making new errors; it might have corrected a little bit of some class and still need more corrections on another class. So iterating over this retraining step allows users to get to the performance they want. Another key feature that comes along with retraining is the ability of users to add new classes. So if you add points like Martha was showing earlier for a sand class, then the model can, on the fly, learn to disambiguate sand from, say, barren. The PEARL interface allows users to save these model checkpoints. So after you do a bunch of work in getting the model to the point where you want it, you can save it and come back to it later. And then, of course, there are other nice bells and whistles here like the ability to undo points that you've added or erase work. So this slide just shows what one iteration of this process looks like. So we have the imagery over here on the left-hand side. So this is the high resolution NAIP imagery like Martha was saying. The initial model has some confusion in this imagery between the shadows and the water class. So it's classifying things in shadow as water; that's obviously something that we don't want. We would rather classify that as impervious surface. So after, I think, just two or three iterations of this retraining process that I was describing, you get the output over on the right. I think we have a video here of what this looks like in pseudo real time. Here, the user is correcting a structure class, and then it skips ahead to after the training is done. You can see over in the right-hand side of the screen that the F1 score for the structure class improved. On the technical side, what's going on is the model is generating a per-pixel embedding. And then those embeddings are aggregated with whichever form of supervision you're using, whether points, polygons, and whatnot, to create a table of pixel embeddings and then associated pixel labels. Those tables are used to fit a logistic regression model. This happens very quickly on the CPU, and the weights are copied back to the GPU. So everything happens pretty quickly. And I think handing it back over to Martha now. Yes. As we've already discussed, you can add a new class via retraining. In the lower left-hand corner of the UI, there's this plus sign, and it says add class. Users can really go wild with add class. You're not restricted; you can add more than one class per retraining iteration if you want. And that class will be persisted through subsequent retraining iterations. And this is kind of a static view of adding sand, as we saw. And it's led to some improvement right along the coastline. Also, once users are satisfied after they've retrained, they can share their work by downloading a GeoTIFF of what's rendered on the screen, or they can create a shareable URL. But before they share, users want to feel really confident in how their model is performing and understand it quantitatively as well as qualitatively. So we have an analysis panel in the UI of PEARL that hopefully helps users on their way to that level of understanding. So after each inference, as well as after each retraining, there's a class distribution per AOI. That's kind of the top-level bar graph. This is showing you what percentage of pixels are water, what percentage of pixels are trees, for each of the classes present in the model. And then something that's really important is understanding: is retraining helping the model?
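To make the retraining mechanics just described concrete, along with the held-out F1 scores discussed next, here is a minimal sketch: pair per-pixel embeddings with user labels, hold out roughly 10%, fit a logistic regression on the CPU, report per-class F1, and copy the fitted weights into a 1x1 convolution head. The shapes, class count and split are assumptions, not PEARL's exact implementation.

```python
# Hedged sketch: embeddings + user labels -> logistic regression -> new 1x1 conv head.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

EMBED_DIM, NUM_CLASSES = 64, 4

# Stand-ins for embeddings sampled under the user's points/polygons and their labels.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5000, EMBED_DIM)).astype("float32")
labels = rng.integers(0, NUM_CLASSES, size=5000)

train_X, test_X, train_y, test_y = train_test_split(
    embeddings, labels, test_size=0.1, random_state=0
)

clf = LogisticRegression(max_iter=200)
clf.fit(train_X, train_y)

# Per-class F1 on the held-out 10%, similar in spirit to the analysis panel.
per_class_f1 = f1_score(test_y, clf.predict(test_X), average=None)
print("per-class F1:", np.round(per_class_f1, 3))

# Copy the fitted weights into a 1x1 convolution that replaces the model's last layer,
# so full-image inference on the GPU uses the retrained head.
head = torch.nn.Conv2d(EMBED_DIM, NUM_CLASSES, kernel_size=1)
with torch.no_grad():
    head.weight.copy_(torch.from_numpy(clf.coef_).float().view(NUM_CLASSES, EMBED_DIM, 1, 1))
    head.bias.copy_(torch.from_numpy(clf.intercept_).float())
```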
Is retraining leading to increases in performance? So we also have F1 scores available for every class. And these are calculated like this: when the user is drawing a polygon, using a bounding box or the freehand tool, a random set of points is sampled from that geometry and submitted to the model for retraining. But we hold out about 10% of those points as a holdout test set for retraining. And that's how we're able to calculate the F1 scores to understand how retraining is helping the model, and you can save and compare different checkpoints to see what's going on with those. Additionally, you can use PEARL as a very sophisticated paint brush and refine predictions in the UI. In the same kind of tab, you can use the refine tool, and you can use PEARL like a paintbrush, basically. So for example, with this artifact we see in the upper left-hand corner of the first image, the model got confused because there happened to be a boat going through the water when the NAIP imagery was taken. But really, even though the boat shows up white and is technically an impervious surface, that's not the underlying land; the land cover class should still be water. So sometimes those small changes are too tricky to capture with retraining, or it's just not worth it if you can just circle them, and the refine tool is very straightforward. So sometimes it's faster to just use the refine tool and do some edits to clean up before you export your final map. So going into a little more, we are really excited about PEARL, but we're most excited about how it can help real life users, especially for conservation applications. And we discussed this a little bit in the beginning of the presentation. We're just beginning an initial partnership with American Forests. They've already done a lot of work for identifying and quantifying urban tree canopies and understanding tree inequities. But they're trying to scale this work to make it US wide. So we're hopeful that PEARL can become part of their workflow. We're working with them on a pilot for five cities currently. And if that goes well, we would like to help facilitate expanding their work across the US using PEARL. Additional future work we have planned is integrating PEARL with OSM data. This will allow users to pull data from OSM for faster retraining. We're actively working on developing more geographically diverse regional models. A future change that will be a big addition, and something we're really excited about exploring, is expanding imagery access beyond NAIP to use Sentinel imagery so land use land cover models can have global coverage. Yeah, we really would invite you to try out PEARL. It's freely available and backed by lots of open source software, as we've discussed. And you can access it at landcover.io. Sign up and start making some maps and let us know how it goes. Additionally, PEARL is a huge team effort. So I'd like to give a huge shout out to many of the Development Seed team members that were involved, listed here, as well as our friends at Microsoft and everyone else who's helped us with PEARL. So thank you so much. We're super excited to get to talk with you guys, and we look forward to answering questions. Thank you for the wonderful presentation. It was really interesting and clear. It looks like the software works really nicely. I really liked the brush that lets you just fix these little changes that are sometimes very, very difficult to automate. So that was a fun tool.
Let's check the questions. Oh, we have a lot of questions. The first one is: is PEARL an open source project? Currently, the code is not open source, but it will be soon. OK. Are the models pre-trained or do users train them online with their data? The models are pre-trained, but users can bring their own data during the retraining step if they want. OK. Can anyone access PEARL? It seems like, OK, great. OK, let me check. How does PEARL deal with cloud obstruction? We don't have clouds because our mosaic is magical. It was specifically created using an open source software called TiTiler. And the TiTiler code is available. So if you wanted to make your own mosaic for other work, you certainly should use TiTiler. But our mosaic is magical and combines different years of NAIP imagery to create a beautiful cloud-free mosaic. Yeah, I can just expand on that as well. If anyone's not familiar, NAIP is aerial imagery that's captured by the USDA over different states in the US. They capture it on different cadences. They specifically work hard to make sure there are no clouds in the imagery. So PEARL uses a mosaic of state-by-state NAIP imagery. OK, thank you. Are there any future plans to develop PEARL to work for countries outside the US? Yes, definitely. Once we integrate Sentinel imagery in with PEARL, it will work wherever there's Sentinel imagery. OK, great. And also, any Northwest US trained models? Not right now. OK. Let me check. I think all of them are the same importance. Can you export the inference outputs from PEARL? Yes, you can, actually; let me go back in my slides. You can export them as a GeoTIFF, or you can get a shareable URL. Yeah, it's right here in the UI. Correct me if I'm wrong, Martha. There's a maximum size AOI that you can draw at the beginning of your mapping session that restricts the GeoTIFF that you'll get back to something of reasonable size. You're not going to draw a box over the entire US and expect to get something out there. Yeah, that's correct. We're currently working on getting it to be a little bit bigger. Right now in production, it's about 100 kilometers. But when we release next week, you might even get more area. Great, that was actually one of the questions, so you also answered it. That's a good question. Yeah. And also, does the retrained model get registered and redeployed for others to use? And is there a quality acceptance threshold for that? Currently, you can only access models within your user account and not share them between other users. But every retraining checkpoint you save is persisted, and you can access it. So if you retrain three or four times, you can save a checkpoint at each of those retraining steps and work with it again. Sharing checkpoints between users is something that's also scheduled for the future, but not possible right now. So it's kind of up to an individual user to determine their acceptable quality thresholds. OK, there is one more question. Can PEARL perform temporal predictive analysis? Right now, no, because we're only running over the same mosaic of imagery, but something that will be available in the future is that you could select a specific year of NAIP imagery instead of working with the mosaic. So that's somewhat of a temporal analysis, but NAIP happens about every two years. Is that correct, Caleb? Or are there areas on different cadences?
From the software side, I can jump in and say that it's super cool the way that PEARL runs inference straight off of the TiTiler output. So if you do have a mosaic that's served through TiTiler, then theoretically that can be hooked up at some point in the future. Great. Thank you for all your nice answers. They were all very clear. There are a lot of enthusiastic future users in the chat. They are really looking forward to using it. And there is also a question about the risk of introducing artifacts if you're using a cloud-removal model to produce the mosaics. We're not using a model to get rid of the clouds; I feel like I misspoke. NAIP doesn't really have clouds inherently, because of how Caleb explained it. The mosaic isn't based on a model. It's just based on aggregating different years of this data together. So we have smooth coverage of the entire United States. Great. OK. Thank you very much again. I say goodbye to all of you. It was really nice to have you here. And we will continue with the next talk. So see you. Thanks so much. Thank you. Bye.
|
PEARL, the Planetary Computer Land Cover Mapping Platform, uses state of the art ML and AI technologies to drastically reduce the time required to produce an accurate land cover map. Scientists and mappers get access to pre-trained, high performing starter models, and high resolution imagery (e.g. NAIP) hosted on Microsoft Azure. The land cover mapping tool manages the infrastructure for running inference at scale, visualizes model predictions, shows the model’s per class performance, allows for adding new training classes, and allows users to retrain the model. The tool helps harness the power of expert human mappers and scientists through immediate model evaluation metrics, easy retraining cycles and instant user feedback within the UI.
|
10.5446/57425 (DOI)
|
Hi Daniel, how are you? Hi, good. How are you? Fine, thank you. I'm glad you are here just in time for your presentation. Yes, me too. You are really on the schedule, so it's at minute 30, so I will invite you to start your talk right now if you are ready. Okay. And please. And I should mention that I'm okay being interrupted with questions anytime during the presentation. So I look forward to hearing people's feedback. Can you see my screen? Yes, we can. We can. It's perfect. Okay. So I'm talking about JSON-to-code. It's a new form of compression, and it answers this question: what if our compressed data was just code? So what is it? JSON-to-code compression basically converts JSON-serializable data into simple executable code. Let's break that down a little. JSON serializable, that just means simple data, whether it's sort of like a CSV or just text, that sort of data. It doesn't work on compressing classes or sets. Not yet anyway. So we're going to start with a sort of simple example. You all don't have to read through all of that. Basically, I'm going to try to go through quickly and not physically demo everything, because I want to leave some time for questions. But in the interest of reproducibility, I went to the FOSS4G schedule and I ran this code in the console to basically take the schedule and convert it into JSON. The schedule, the file on the left, that's the schedule in a JSON format. You can see it has time, duration, and then speakers, abstract, track and room. You can start to see that the track and room, even in that first example, start to repeat, and that's going to be important to compression. You'll also see that, of course, time is repeated about every nine lines, and duration, and all those keys of the objects. So what JSON-to-code compression does, and it's really not rocket science or satellite science, it's rather simple: it just converts those words into variables. It looks at what are the words that are being used the most and then it just assigns those to variable names. So you can see that abstract is probably used the most, or it's tied for being used the most, and that's assigned to that first letter, capital A. I'm not sure if you can see my cursor, but now we're looking at the right side. And then capital B is assigned to duration because duration repeats for every object. You'll see later that we're compressing not just the values of objects, but also their keys as well. And that's possible because of a lot of cool features in JavaScript. So this is just a screenshot. So I'm going to go to VS Code and show a little better what I mean there. So this is that same data. And if we scroll to the bottom, everything in that first line is assigning words to variable names. And then the next line, it should be line three, creates the data, the JSON object, or technically a JavaScript object that's JSON serializable, by replacing these variable names with their values, and this is just basic JavaScript. So this is going to create an object; it's going to use H as the key and, based on the context here, H will stand for time and B will stand for duration. And so you can see it's pretty simple. Okay, so let's continue the presentation. Are there any questions so far? Okay. Hearing none, I'll continue. And feel free to ask questions as we go on. I do not have Venueless open. So yeah. Okay. I will let you know, Daniel, if there are questions.
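Before the more advanced example, here is a rough sketch of that basic idea in Python. The real project is a JavaScript library, and this is not its actual implementation: it simply counts repeated keys and string values, assigns one-letter variable names to the tokens where the substitution actually saves characters, and emits JavaScript that rebuilds the original objects. The byte-savings heuristic is a simplification of what is described later in the talk.

```python
import json
from collections import Counter
from string import ascii_uppercase

def compress_to_js(records):
    # Count every key and every string value across all records.
    counts = Counter()
    for rec in records:
        for k, v in rec.items():
            counts[k] += 1
            if isinstance(v, str):
                counts[v] += 1

    # Give one-letter names to frequent tokens, but only when the substitution
    # actually saves characters overall (rough heuristic, see later in the talk).
    names = {}
    letters = iter(ascii_uppercase)
    for token, n in counts.most_common():
        saved = (len(token) - 1) * n - (len(token) + 6)  # minus the cost of `X="token",`
        if saved <= 0:
            continue
        try:
            names[token] = next(letters)
        except StopIteration:
            break

    decls = ",".join(f"{v}={json.dumps(t)}" for t, v in names.items())
    enc = lambda x: names.get(x, json.dumps(x)) if isinstance(x, str) else json.dumps(x)
    objs = ",".join(
        "{" + ",".join(f"[{enc(k)}]:{enc(v)}" for k, v in rec.items()) + "}" for rec in records
    )
    header = f"var {decls};\n" if decls else ""
    return header + f"var data=[{objs}];"

schedule = [{"track": "General", "room": "Room 1", "duration": "30"} for _ in range(3)]
print(compress_to_js(schedule))  # prints JavaScript that recreates the array
```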
So now we're going to do a more advanced example (I reference this in the presentation on GeoRasterLayer for Leaflet), and this is compressing projection definitions. So we're going to jump to our web browser here where I've already run the JSON-to-code compression. In the interest of time, I'm not going to type on my command line too much. So this is the projection information. This information was scraped from a really great resource, epsg.io. They provide a Docker container that has the data that you see on this website. And so we're not, you know, increasing the web traffic on their site, but we run the Docker container and scrape all the coordinate reference system definitions. And so in this case, if you go to 3857, you'll see that they provide a proj4js definition. And so we're going to scrape that information, put it into a CSV. And then after putting it into a CSV, I've totally lost my spot here, we're going to convert it to a JSON. And so you see here the EPSG code and then the proj4js definition string. And the thing that I should note is that I can already start seeing a lot of repetition, right? Like right here, there's a lot of repeating terms. There's no_defs, it's definitely repeated. units=m, that's units equals, and I believe it's meters. So, you know, meters is a very popular unit, so we'll see that repeated hundreds, if not thousands of times in our coordinate reference system definitions. So now after we run JSON-to-code compression (trust me, there will be live demos later), it transforms to that format that we had shown earlier. Now we can see that A, the most popular term, is no_defs. And then B is the WGS84 datum. And then, this one's a surprise to me, there's the GRS80 ellipsoid. So interesting, you know, you can learn interesting things about the data set that you're dealing with by running this sort of analysis as well. It's similar to term frequency analysis when it comes to natural language processing. And then we're going to scroll to the bottom here. It's a really long line, but this is the start of the end. This is, with a little pre-processing step replacing the EPSG prefix with just the number, creating the proj4js definition from those strings that we've assigned variable names to. So you can see D plus L plus NU. And you can see D is the proj term, it's transverse Mercator, and L is WGS84. So you can kind of inspect your code and see if it makes sense as well. This isn't a binary format. You're not seeing a bunch of ones and zeros, but you can really inspect it and see what's going on. In the case of this library, we did add some custom code at the end, just concatenated at the end. And this is really just to optimize things a little further. But you don't have to do this. It's sort of almost a micro optimization. You'll see some other interesting things start appearing in this code too, where you might see repetitions, but they're not represented by variables. So you'll see this k_0, k underscore zero. So we don't see that represented, for a couple of reasons. The first is that, for example, this is a better example, if you have the letter K, it might appear a lot. But if you're just going to represent it with a one-letter variable, you're not really saving any bytes or file size by representing such a small value. So you'll see that will factor in as well.
It computes how much space, how many characters, you are going to save when you convert strings to variables. Okay. So I'll jump back here. Okay. So, comparison to alternatives. So JSON-to-code is in one column, and then I have a column of general alternatives. I know people can always sort of poke holes when people talk about generalities, so feel free to do so in the chat, ask questions. But this is just sort of saying what's the next best alternative. So for deployment, JSON-to-code deployment is super simple. It's just code. There's no real decoding process. Once you've compressed it, you just run it. Compression is decent; I'll show a comparison of file sizes shortly. There are better alternatives for compression. Like GZIP is amazing. ZIP is good too and performs almost as well as GZIP, and sometimes just as well in a lot of cases. And Geobuf is good, and then I haven't worked with FlatGeobuf, but FlatGeobuf, from the documentation I've seen, is amazing as well. So there are better alternatives if your main factor is just compression. But in reality, open source projects, and a lot of closed source projects as well, don't have a lot of resources. So we care about maintenance. Maintenance is almost nil when it comes to JSON-to-code because there are no dependencies to maintain. It doesn't have dependencies. It's just code you can run. Security is high as long as the data is coming from a trusted source. And that's partly because, I mean, sorry, not the code, your data is coming from a trusted source like epsg.io. If someone wanted to insert malicious code, they might be able to do that. So you don't want to compress random data from strangers. But if it is something that you trust and that's sanitized, once you run it, there's full transparency into what's happening. And you can then run Veracode, SonarQube and all that; folks probably know of these things, and commercial companies as well. So because it's code, you can run static or dynamic scans on the output of JSON-to-code, whereas, you know, it's much harder to run dynamic or static scanning on binary data. And we've seen examples of that. You can Google it. I don't have an example with me, but there are cases where people have inserted malicious Bitcoin wallet-stealing code as binary data into GitHub repositories. And then that's executed. And it's just hard for security scanners to scan that. And transparency, hopefully that makes sense now. So, comparison. Oh, sorry, there are two slides here, but I wanted to mention some more nuance. It does a really good job of compressing text when you have a GeoJSON with a lot of properties, like you might with OSM data, or proj4js definition strings. If you really just have lines without much text to go along with them, not a lot of attributes, then it's not going to be the solution for you. It doesn't do the sort of numerical compression and that sort of thing that others excel at. The compression time, it's not fast. But if you don't care about how long it takes to compress something, then it's good for you. The decompression is ridiculously fast because it's just executing code. You don't have to load a zip or a GZIP library, or a Geobuf library, into the browser, which can be awesome at times, but it does take some time to load those libraries. You can just run it.
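If you want to sanity-check those trade-offs on your own data, a quick way is to compare raw JSON bytes against gzip. The numbers quoted in the talk come from the speaker's own files, so this is just a generic check; the JSON-to-code output would be measured the same way, by taking the length of the generated code string.

```python
import gzip
import json

# Toy data standing in for whatever JSON you want to measure.
data = [{"track": "General", "room": "Room 1", "duration": "30"}] * 1000
raw = json.dumps(data).encode("utf-8")

print("raw JSON bytes:", len(raw))
print("gzip bytes:    ", len(gzip.compress(raw)))
```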
Okay, so now I'm going to show in VS Code an example of some compression that I did recently. Let me first show the data we're talking about. Can you see my VS Code? So there's this parks.geojson. It comes from OSM. You can see there are a lot of attributes, properties on the data, different tags. You can start seeing highway; it gets repeated a lot. So this is really text-rich data. And when we run JSON-to-code compression on it, it does fairly well. You can see originally it was 11 megabytes, and JSON-to-code compression converts it to 6.9 megabytes. And then if you wanted to use GZIP or zip compression, that's 1.3 megabytes. So you can see GZIP and zip do a lot better. But just going back to the pros and cons, it'll work for you. So now I'm going to try to get more interactive here. Maybe I'll try to get onto Venueless. But I'm going to drop, if you'd be able to maybe copy this to the chat on Venueless, this is a website that's live now that allows you to compress this data. And so you can see it for yourself. So if we load up this proj4js definitions .json file, you can then download it and get it compressed. Yeah, so for small files, it's still super fast. But if you're doing something like trying to compress OSM information on all the parks in the US, it will take about 30 seconds. So I'd love to do some live demos now with some totally random data people throw at me. So if people would like to post links to any JSON data you have, I'd love to sort of demo it and see if things go horribly wrong. Or if you have any questions, happy to answer questions. Thank you very much, Daniel. That was quite a clear presentation, and I think your code and your examples were visible. We have one first question about the different applications and use cases of this tool. Can it be used whenever we use JSON and GeoJSON? Yes. You'll want to compare it to your alternatives. But anything that's GeoJSON absolutely can be compressed by JSON-to-code. And anything that's JSON can definitely be compressed by JSON-to-code. So my question is, for example, for services providing GeoJSON, can it be used? And can we compress it on the fly? For example, a WFS server or OGC API server answering with JSON, can we change this JSON to code and receive the code on the client side? Wow, you just blew my mind. I did not think of that. But that sounds like it would definitely be possible. It's a super simple algorithm. So I know GeoServer is in Java. But if someone wanted to rewrite it in Java, that should be, well, I shouldn't say super simple, but you would be able to rewrite it in Java. And there's a security question; I haven't fully thought through all of that. So if you're in a super high security environment, you might want to wait for some security testing. But for open data and other sort of trusted internal environments, there's no reason you wouldn't be able to. Yeah, but we could, even without changing GeoServer, for example, have some proxy in between that receives the GeoJSON from the server, compresses it, and delivers the compressed version to the client, something like that. It would be a nice use case. So the transport would be a more compact file from the server to the client. Yeah, I think it would be a possible use case, but maybe we need to do some tweaks on both sides. Are people also asking about the decompression demo?
Yeah, the decompression demo. Yeah, let me share that URL here. So it's, no, sorry, where was it? Yeah, it's here. You can go to www.geniojdu4.com. If anyone wants to try it and let us know how it goes, or provide a URL to your JSON data, we can try it live. So, I don't know if there are more questions from the audience. No more questions. Daniel, thank you very much for your presentation. I think we will have you here for another presentation, right? Yeah. So thank you for being around. And if people have questions, they can use the chat and can use your links. So thank you very much and see you later. We'll prepare the stage for the next speaker. Thank you. So we'll start on time in about four minutes. We already have Krishna here, the next speaker. So in three or four minutes, we'll go live again. So I'll try to use this banner.
|
This talk walks through a new algorithm for compressing JSON data, including GeoJSON. The JSON-to-Code algorithm compresses JSON data by converting it using recursive variable assignment into valid code that generates the JSON data. No prior coding experience required as the talk is at a high-level.
|
10.5446/57426 (DOI)
|
Hello everyone. For our next talk, we'll have Moise Iancunda and Jin Igarashi. Moise is a management information system specialist at the Water and Sanitation Corporation (WASAC). He has a bachelor's degree in surveying and geomatics engineering and a master's degree in environmental economics and natural resources management. Jin is a senior software developer specialized in the GIS field with more than 10 years of work experience in Japan and Eastern Africa. His main interest is how GIS can collaborate with water supply management efficiently. He's also a main contributor to the translation of some QGIS documents, QField and Input to Japanese. Together, they have prepared a presentation. We will stream the video that Moise made, and for the last part, if Moise can connect, he will be answering questions. If not, Jin will answer them. So now we will present. Thank you. Hello, I'm Iancunda Moise. I'm the MIS Specialist at WASAC. So the title of our presentation is the implementation of FOSS4G (QGIS, QField and vector tiles) for rural water supply management in Rwanda. Yeah, this is the general information about Rwanda. Our country is located in the center of Africa. The capital of Rwanda is called Kigali. This is about our organizational structure at WASAC. We have mainly two departments: urban water and sanitation services, and rural water and sanitation services. Mainly we are working in rural water and sanitation services, to support the districts with private operators to manage water supply systems. What we do as rural water and sanitation services: we are there for the operation of rural water and sanitation services. We have the operation and management unit. We have the district water and sanitation support for each district. And we are supporting districts, as I said, and also the development of operation and maintenance manuals for each system, management of MIS reports, mapping and management of water supply systems. And also we had the project called RWASOM, where the sponsor was JICA, and we mapped the pilot district. And also we are waiting for phase 2 of RWASOM, which is about to start very soon. Why do we need to map the water supply systems? For planning for 100% coverage of water access. For the government of Rwanda, we have what is called NST1, with the target to reach 100% water access by 2024. But also for the SDGs, in SDG 6, we target to reach 100% water access by 2030. So it is really important to know the current assets and the location of existing water facilities to support the monitoring and evaluation of the progress towards the achievement of that goal in the subsector, also for informing the decision makers, and also to help us with proper planning and with improving operation and maintenance activities. Here is the implementation timeline. For phase 1, we did the data collection in the pilot district for the RWASOM project by JICA, and then for phase 2, we conducted the mapping for the entire country and JICA supported the introduction of open source software in order to improve the data management. For phase 3, we did the vector tiles implementation, the WebGIS, where we started the operation of vector tiles as open source and open data, since July 2020. For phase 2, there was data collection in the whole country. For the structure of data collection in phase 2, rural water and sanitation services allocated water engineers for each district to map, and JICA also conducted the GIS and GPS training and developed the GIS database.
Also, it took about nine months for the mapping. The tools used were Garmin GPS units, and the software was QGIS, QField and PostGIS. About the tools around QGIS, you have the central PostGIS database. Also you have the inventory reports available, and the EPANET data, and also there are open source Python scripts. This is the roadmap for GIS activity during phase 2. First of all, we did data collection; as I said, there's offline access, but also there's offline data updating. After we do the data cleaning, then we do the data updating and analyzing. This is the output of the mapping. This is the mapping result in the rural area of Rwanda as of December 2021. For example, for water supply systems, after mapping we had 158 as of May 2019, but as of today, we have 1,081, because we are updating data as there are new water supply systems which are constructed. As you can see, the household number is increasing, and also water kiosks, chambers, pumping stations, even the water sources. This is the system structure of phase 2. As I said, we are using QGIS and QField for data collection and also the Garmin GPS for data collection. Also, the WebGIS implementation was postponed because of several issues. This is the structure of our database. We have these tables as you can see, but for more information, you can copy this URL so that you can know more about the structure of our database in PostGIS. Then, data collection by QGIS and QField: we have the Garmin and QField, as I said. So here, we send the data to the district support engineers through Google Drive. Then after updating, they send it back through Google Drive to the MIS specialist, who can update it in the central geodatabase. This is the case of QGIS data in phase 2; here we have the offline data accessing and updating. So you see this URL, you can copy it so that you can know how our workflow is done. As I said, we send it through Google Drive, they update it, then they send it back to the MIS specialist so that it can be updated also in the central geodatabase. Yeah, about data utilization in QField: you can scan this QR code using the smartphone to see how this data can be utilized in QField even by using the smartphone. So QField is compatible with QGIS, it's also open source, and it can work offline and on Android. Automatically creating inventory reports: here also we create inventory reports using a Python script. As you can see, you can copy this URL to see how the workflow is done. So this inventory report helps us to know the status of each water supply system; it even helps us to create the operation manuals. It helps us to know the current situation of each component in the system, and then you can plan how it can be replaced or repaired according to the age and the physical status of each component. And also we are modeling the water distribution system using EPANET. We have the QWater plugin in QGIS. So after collecting all the data, we are able to produce something like a model to see whether the water can reach, to know the velocity and the pressure. So really, with EPANET there is a Python script where we can convert the data directly from PostGIS to EPANET so that you can proceed with some water modeling. For modeling the water distribution system, as I said, this is a Python script that can be used. And also our Python script can link the data from PostGIS to both EPANET and the QGIS plugin. And also QGIS has the QWater plugin.
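To give a feel for what such a PostGIS-to-EPANET script can look like, here is a hedged sketch: it queries junctions and pipes and writes a minimal EPANET .inp file. The connection string, table and column names are placeholders, not WASAC's actual schema, and a real model needs more sections (reservoirs, tanks, options, and so on) than shown here.

```python
# Hedged sketch: export a water network from PostGIS into a minimal EPANET .inp file.
# The DSN, table names and column names are assumptions, not the real WASAC schema.
import psycopg2

conn = psycopg2.connect("dbname=wss_db user=gis password=secret host=localhost")  # placeholder DSN
cur = conn.cursor()

cur.execute("SELECT node_id, elevation, base_demand FROM junction;")  # assumed table
junctions = cur.fetchall()

cur.execute("SELECT pipe_id, start_node, end_node, length_m, diameter_mm FROM pipeline;")  # assumed table
pipes = cur.fetchall()

with open("network.inp", "w") as f:
    f.write("[JUNCTIONS]\n;ID Elev Demand\n")
    for node_id, elev, demand in junctions:
        f.write(f"{node_id} {elev} {demand}\n")

    f.write("\n[PIPES]\n;ID Node1 Node2 Length Diameter Roughness\n")
    for pipe_id, n1, n2, length, diam in pipes:
        f.write(f"{pipe_id} {n1} {n2} {length} {diam} 130\n")  # 130 = assumed Hazen-Williams roughness

    f.write("\n[END]\n")

print(f"Wrote {len(junctions)} junctions and {len(pipes)} pipes to network.inp")
```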
The QWater plugin is quite user friendly for simulating the water network, although it has only limited functions. Yeah, for phase three, we have the WebGIS implementation with vector tiles. What is a vector tile? Vector tiles are one of the most sustainable map distribution methods. They are very light and fast to render maps in the browser, as you can see in this GIF, and it is flexible to change the styling of the map. The operation cost is much cheaper than raster tiles or other formats. Yeah, also, looking at this demo of the WASAC vector tiles in the WebGIS, you can scan this QR code to see how our WebGIS is working. As you can see, we have the water supply system data, and this helps us a lot. When you are in the field, you can know where the pipelines are lying. You can also see the reservoirs and other components. So it's better for the field, and you can even plan when you are in the office without going to the field, because we have almost all the information needed for planning purposes. About the implementation of the open source WebGIS as a structure: this system was developed by the former JICA expert, as I said, together with the MIS specialist, and it has been developed since 2020. Tools: vector tiles, and also GitHub Pages, combining open source and open data on the GitHub platform to publish the system free of charge. As output, all stakeholders can see the rural water supply network of the entire country. The vector tiles are automatically updated every week. The cost of operation and maintenance is free of charge. But of course we had some challenges. First, GIS was not well utilized within the utilities. As administrator, the data is ready, but it still takes time to share and utilize data within utilities. About updating, there was no motivation to keep data updated because no one could use the data freely. Also, they have limited access to GIS data. Also, there was no way to provide some trainings. And also a high level of skills is required for using this GIS, and there was no way to provide licenses. About the WebGIS, you need a kind of internet connection, and there is no budget to buy a server. About the advantages of open source: of course, in our rural department, we are using open source. The GIS administrator can concentrate more on the maintenance of the GIS database and also updating it. There is now motivation to keep data up to date, and it is free of charge for everyone. And also, with the vector tiles, there is no need for access to GIS software; high GIS skills are not required. There is also the possibility to use the vector tiles as open source and open data. As I said, this WebGIS can be accessed through the smartphone. Engineers can enhance their asset management, and also they can plan ahead and get GIS information before constructing or before going to the field. Meter readers can make meter reading more efficient. Vector tiles can also help us with non-revenue water reduction. Customer service can provide information to clients and improve customer satisfaction. For managers, there is effective future planning as well as briefing and sharing with stakeholders like government. But also the GIS sector can benefit from this GIS; the private companies can collaborate easily. There are various possibilities and synergy effects for future development. There is also the map printing and export feature. As you can see, you can copy this URL for further information. But this also helps us because you can print some information or some images that can be included in any document.
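As one possible shape for the weekly tile update (a hedged sketch, since the actual WASAC pipeline builds on the UN Vector Tile Toolkit), you could export layers from PostGIS with ogr2ogr and tile them with tippecanoe; the connection string, table names and paths below are placeholders.

```python
import subprocess

PG = "PG:dbname=wss_db host=localhost user=gis password=secret"  # placeholder connection string
LAYERS = ["pipeline", "water_source", "reservoir"]  # assumed table names, not the real schema

# Export each layer from PostGIS to GeoJSON.
for layer in LAYERS:
    subprocess.run(
        ["ogr2ogr", "-f", "GeoJSON", f"{layer}.geojson", PG, layer],
        check=True,
    )

# Build one MBTiles tileset; -zg lets tippecanoe pick a sensible max zoom.
subprocess.run(
    ["tippecanoe", "-o", "water_supply.mbtiles", "-zg", "--force"]
    + [f"{layer}.geojson" for layer in LAYERS],
    check=True,
)

# For GitHub Pages hosting, the tileset is typically exploded into a directory of
# z/x/y .pbf files (tippecanoe can also write a directory directly with -e).
print("Tileset written to water_supply.mbtiles")
```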
Coming back to the map printing and export feature: when you are going to the field, you can take some images or a map so that you can know exactly the location where you are going to. So really the map printing and export feature is very important for us. About elevation: the feature was implemented in December 2020, with 10 meters in resolution. And also this DEM elevation is owned by WASAC. So this elevation, as you know, this elevation information is very important for planning purposes for water infrastructure. So it helps us: you can trace a line, and you know the starting point, like the water source, and where you are heading to, and after seeing the elevation it is very easy to see whether the water can reach the place, by this elevation information. But also, besides that, the isochrone feature helps us a lot. Like in Rwanda, we have the policy that in the rural area people should have access to a water point within 200 meters; there should be that buffer zone. So with this isochrone feature, you can click on a water point and see the people who are covered in the buffer zone of 200 meters from the water point. And you can see the people who are far from that water point. And from this isochrone feature, you can plan how the other people can access water. So, synchronizing the vector tiles to openAFRICA: openAFRICA is a platform to manage open data in African countries, operated by Code for Africa. So also through this, people can know WASAC, and the data are available as open data. But also there is documentation of the vector tile creation tool. You can scan also this QR code by using the smartphone to see more about the documentation of the vector tile creation tool. Thank you for your attention. For more, if there is any question or suggestion, the floor is yours. Thank you very much. Okay, so now we can move on to a couple of questions with Jin. I don't know if Moise can hear us. Can you hear us? Can you hear us? Yes, can you hear me too? Yes, yes. Right. Yes, yes. Okay, okay. We have a question here. It says, does the 10 meter DEM come from satellite acquisition? Yeah, this DEM is taken by the government of Rwanda; it's not from satellite images. Yes, it was used for mapping the land parcels of Rwanda. Great. Thank you. There is another one. It says, great talk. What was the most challenging issue encountered in this project? The most challenging issue is that some people are not using these GIS data. They are well developed and should be used for planning and management purposes, but in our activity, they are not using these data. And another challenge is the challenge of budget. Yeah, thank you. Yes, budget can be a really big challenge sometimes. It's sad, but it's true. We have another one. Do you think that similar projects could be implemented in neighbouring countries and then merge everything in one big water network? Yeah, sure. It can be done. Great. Those are good prospects. Another one. You mentioned a lot of internet problems. Do you have good internet access to download and access all these softwares? Yeah, we don't have strong internet, but despite this you can download some software and other documents. But our internet is not very good because we still don't have the best fiber optics in our country. So it's not stable, but if you want you can download slowly and eventually it downloads. Yeah, sure. Okay. Thank you very much for answering the questions. And I'm glad that you could be here and we can hear you. Yes. I don't know how you managed, but great. Yeah, yeah.
I'll wait a couple more minutes to see if we have another question. Then we can wrap up the talk. You are receiving some applause in Venueless. Well, thanks a lot for everything, for your time, for your presentation. And we'll see each other around at FOSS4G. Yeah, sure. Thank you very much. Thank you very much. Do you want to add something, Jin? Yeah, there is no comment, actually. But actually for Rwanda's WASAC, this is the second FOSS4G, and last time in Bucharest, we also conducted a similar presentation. So maybe if there is someone here who watched our presentation in Bucharest, this might be interesting for you. We have... Yeah, what we achieved during these three years. Thank you for giving us this other opportunity to present at the FOSS4G conference. Thank you very much. It was a pleasure. We have one last question. Were there any issues using GitHub Pages with vector tiles? I think you can assist. Maybe I can answer. I think generally there is no issue hosting on GitHub Pages, but sometimes there is the limitation of GitHub Pages of one gigabyte. So in that case, maybe we can use Netlify or another hosting service. Thank you. Okay, well, thanks again. See you around. Okay, see you. Thank you. Thank you. Thank you. Bye-bye. Bye-bye. Bye.
|
Water and Sanitation Corporation (WASAC) has been developing a GIS system for rural water management using FOSS4G software together with the Japan International Cooperation Agency (JICA) since 2018. WASAC conducted the data collection using QGIS and QField, and offline data sharing became available all over Rwanda until the JICA project ended in December 2019. The achievements of the project were presented at FOSS4G 2019 Bucharest (see video). Although the JICA project ended, WASAC continues to develop a more advanced vector-tiles-based Web GIS system with the former JICA expert Jin IGARASHI. Our new online site is available here. Now all of our stakeholders can browse water supply data in Rwanda. We established this new web service free of charge by using GitHub Pages because of budget limitations. We also make our vector tiles data available as open data at GitHub and openAFRICA. This new vector tiles project was also developed under the technical support of the United Nations Vector Tile Toolkit. We would like to share how we are using this application to manage water and how we want to develop it in the future.
|
10.5446/57427 (DOI)
|
Okay. Hello, everyone. I'm Jorge Sanz and I'm here today with Thomas Neirynck, and we are going to give you an overview of the Elastic Stack's geospatial capabilities. The agenda for the talk: first we will discuss a little bit what the Elastic Stack is and what products are in it. Then we will see the different options and products that we can use to ingest geospatial data into the database, into Elasticsearch. Then we will see how we can query the database, how we can search, filter and aggregate information from the Elasticsearch cluster. And finally, how we can use those queries to visualize that data at scale using both Kibana and other products that may be working on top of the database. So let's get started. So what's the Elastic Stack? Elastic, the company, develops three families of products for three different verticals: enterprise search, observability, and security. But they are all created on top of the same single stack. The main component of the stack is Elasticsearch, the NoSQL search engine that helps you store and analyze large amounts of data. There is a family of products to ingest data into your database. And finally, Kibana is a web application that helps to visualize, explore, and even manage the full stack. All these components can be deployed on your own premises or you can leave that to us in our own Elastic Cloud. What makes Elasticsearch special is the ability to scale: you can have hundreds of nodes or you can have one single node in your cluster, so it will grow or get smaller depending on your needs. Elastic is also very proud to have a large community, where all the development is happening in the Elastic organization on GitHub. We also have a Slack community where you can chat with other developers and users. And finally, for me, the most interesting one is the Discuss forum, where you can reach out for feedback, help other people troubleshoot their issues, and in general discuss the different components of the stack and the different solutions. So let's talk about ingesting data into Elasticsearch. First, we have the Beats family. That's a family of small pieces of software that only do one thing. They are all created on top of the same library, which is called libbeat. And you have Beats for reading files, for gathering metrics of your system, for inspecting the network data that is traveling through your infrastructure, or to see the Windows events on your servers. All these Beats gather IP addresses, so those IP addresses can be converted into locations, as we will see later. But especially Filebeat and Metricbeat are interesting for the geospatial use case, because Filebeat can read CSVs, so it can convert columns of CSVs into geometries, mostly points, of course. And Metricbeat has an HTTP module that can understand JSON outputs, so you can plug an API or your own systems into Elasticsearch using these Beats. As I mentioned before, libbeat is also a library where you can create specialized Beats, and this is an example of an open source community member that created a Beat that will ingest the USGS earthquake API results into Elasticsearch.
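To make that last idea concrete, here is a rough Python equivalent of what such a community Beat does: poll the public USGS GeoJSON feed and index each earthquake with a geo_point location. This is an illustrative sketch, not the actual Beat; the feed URL, index name and field names are assumptions, and the elasticsearch-py 8.x client is assumed throughout these examples.

```python
import requests
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Create the target index with a geo_point mapping if it does not exist yet
if not es.indices.exists(index="earthquakes"):
    es.indices.create(
        index="earthquakes",
        mappings={"properties": {
            "location": {"type": "geo_point"},
            "magnitude": {"type": "float"},
            "time": {"type": "date"},
        }},
    )

feed = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson"
for feature in requests.get(feed, timeout=30).json()["features"]:
    lon, lat, _depth = feature["geometry"]["coordinates"]
    es.index(index="earthquakes", id=feature["id"], document={
        "location": {"lat": lat, "lon": lon},
        "magnitude": feature["properties"]["mag"],
        "time": feature["properties"]["time"],   # epoch milliseconds
    })
```

Run periodically (cron, a systemd timer, or a loop with a sleep), this does essentially what the Beat automates for you.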
Then we have Logstash, which is more like an ETL process, so it typically runs on its own server. It will connect to different inputs: it will connect to other Beats, it will connect to databases, or, more commonly, it will connect to log files and IoT sensors, and CSVs as well. Then, through filters, it will extract data. It will enrich that data: for example, if it's an IP address, you can convert that data into locations. It will run other processes to manage and convert that data into something that is ready to use in Elasticsearch. But it can also ingest data into other software as well. In Elasticsearch, we have the concept of an ingest pipeline. It's a dedicated process so that, as the data is received by the database, it can be transformed to be stored in a more convenient way. It can even run on its own dedicated nodes. Each pipeline is made up of a set of steps, where each step uses a processor. A processor can convert data between types, can remove fields, can run the GeoIP processor to create new data based on the IP address, and it can even enrich documents based on another index with reference geometries, so this is actually a reverse geocoding process, as you can see in the blog post that we linked here. On this screenshot we have a couple of highlighted steps: one is where we are converting the columns from strings, as they come from the Beat, into numbers, into doubles. And then those two numbers are set as a single entry for a geo_point (see the sketch below). Outside of the Elastic products, we have others, for example ogr2ogr, which is a well-known ETL command line tool that will help you to convert data from many different sources into Elasticsearch; it can read and write Elasticsearch indexes. And we have here a couple of blog posts where we help you to ingest data from OpenStreetMap or your own shapefiles or whatever into Elasticsearch. Then we have a couple more tools that are inside Kibana. One is the CSV upload, which lives inside the Data Visualizer. It's a tool where you can select a local CSV; it will analyze the different columns, it will even automatically detect columns with coordinates, with latitudes and longitudes, and it will give you a prepared pipeline. So you will only need to select the name of the index. You will be able to modify all these settings, of course, but if you decide to go through with it, it will create all the different components, upload the data, and even create an index pattern, which is a Kibana object. Then you can view this data in Discover, a tool in Kibana to explore indices, and if you click on the location geospatial type, you will go through into the Elastic Maps application and you will be able to see the new data directly. Then we have the GeoJSON upload, which is another tool to ingest data. This one lives in Elastic Maps. From here, you can upload a GeoJSON file from your local system, and you only have to decide the name of the index. It will transfer that information from the browser into Elasticsearch, so it will be ready to be used here in Elastic Maps or in any other tool in the Elastic Stack.
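A hedged sketch of an ingest pipeline like the one in the screenshot just described: convert lat/lon strings coming from a Beat into doubles, combine them into a single geo_point field, and optionally resolve a client IP with the geoip processor. The pipeline, index and field names are assumptions, and the target index must map `location` as geo_point (as in the earlier example).

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.ingest.put_pipeline(
    id="csv-geo-pipeline",
    description="lat/lon strings -> geo_point, plus GeoIP enrichment",
    processors=[
        {"convert": {"field": "lat", "type": "double"}},
        {"convert": {"field": "lon", "type": "double"}},
        # geo_point fields accept a "lat,lon" string, so a templated set works
        {"set": {"field": "location", "value": "{{lat}},{{lon}}"}},
        {"geoip": {"field": "client_ip", "target_field": "client_geo",
                   "ignore_missing": True}},
    ],
)

# Any document indexed through the pipeline is transformed on write
es.index(index="service-calls", pipeline="csv-geo-pipeline",
         document={"lat": "40.7128", "lon": "-74.0060", "client_ip": "8.8.8.8"})
```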
OK, so now that we have the data inside Elasticsearch, what kind of processes or queries can we do with this data? First, we need to understand that we have a couple of geospatial data types, geo_point and geo_shape: the first one only for points, and the second one for all types of geometries, including even envelopes and circles. And then we have a more specific data type called shape that is only for planar or Cartesian coordinates. The first thing you want to do with documents that have a geometry is probably to filter them. You can filter them by a bounding box, or by a point and a radius. Maybe you want to draw a polygon and get the data that is inside that polygon, or you can even use an existing geometry from another index. This is what we use in Elastic Maps, where we can draw a shape. This shape will be converted into a filter definition, as you can see here in the screenshot, where this filter has been converted into the JSON notation to be used in all the queries against that data from this moment on in Elastic Maps. The next thing to do is aggregate the data. There are two types of aggregations in Elasticsearch: one is bucket aggregations and the other is metrics. Bucket aggregations are those that put together all the documents that share a common criterion. So for example, you can say: OK, I want to gather together all the data that is closer than one kilometer from this point, then those that are from one to two kilometers in another category, and those from three to five kilometers in another category. You can do the same with the geohash grid. The demo that I'm going to show you is using the geotile grid. This is the Web Mercator tile schema, the Z/X/Y, so you can bucket documents from an index into those tile definitions. Here we are gathering service calls from New York City and we are putting them together in different buckets where we are counting the number of service calls, but we can also compute other metrics. For example, we can say: OK, I want to know which is the most common complaint from those service calls for every bucket. That is added to the result of your query, so you can use it in Elastic Maps for, in this case, choropleth mapping, where you can render different colors based on the most common complaint for every square or every tile (see the sketch below). The other aggregation type is a metric aggregation. This is the aggregation type that generates a new geometry, so there is a derived geometry coming from that aggregation. If you aggregate data, you can get a centroid, or you can get the bounding box that gathers all the data inside, or one interesting one that we recently added to Elasticsearch is the geo_line. That is, if you have a set of vehicles, for example, and you want to create tracks for the different vehicles, you can use the geo_line aggregation. So here we have a demo where we have the last bus position and all the positions in Portland city from a real-time API, where we have this data in Elastic Maps and we are going to render the tracks from each bus. We select the index, we say we want to split the lines by the vehicle ID, and then we immediately have that layer in Elastic Maps. From that, for example, we can say: OK, I want to only render the line for one of the vehicles. So you can click on the vehicle ID and use it as a filter, and you will get only the line for that vehicle.
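The New York City example above boils down to a query like the following: filter the service calls to a bounding box, bucket them into geotile grid cells, and for each cell ask for the top complaint type and a centroid. This is a hedged sketch; the index and field names (nyc-311, location, complaint_type) are assumptions.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="nyc-311",
    size=0,  # aggregations only, no raw hits
    query={"geo_bounding_box": {"location": {
        "top_left": {"lat": 40.92, "lon": -74.27},
        "bottom_right": {"lat": 40.49, "lon": -73.68},
    }}},
    aggs={"grid": {
        "geotile_grid": {"field": "location", "precision": 12},
        "aggs": {
            "top_complaint": {"terms": {"field": "complaint_type", "size": 1}},
            "centroid": {"geo_centroid": {"field": "location"}},
        },
    }},
)

for cell in resp["aggregations"]["grid"]["buckets"]:
    top = cell["top_complaint"]["buckets"]
    print(cell["key"],                      # tile address as "zoom/x/y"
          cell["doc_count"],                # number of service calls in the cell
          top[0]["key"] if top else None,   # most common complaint
          cell["centroid"]["location"])     # lat/lon of the cell centroid
```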
So now that we have the data in Elasticsearch and we know how to query it, let's see how we can visualize it. One thing worth noting is that Elasticsearch will always output your data in a JSON form. That means that you can only get up to 10,000 documents in a single search query, and if you need to do more than that, especially if you are in a real-time scenario where you are getting new data in your index, you need to freeze the search context for a certain time so you have time to retrieve all the documents in a consistent way. There are many open source projects that are able to interact with Elasticsearch, and I wanted to highlight a couple of them here: GeoServer and pygeoapi. Both of them will expose data as OGC services. They understand this JSON output from Elasticsearch, they will query for aggregations or raw data, and they will convert that JSON output into OGC-compliant responses, like encoded images in a WMS service, or GeoJSON in a WFS or an OGC API Features endpoint. Then in Kibana we have Elastic Maps, which is the user interface that we have been seeing throughout the talk. It's a Mapbox GL application, or now we are migrating to MapLibre. It works with both geo_point and geo_shape data types. It performs aggregations, as we've been seeing, and it helps you to style your data based on the values in the indexes. You can draw new features, you can filter, you can measure; you can use it alone, as we've been seeing is the case, but you can also embed that map in dashboards or Canvas in Kibana. Finally, this is also a component for other applications, for example in machine learning: if you see a map, that's an Elastic Maps component. Elastic Maps is actually the geospatial window for the whole stack. Because of that, and because we have a lot of experience in how to render large amounts of data, we wanted to share the three strategies that we use in the application to render raw data. First, we have the single request, where you can only see up to 10K documents, so we need to inform the user that he or she is not seeing the whole content of the index for that viewport. But if you zoom in, you can probably then get the whole picture, because there are fewer than 10K documents in that viewport. Then we have a new layer type, which is called the blended layer, which will automatically switch from showing only the raw documents to aggregating into clusters if that viewport is rendering more than 10K documents. So if we zoom out, we automatically move from individual documents to clusters, and you can see the whole picture without losing any data. And finally, we have the vector tiles. This is an improvement over the first type, because we can render up to 10K documents per tile, and also we can cache those results per tile in the browser. So you can render more than 10K documents, as we can see here as soon as we zoom in, and then you can pan around, the data is reused, and only the new tiles are retrieved.
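The 10,000-hit limit and the "frozen search context" mentioned above correspond to Elasticsearch's point-in-time (PIT) plus search_after pattern. A minimal sketch, assuming an index called nyc-311 and the elasticsearch-py 8.x client:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Freeze a consistent view of the index for two minutes
pit = es.open_point_in_time(index="nyc-311", keep_alive="2m")
search_after = None
fetched = 0

while True:
    page = es.search(
        size=10_000,
        pit={"id": pit["id"], "keep_alive": "2m"},
        sort=[{"_shard_doc": "asc"}],   # stable tiebreaker for deep pagination
        search_after=search_after,
        query={"match_all": {}},
    )
    hits = page["hits"]["hits"]
    if not hits:
        break
    fetched += len(hits)
    search_after = hits[-1]["sort"]     # resume after the last hit of this page

es.close_point_in_time(id=pit["id"])
print(f"retrieved {fetched} documents consistently, well beyond the 10k window")
```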
Okay, so a couple of new features in Elasticsearch and geo. The first one is vector tiles. As we discussed before, Elasticsearch was only able to produce JSON responses. That's not true anymore, because now we have a new endpoint that will produce protocol buffers, so it will be able to return very compact chunks of your geospatial information. You will be able to create filters, aggregations and queries whose results can be transported to your clients super quickly. This will, of course, also replace the current vector tiles implementation that lives inside the Kibana server. The other big feature that we wanted to highlight is runtime fields. Until now, Elasticsearch was a schema-on-write datastore: it means that you had to decide your schema or your mappings when you were creating the indices and when you were writing the data. That's not true anymore, because now you can create new data or derive new computations at query time, when you retrieve the data, so it's much closer to what you can do with a relational database. In the geo world, that means that from geo_shape fields you can now get the centroid, height and width, and you can expect that we will be able to perform way more analyses or functions on geo_shape fields in the near future. So to finish the talk, we wanted to give you a call to action. If you found anything in this talk interesting and you want to give it a try, please download the stack or start a cloud trial; then you can ingest your data using ogr2ogr or the GeoJSON upload to get your data into Elasticsearch, and then you can use Elastic Maps and the rest of the Kibana tools and visualizations to explore and visualize your data. And after that, please share your feedback, questions and issues using the Discuss forum, or if you prefer, you can also drop us a line at our own emails and we will be more than happy to give you a hand. So that's it. Thank you very much. I hope you had a good time today, and now we are more than happy to answer your questions and comments. Thank you very much. Excellent. Thanks, Jorge and Thomas. That was some very interesting and exciting news on the increasing geospatial capabilities in Elasticsearch, so it's great to see. We have a number of questions, so why don't we get going? First question is: which CRS is supported in Elastic? And is it possible to support other CRSs than WGS84 or Mercator? So yes, I can take this one. Elasticsearch's geo field types only support WGS84, so latitude and longitude, and all the distance calculations happen on the spheroid. The display in the maps application is only in the Web Mercator reference. To store different reference systems, you can use a shape field if your data is projected: if it's in meters, for example, you can use the generic shape field in Elasticsearch to store it. Great. Next question: how do you recommend storing data in multiple languages in Elasticsearch? I can take this one. In Elasticsearch there is a concept of analyzers, so that as you store the data, it is analyzed to be indexed. One of the types of analyzers are language-specific ones, so you need to define fields for every language that you have, and then in the mapping or the schema of your index you set up which language is on each field. Next question: with the new license for Elastic, are we allowed to use it within a FOSS-based SaaS, running as an index mechanism within another application? So yes, that is allowed. I can take this one. Go ahead. Go ahead. Okay, I'll do it. As you probably know, the license forbids offering Elasticsearch or Kibana themselves as a service. But it's perfectly fine to use it in your application or your software as a service, because you are not selling or offering Elasticsearch itself at your own price. So it's perfectly fine. Great. What is the difference between Elastic and GeoMesa? I think they are totally different products. GeoMesa, if I understand well, is an engine to process geospatial data or big data, whereas Elasticsearch is a generic NoSQL database. Of course, it can analyze the data, but it's meant for a more generic audience or use cases. And we're talking about Elasticsearch in particular.
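Coming back to the new vector tile endpoint mentioned a moment ago: it is exposed as /{index}/_mvt/{geo_field}/{z}/{x}/{y} and answers with a binary Mapbox Vector Tile (protobuf) instead of JSON. A minimal sketch over plain HTTP; the index name, field name and tile address are assumptions, and the optional decoder is a third-party library.

```python
import requests

ES = "http://localhost:9200"
z, x, y = 12, 1205, 1539   # an arbitrary Web Mercator tile address

resp = requests.get(f"{ES}/nyc-311/_mvt/location/{z}/{x}/{y}", timeout=30)
resp.raise_for_status()
print(resp.headers.get("content-type"), len(resp.content), "bytes")

# Optionally decode the tile client-side to inspect the layers Elasticsearch
# returns (hits, aggs and meta):
# import mapbox_vector_tile
# print(mapbox_vector_tile.decode(resp.content).keys())
```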
Yeah. Next question: what kind of things can we do to reduce the RAM usage of Elasticsearch? Do you want this one, Thomas? Yeah, I can take it. I'm probably not the best person to answer this, so I would suggest you reach out to us offline and we can find you somebody who can help you with that. We can already share some resources, tutorials and blog posts that discuss this issue, so I'll paste them here in the chat to share, but I would suggest you ping us offline and we can point you in the right direction. Next question: for big layers, can you split single queries between different shards? Yeah. When you configure an index, you configure the number of shards that your index is going to be split into, and that depends on the number of nodes that your cluster has. As you know, Elasticsearch can have up to hundreds of nodes. So when you make a request, Elasticsearch is going to take care of retrieving the data from the different shards and is going to combine that together as a result. Actually, there are also other features, like cross-cluster search, where you can expand this concept even to different clusters. So yeah. And another question: a few years ago I had some problems when I needed to count the exact number of features I had in a layer. Is this still a problem? So I guess maybe this points at two parts of our stack. Because the question mentions a layer, I assume it points to the layer in the maps application. When you bring in an index as a layer, the count displayed there is the number of features that are in the index, and that is approximated. That will start to come into play when you have a large index with a lot of features: it might not be the exact number. That is sort of the default behavior of Elasticsearch, and it's the default because the idea is that in Elasticsearch you have a lot of documents. That is not changing in the maps application, so that display is still there. Elasticsearch does have APIs to fetch the exact count; it's just not surfaced in the map. Yeah, I think there's also a setting, for indexes with over 10,000 records, a track_total_hits setting of some sort which you can use to get the exact count back. Yep. And that setting specifically is actually used in the maps application to speed up our queries. We know that we cannot fetch more than 10,000 documents, so we're not tracking counts beyond that when we push some queries, which is a big performance improvement because it allows the search in Elasticsearch to cut off as soon as possible. So that's basically the trade-off that we make there between search performance and showing the exact count. So yeah, Tom, that's exactly right.
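To illustrate the trade-off just discussed: by default a search response reports totals only up to 10,000 (with a "gte" relation), while track_total_hits or the _count API return the exact number. The index name and the printed values are illustrative assumptions.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

approx = es.search(index="nyc-311", size=0)
print(approx["hits"]["total"])     # e.g. {'value': 10000, 'relation': 'gte'}

exact = es.search(index="nyc-311", size=0, track_total_hits=True)
print(exact["hits"]["total"])      # e.g. {'value': 2314567, 'relation': 'eq'}

# Or skip the search entirely and ask the count API
print(es.count(index="nyc-311")["count"])
```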
Cool. Great. Sounds like there have been some considerable efforts around geospatial. Maybe a question from myself. Where I am, we constantly have requirements to analyze our logs and put them through Kibana and so on, and our logs are of WMS requests. So imagine GetMap requests with the query string, the LAYERS parameter, and we want to find out which is the most popular layer that people are asking for. So it's not necessarily an endpoint, but a parameter within a query string. Any thoughts or guidance on how we can plug that into an ELK stack, that type of reporting, assuming we have the Apache logs? I don't know if you want to take this one. I think my recommendation would be to write Logstash filters that will take that out, basically. There are quite a few plugins that will help you parse, for example, query parameters from a request, so it is possible. And the other option is the ingest pipeline, which is something that Jorge mentioned during the presentation. As you index your log, it will lift out the URL and then use a pipeline script to retrieve the parameters that you're interested in. So it is possible, it's being done, and it's usually done with ingest pipelines or Logstash (a sketch of such a pipeline follows below). Excellent. Another question here: what is the main use case for Elasticsearch for geo? It's a pretty generic storage, traditionally focused on point data, but recently we can store polygons and lines as well. So the use cases are not focused on a single scenario. Traditionally Elasticsearch is used for observability and security, so you have plenty of resources and documentation on how to parse logs, as you were mentioning with the WMS requests, but also other types of security-related information. But you can also use it for tracking any IoT data sets or sources. Anything that needs large storage makes a lot of sense with Elasticsearch, especially since lately we have supported the concept of data tiers: let's say you are storing a large amount of data as a stream, but you only want to have the recent data close to your users, and automatically archive older data into cold storage or even frozen storage, while still being able to use it and retrieve data from there, although obviously it takes more time to retrieve. So this is a very cool use case for Elasticsearch, where you can offload your hot nodes from this old data and still maintain the same architecture without having to replicate clusters or perform any other manual archiving. So that's a pretty interesting use case for Elasticsearch. Yeah, I agree with that, I just wanted to add on to it. Just as an example of the IoT use cases: that's actually where we see usage of geo in Elastic outside security and observability, so outside this idea of tracking your cyber infrastructure. It's really IoT, and a really big example would be something like GPS tracks, where each document is an update on some sort of entity that moves through space and time. So you're saying Elastic,
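One hedged way to implement the WMS-log idea above as an ingest pipeline: pull the query string out of the Apache request field, split it into key/value pairs and keep the LAYERS parameter so Kibana can aggregate on it. This is a sketch, not an official recipe; the pipeline id and field names are assumptions, and WMS parameter keys may arrive in different letter cases than shown.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.ingest.put_pipeline(
    id="wms-layers",
    description="extract WMS query parameters from Apache access logs",
    processors=[
        # request looks like "/geoserver/wms?SERVICE=WMS&REQUEST=GetMap&LAYERS=roads&..."
        {"grok": {"field": "request",
                  "patterns": ["%{URIPATH:wms.path}\\?%{GREEDYDATA:wms.query}"],
                  "ignore_failure": True}},
        {"urldecode": {"field": "wms.query", "ignore_missing": True}},
        {"kv": {"field": "wms.query", "field_split": "&", "value_split": "=",
                "target_field": "wms.params", "ignore_missing": True}},
        # normalise the layer value so "Roads" and "roads" aggregate together
        {"lowercase": {"field": "wms.params.LAYERS", "ignore_missing": True}},
    ],
)
```

A terms aggregation (or a Lens bar chart in Kibana) on wms.params.LAYERS then answers the "most popular layer" question directly.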
|
Elasticsearch is a well-known NoSQL database to store and process large amounts of data, including geospatial types like points and polygons. There is a broad ecosystem of tools to help ingest data into Elasticsearch, including some usual geospatial suspects like GDAL. Finally, Kibana is the go-to solution for visualizing and making sense of all the data stored in a cluster of Elasticsearch nodes. In this talk, we will explore two archetypical use cases: on the one hand, archiving and processing information from moving or static events and, on the other, working with large amounts of data associated with a small set of asset locations. Examples of the first scenario would be tracking vehicles or databases of urban events like crime or inspection locations, while the second could be weather data archives or IoT systems. We will see how to ingest and enrich geospatial data into your cluster for these two use cases. Then we will explore some of Elasticsearch's geospatial capabilities, and we will see how Kibana applications like Maps, Lens, Canvas, or Dashboards can help you visualize and understand your data. Finally, we want to take the opportunity to show the community some of the latest developments made by the Kibana Maps team, like our new alerting capabilities or the support for vector tiles in Kibana and Elasticsearch.
|