doi | transcript | abstract |
---|---|---|
10.5446/52520 (DOI)
|
Hi guys, I'm Dong from Shanghai Jiao Tong University. Glad to be here to introduce the Penglai project, which is a verifiable and scalable TEE system on RISC-V. Referring to the definition from Wikipedia, a TEE, or trusted execution environment, is a secure area of a main processor which guarantees code and data loaded inside to be protected with respect to confidentiality and integrity. Overall, it has two major functionalities. First is remote attestation: a client can attest whether a remote node is untampered and its enclave is running legitimate code. Second is the isolated environment, which prevents untrusted software and hardware from accessing an enclave's data. Therefore, the major capability of an enclave is to allow data to be transferred only among attested and trusted nodes. There are already many TEE systems, including Intel SGX and TDX, AMD SEV, ARM TrustZone, and RISC-V systems like Keystone and Penglai. Some cloud vendors have utilized these systems to protect their security-sensitive data. For example, Microsoft proposes confidential computing based on SGX, Amazon proposes Nitro Enclaves for security, and recently Google Cloud adopted AMD SEV to design its secure VM instances. We also have a Confidential Computing Consortium including many companies like Arm, AMD, Huawei and Alibaba Cloud. In this talk, I will introduce the Penglai project, which aims to design an enclave system with verifiable security and scalability. Overall, Penglai includes three layers in its design. At the bottom, it proposes new hardware extensions like the S-mode physical memory protection design, sPMP. In the middle is a security monitor, a lightweight firmware running in RISC-V machine mode. The design of the monitor is formal-verification oriented and builds on the scalable hardware isolation mechanisms. The monitor is responsible for remote attestation, runtime management and isolation. At the top, we have built several secure runtime frameworks which are compatible with existing frameworks; currently, we can support ARM PSA and GlobalPlatform interfaces. This makes it easy for developers to port their secure applications to Penglai. This is the overview of the hardware-software co-design in Penglai. For each CPU core, the monitor utilizes PMP and sPMP for fine-grained isolation. To defend against malicious devices, we adopt the design of IOPMP, which is configured by the monitor to restrict devices' DMA behaviors. Currently, the software design of Penglai is based on the RISC-V privileged spec version 1.10. The software includes two components: the monitor and the enclave applications. The monitor is responsible for sPMP, PMP and IOPMP configurations. PMP is a standard isolation mechanism in the RISC-V privileged spec, while sPMP and IOPMP are two new hardware features proposed by the RISC-V TEE group. The monitor utilizes these mechanisms for isolation, enclave management and other tasks. The enclave applications are for executing application tasks, and can be further classified into host applications and secure enclave applications. The host applications execute security-insensitive tasks in the rich execution environment, the untrusted world, while the secure enclaves execute security-sensitive tasks in the TEE. Penglai provides a set of secure enclaves for secure storage, encryption and other secure functionalities. The design targets both MMU devices and MMU-less devices, and adopts formal methods to enhance the security of our software TCB, the monitor.
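To make the memory-isolation part concrete, here is a minimal, illustrative sketch (not Penglai's actual code) of how an M-mode monitor can program one standard RISC-V PMP entry to cover an enclave's physical memory region. The CSR names come from the RISC-V privileged spec; the region base, size and entry index are hypothetical.

```c
/*
 * Illustrative sketch only: program standard RISC-V PMP entry 0 from M-mode
 * so that a naturally aligned region is accessible with the given permissions.
 */
#include <stdint.h>

#define PMP_R      (1u << 0)
#define PMP_W      (1u << 1)
#define PMP_X      (1u << 2)
#define PMP_NAPOT  (3u << 3)   /* address matching: naturally aligned power-of-two */

static inline void write_pmp_entry0(uintptr_t base, uintptr_t size, uint8_t perm)
{
    /* NAPOT encoding: (base >> 2) with low bits set to encode the region size. */
    uintptr_t napot = (base >> 2) | ((size >> 3) - 1);

    asm volatile ("csrw pmpaddr0, %0" :: "r"(napot));
    /* Note: a real monitor would read-modify-write pmpcfg0, since it also
     * holds the configuration bytes of the other entries. */
    asm volatile ("csrw pmpcfg0, %0" :: "r"((uintptr_t)(perm | PMP_NAPOT)));
}

/* Example: give an enclave a 2 MiB region at 0x80200000 (hypothetical layout). */
void grant_enclave_region(void)
{
    write_pmp_entry0(0x80200000, 2 * 1024 * 1024, PMP_R | PMP_W | PMP_X);
}
```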
Now I will first introduce the overall architecture of Penglai on chips with an MMU. In the untrusted world, the host applications run in user mode, and an OS like Linux runs in supervisor mode. The monitor provides a set of enclave operations which can be invoked by the untrusted software and by the secure enclaves. The host applications invoke the monitor calls through a driver in the OS kernel, while the enclave applications invoke the monitor calls directly. The monitor provides basic functionalities to manage enclaves and the mechanisms to allow communication between host and enclave, and between two enclaves. The isolation between different entities is achieved through both the sPMP and PMP hardware features. In the case of MMU-less chips with only M and U modes, we run the RTOS, untrusted applications, trusted applications and secure services all in U mode. The monitor utilizes PMP to achieve isolation among the untrusted software, trusted applications, and others. Different entities communicate with each other through communication mechanisms provided by the monitor. As the RTOS runs in U mode, we also emulate some privileged instructions in the monitor to avoid significant modifications to the RTOS. In the case of MMU-less chips with M, S and U modes, the RTOS can run in supervisor mode, so the monitor does not need to emulate the privileged instructions used by the RTOS; the other components are similar to the previous case. Penglai is designed to be verifiable. This is because Penglai only relies on hardware to provide basic primitives like physical memory isolation, while the software monitor is responsible for implementing everything else, like enclave management and communication. Therefore, the software monitor is the only software TCB, which is security sensitive, and we decided to use formal methods to guarantee its security. Penglai proposes a formal verification framework, Pangolin, based on Serval. It allows us to formally define the specifications of the monitor's functionalities and verify functional correctness as well as other, higher-level security properties. The system utilizes model checking and symbolic execution to achieve automatic verification. As shown in the figure, Pangolin receives the binary of Penglai, which is the implementation, and its specification as input. The specification includes both the functionalities and other statements that describe security properties. As a result, the framework tells us whether our implementation satisfies the specifications. As formal verification requires significant effort and usually has many limitations, we have made many design choices to make the system easy to verify. Here we list some cases. The first one is a big monitor lock. Although Penglai supports multi-core scenarios, we do not use fine-grained locking in the monitor now. Instead, we adopt a big lock which is locked when a core enters the monitor and unlocked when it leaves. This is because verifying concurrent behaviors is very complex, and the big monitor lock can significantly reduce our verification effort. Besides, previous research has already shown that a big lock will not affect performance in the case of tiny system software. Second, we eliminate or restrict most of the unbounded loops in the monitor, because unbounded loops cause many problems for automatic verification frameworks. Last, we define the interfaces of the monitor carefully to make them verification friendly. One example is that all the pointers are restricted to a specific region, which can significantly reduce our effort to model pointer behaviors.
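As a rough illustration of the big-monitor-lock idea just described, the sketch below shows one global lock taken on monitor entry and released on exit, so the verified code can be reasoned about sequentially. This is not the actual Penglai monitor code; the dispatcher name and the trap-handler shape are assumptions.

```c
/*
 * Minimal sketch of a "big monitor lock": one global spinlock guards the
 * whole M-mode monitor, taken when a hart enters and released when it leaves.
 */
#include <stdint.h>

static volatile uint32_t monitor_lock = 0;

/* Hypothetical dispatcher for enclave-related monitor calls. */
extern long dispatch_enclave_call(long call_id, long arg0, long arg1);

static void lock_acquire(volatile uint32_t *l)
{
    uint32_t old;
    do {    /* RISC-V atomic swap: spin until we swap a 0 for a 1 */
        asm volatile ("amoswap.w.aq %0, %1, (%2)"
                      : "=r"(old) : "r"(1), "r"(l) : "memory");
    } while (old != 0);
}

static void lock_release(volatile uint32_t *l)
{
    asm volatile ("amoswap.w.rl zero, zero, (%0)" :: "r"(l) : "memory");
}

/* Hypothetical top-level handler for ecalls from S/U mode. */
long monitor_trap_handler(long call_id, long arg0, long arg1)
{
    long ret;
    lock_acquire(&monitor_lock);                        /* enter the monitor */
    ret = dispatch_enclave_call(call_id, arg0, arg1);
    lock_release(&monitor_lock);                        /* leave the monitor */
    return ret;
}
```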
Here are our verification results. Currently, we have verified the functionality of the boot process, inter-enclave communication, and the helper functions to manage enclaves. Next, we will further verify the enclave management interfaces, forking, and other modules. Next, I will introduce some secure functionalities provided by Penglai, including memory isolation, interrupt isolation, secure storage, and secure usage of peripherals. Penglai utilizes sPMP and PMP for memory isolation; both designs provide physical memory isolation. Normally, PMP is used to isolate the untrusted host and the enclaves, and to isolate enclaves running supervisor-mode software, while the S-mode PMP is used to isolate user-level enclaves. Besides, combining PMP and sPMP achieves more regions than PMP alone. As PMP is already a well-known, standard isolation mechanism in RISC-V, next I will introduce more about sPMP. sPMP means S-mode PMP. It was originally proposed for IoT devices. One of the motivations is that IoT devices are usually MMU-less, so there is no isolation between S-mode and U-mode. Therefore, it is desirable to enable an S-mode OS to limit the physical addresses accessible by U-mode software. As a result, sPMP was introduced to provide isolation between U-mode and supervisor mode when an MMU is not available. sPMP is a similar mechanism to PMP; the major difference is that sPMP is managed by S-mode software. sPMP can be used to isolate the physical memory used by U-mode applications. As shown in figure B, we can use PMP to provide coarse-grained isolation and further split a single PMP region using sPMP to achieve fine-grained isolation. The address matching of sPMP is the same as PMP. The major difference is that there is no lock bit in sPMP; instead, an S-bit is introduced. When the S-bit is set, the region is for S-mode: memory accesses by U-mode applications will trigger a fault, while S-mode accesses are checked against the read/write/execute permission bits. When the S-bit is clear, the region is assigned to U-mode: S-mode accesses will trigger a fault (as we enable sPMP by default), while U-mode accesses are checked against the permission bits. Besides, the priority and matching logic of sPMP are all similar to the PMP design. The detailed design is in the RISC-V TEE task group's wiki; please refer to that document for more details.
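The S-bit rule just described can be summarized in a small software model. This is only an illustration of the semantics from the talk; sPMP is a proposal, so the field positions and names used here are assumptions, not a ratified encoding.

```c
/*
 * Software model of the sPMP S-bit rule: a matching entry either belongs to
 * S-mode (S-bit set) or to U-mode (S-bit clear); the other mode faults, and
 * the owning mode is checked against the R/W/X permission bits.
 */
#include <stdbool.h>
#include <stdint.h>

#define SPMP_R (1u << 0)
#define SPMP_W (1u << 1)
#define SPMP_X (1u << 2)
#define SPMP_S (1u << 7)   /* assumed position of the S bit */

enum priv { PRIV_U, PRIV_S };
enum acc  { ACC_READ = SPMP_R, ACC_WRITE = SPMP_W, ACC_EXEC = SPMP_X };

/* Returns true if the access is allowed by a matching sPMP entry config. */
bool spmp_allows(uint8_t cfg, enum priv mode, enum acc type)
{
    if (cfg & SPMP_S)                                  /* region for S-mode  */
        return (mode == PRIV_S) && (cfg & type);       /* U-mode faults      */
    else                                               /* region for U-mode  */
        return (mode == PRIV_U) && (cfg & type);       /* S-mode faults      */
}
```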
Next is the interrupt isolation mechanism. The goal of this mechanism is that an interrupt should only be delivered to the target enclave application. Different controllers provide different granularities of interrupt configuration: the PLIC can only configure whether external interrupts should be directed to S-mode, while the CLIC can configure whether individual interrupts should be directed. Penglai therefore proposes different isolation mechanisms for different controller designs. For the platform-level interrupt controller, we configure external interrupts to always trap into M-mode. Therefore, the monitor in machine mode is responsible for interrupt redirection. The assignment of interrupts is recorded in a configuration managed by the monitor. After receiving an interrupt, the monitor will acknowledge the interrupt and jump to the handler of the target enclave. The core-local interrupt controller allows redirecting specific interrupts to S-mode. Therefore, when the monitor switches to a new enclave, all the interrupts assigned to that enclave are configured to be directed to S-mode, and the others will be trapped into the monitor. This achieves better performance than the platform-level interrupt controller, as only interrupts that do not belong to the running enclave are trapped into the monitor. The third feature is secure storage. Penglai provides secure storage by using a service enclave, which contains functionalities like encryption and a file system. The storage enclave guarantees privacy and integrity protection for data. It supports both the GlobalPlatform API and the PSA API, and enclave applications can choose different APIs according to different scenarios. The communication between enclave applications and service enclaves is achieved through the inter-enclave communication mechanism provided by the Penglai monitor. The last one is secure usage of peripherals. On the CPU core side, PMP and sPMP restrict the memory-mapped I/O operations of each enclave. On the device side, we choose to utilize IOPMP, which is configured by the monitor to restrict DMA operations. The implementation of IOPMP management is still in progress. Here is a representative scenario: secure communication. Applications in both the non-secure world and the secure world can communicate with each other in a secure way with Penglai. The Penglai SDK provides functionalities to allow communication between enclaves and untrusted applications and among enclaves, and it supports mainstream crypto algorithms for encryption, decryption, hashing, integrity and other purposes. The communication mechanism also supports both the PSA and GlobalPlatform APIs, which can be flexibly used by different clients. To summarize, this talk introduced Penglai, a verifiable and scalable TEE system. It is based on a formal-verification-oriented design with our Pangolin framework, and the system is scalable using new hardware features, achieving up to 1,000 concurrent instances on our platforms. We have provided a set of security functionalities for users, and most of the system is already open source. Thanks. All right, I think this means that we are already live. You have started to answer lots of questions in the chat already. I think the most discussed point was the formal verification; that is also what I had in my notes here. Maybe just to add to what you already wrote in the chat, and to complete the picture: you already said that you are using Serval with model checking and, yes, symbolic execution, and that so far you have mostly been focusing on functional correctness. Maybe just to complete the picture here, what specifications are you actually writing? What properties are you verifying? Is this all open source, can we take a look at it at some point, and what are your experiences here? Yes. Currently, we are still focusing on functional correctness. But there are some useful security properties already in Serval, like noninterference, and we are planning to introduce these in our applications. But since the first step in verifying a system is to make sure of its functional correctness, we are still at this stage. We do have some experience with frameworks like Serval. The benefit of Serval is that it is automatic: the only thing you need to do is write the specification, and the framework will do the verification. But the other side is that this framework is not production ready.
So it means you need to extend this framework according to your needs. Actually, we have found many, many bugs. In the early stage, we could ask the developers of Serval and they could provide some help, but in the later stage there were many trickier bugs that we had to solve by ourselves. So I think it still needs a lot of effort to do this work. All right. Musa also just added the question of which model checker specifically is used, just to complete this. So which model checker specifically is used? That's also a question. Well, the answer is just right there. All right. So I think this answers most of my verification questions. If you have more, please ask them in the chat. But also please make sure, because I think we will drop out any minute now, to join the Q&A and the chat if you have more questions. So let me just scroll through. You also had a comment on Keystone that I found quite interesting, so can you maybe just reiterate your response to that? The question was basically: in Keystone, to get enclaves like SGX, you are kind of limited to something like an EPC of one megabyte. So what's your solution and your situation here? Yeah. So actually, the difference between SGX and RISC-V systems like Keystone is that SGX has encrypted memory. It has a memory encryption engine in the hardware, which means that all the memory in the EPC is encrypted, so it can defend against physical memory attacks. But on RISC-V, since the most used hardware feature is physical memory protection, PMP, it only provides isolation; it does not provide integrity or memory encryption. So if we want to defend against physical memory attacks, we must use a scratchpad memory, as mentioned in the discussion, and this scratchpad memory is very limited, much less than the SGX EPC. So I think the best approach to solve these issues is to introduce memory encryption engines in the hardware; I think that's the best solution. Actually, we have some experience, for example with a RISC-V implementation on an FPGA, adding these engines and comparing with SGX, and the results show that if we have these engines, they can overcome the limitations of the small encrypted memory. But I think there is still a long way to go to introduce such modules in the hardware. All right. There are several other questions that we didn't answer yet, but we just got the one-minute warning. So let's just stop with a short question and then leave the rest for the...
|
Emerging applications like artificial intelligence and autonomous cars require high security assurance, which stimulates the widespread deployment of trusted execution environments (TEEs). However, prior enclave systems are far from ideal for three reasons. 1) Scalability: they only support limited secure memory or a limited number of instances; 2) Performance: they do not fit the requirements of high-performance applications well, e.g., poor secure communication performance; 3) Security: many still have security flaws, e.g., suffering from cache-based side-channel attacks. Penglai-Enclave is proposed to overcome these challenges. The Penglai open-source project aims to build a scalable and efficient TEE system based on RISC-V, which is made powerful through hardware-assisted scalable physical memory isolation extensions. Our evaluations show that Penglai can achieve more than 1,000 concurrently running instances even on a resource-restricted device. We also support libraries like ARM PSA on Penglai to ease the development of trusted applications, and have applied formal methods to validate its software TCB.
|
10.5446/52521 (DOI)
|
Welcome everyone, my name is Sepideh Puyangral. I'm a PhD student in the DistriNet research group at KU Leuven, and I'm happy to present recent work about an open-source framework for developing heterogeneous distributed enclave applications. These days we build smart environments that consist of several heterogeneous components, and an application is distributed among multiple components. All of them work together, exchanging information with each other over an untrusted network. An example could be a healthcare use case from a smart environment: it contains a couple of sensors from heterogeneous vendors, some cloud processing and recording of data on other people's computers, and in the end some healthcare professional might want to look at the data. So how can we make sure the interaction between the different components is secure, that the professional is looking at the right data, and that the whole system is end-to-end secure? What we are presenting today is a framework to build secure applications based on trusted execution environments in all of these processors, so that the doctor can be sure that the data they work with actually comes from an attested sensor, and an actuator at the patient's home is guaranteed to only react to an interaction from the doctor's computer. And finally, when you build this application, you don't want to worry about the differences in the attestation mechanisms or the properties of the communication network. A trusted execution environment is a relatively new technology in modern processors that allows running an application in a hardware-protected environment called an enclave. Enclaves are isolated from the rest of the world to protect code and data from being accessed or modified by other entities. Besides, a mechanism called remote attestation is used to obtain, remotely, a proof that a running application is loaded inside the right enclave and wasn't tampered with at loading time. In a TEE, all isolation and authentication primitives are implemented in hardware. That means your trusted computing base shrinks to the application module and a tiny layer of the hardware, and as a result the attack surface is reduced as much as possible. So here, in the system on the left, an application needs to trust its code, the underlying operating system, the hypervisor and the hardware; in this situation, any vulnerability found anywhere in the platform might influence the security of the application. But in a hardware-only TCB system, we can protect an application from a compromised operating system, and we can isolate a single piece of software so that really no one can interfere with it in an uncontrolled way. One implementation of a TEE is Intel SGX. In SGX, the protected parts of an application are placed in enclaves to be completely isolated from direct access. The application will create this enclave and only enter the enclave during execution through something like a call gate. So SGX can ensure that non-enclave code cannot access enclave pages in memory. Specifically, a portion of the main memory is reserved for enclaves, and the enclave pages are encrypted by a hardware unit called the Memory Encryption Engine whenever they leave the CPU package, to protect against untrusted system software and cold-boot attacks. In addition to confidentiality and integrity of code and data, other features are provided by SGX, like attestation and data sealing to encrypt and authenticate data before storing it on disk.
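To illustrate the call-gate programming model just described, here is a generic C sketch of a host interacting with an enclave only through a defined entry point. This is conceptual pseudocode, not the Intel SGX SDK API: enclave_create and enclave_ecall are hypothetical names.

```c
/*
 * Conceptual sketch of the enclave model: the host loads and measures an
 * enclave image, then can only cross into it through a declared entry point.
 */
#include <stddef.h>
#include <stdint.h>

typedef struct enclave enclave_t;                 /* opaque handle for the host */

/* Hypothetical runtime calls (not a real SDK API). */
extern enclave_t *enclave_create(const void *image, size_t len);   /* load + measure */
extern int enclave_ecall(enclave_t *e, int fn_id,
                         const void *in, size_t in_len,
                         void *out, size_t out_len);               /* enter via call gate */

int host_main(const void *signed_image, size_t image_len)
{
    enclave_t *e = enclave_create(signed_image, image_len);
    if (!e)
        return -1;

    uint8_t result[32];
    /* The host never touches enclave memory directly; it only passes buffers
     * across the boundary through the defined entry point. */
    return enclave_ecall(e, /*fn_id=*/0, NULL, 0, result, sizeof result);
}
```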
Another very common TEE is TrustZone, which is implemented by ARM and currently used in a large number of smartphones. In ARM with TrustZone, each of the physical processor cores provides two virtual cores, called the normal world and the secure world. Hardware-based mechanisms ensure that resources in the normal world cannot access secure-world resources. To enforce this isolation, the NS bit is used to communicate the security state between components. To perform a context switch between worlds, the processor first passes through monitor mode, which serves as a gatekeeper, via an interrupt, an external abort, or an explicit call of the SMC instruction. The address mapping in the TrustZone MMU can be configured independently for each world. Also, I/O devices can be dynamically configured as secure or non-secure, and for each interrupt the generic interrupt controller can designate the world to handle it. Another interesting feature in TrustZone is secure boot: a chain of trust is formed by verifying the integrity of the second-stage bootloader and the trusted OS before execution. This ensures that nobody has tampered with the operating system code while the device was powered off. And the last TEE we focus on in our work is Sancus. In the Sancus architecture, what we need is a simple memory access control unit that implements the isolation techniques, and a cryptographic unit that implements all the symmetric crypto and hashing operations. Here you can see an overview of enclaves on the Sancus platform, which have a contiguous address space. In Sancus, we have a code region that is associated with an enclave, and then we have a data region that is associated with the same enclave. Sancus uses program-counter-based memory access control to ensure that the data of an enclave is accessible only if the program counter is in the code section of the same enclave. Therefore, the only way to enter the enclave is through the entry-point functions. And in the case of Sancus, we have a node key that is embedded in the platform, and from that we derive a key that is associated with each enclave. So we basically hash the text section and the addresses of the data section with the node key to derive an enclave key that identifies how an enclave is specifically loaded on a particular microcontroller. Moreover, remote attestation in Sancus is a simple process, because the enclave key is only known by the enclave and the software provider. Therefore, if the software provider receives an encrypted message from the enclave that is successfully decrypted, it can be sure the enclave is running untampered on the right node. Another important feature provided by Sancus is secure I/O. Sancus uses memory-mapped I/O to communicate with devices, so to gain exclusive access to a device, it is sufficient to map the data section of an enclave over the MMIO region of the device. So nobody except the enclave is able to access that I/O device. So there is a range of architectures for hardware-based trusted computing. Each of them is platform-specific and comes with a different system footprint, but all of them provide strong security guarantees like attestation and isolation, which are implemented in different ways. That means what SGX is doing is something different from, for example, what Sancus is doing, but the baseline is the same. So you can build upon these guarantees and use them in your system to construct systems that are highly secure, even on a 16-bit microcontroller that communicates over an untrusted network with end-user devices.
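The Sancus-style key derivation described above (node key plus the enclave's code and layout) can be sketched as follows. The kdf() primitive and the exact inputs are assumptions standing in for the hardware's actual MAC/KDF; this is an illustration, not the Sancus implementation.

```c
/*
 * Hedged sketch: derive an enclave key from the node key and the enclave's
 * identity (its text section and where its sections are placed on this node).
 */
#include <stddef.h>
#include <stdint.h>

#define KEY_LEN 16

/* Hypothetical keyed derivation primitive (e.g. a MAC used as a KDF). */
extern void kdf(const uint8_t key[KEY_LEN],
                const uint8_t *data, size_t len,
                uint8_t out[KEY_LEN]);

void derive_enclave_key(const uint8_t node_key[KEY_LEN],
                        const uint8_t *text_section, size_t text_len,
                        uint16_t data_start, uint16_t data_end,
                        uint8_t enclave_key[KEY_LEN])
{
    uint8_t tmp[KEY_LEN];

    /* Bind the key to the enclave's code... */
    kdf(node_key, text_section, text_len, tmp);

    /* ...and to the layout of its data section on this particular node. */
    uint8_t layout[4] = {
        (uint8_t)(data_start >> 8), (uint8_t)data_start,
        (uint8_t)(data_end   >> 8), (uint8_t)data_end,
    };
    kdf(tmp, layout, sizeof layout, enclave_key);
}
```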
In order to design end-to-end secure smart environments, as I already explained, the idea behind our implementation is based on the concept of authentic execution. This framework was proposed by Noorman in 2015 to provide strong assurance of the secure execution of a distributed application. According to the paper, they want to guarantee that if an output is produced by the application, it was actually allowed to be produced by the application's source code and physical input events. In this framework, each component of the distributed application runs in a trusted execution environment, and thanks to the confidential computing provided by the TEEs and the authenticated encryption of the data, end-to-end security can be achieved in our smart environment. The implementation provided by Noorman only supports Sancus processors, but the concept can be applied to a generic TEE to support heterogeneous systems in a real scenario. Thereby, our framework provides an implementation of authentic execution that supports Intel SGX, ARM TrustZone and Sancus, to allow the development of applications in a heterogeneous environment. The SGX part of our framework was entirely written in Rust with a framework called the Fortanix Enclave Development Platform. To develop applications for TrustZone, the open-source OP-TEE framework is used, which provides a collection of toolchains, open-source libraries and a secure kernel implemented in C. And finally, the Sancus toolchain is used to implement the Sancus enclaves. Our framework aims to minimize the developer's effort as much as possible: it automatically deploys the enclaves on the supported TEEs, then attests them using remote attestation, and finally provides secure communication between the different components with cryptographic operations. Thereby it provides the developer an abstraction layer in which they only need to implement the application's logic. So we ensure this framework provides end-to-end security for safety-critical smart environments. A demo of our framework will be presented now by my colleague Gianluca. Thanks for your attention. Thank you, Sepideh, for your introduction. Now I will show you a simple example of how to use this framework to build and run a secure distributed application using Sancus and SGX. A couple of words about me first: I'm Gianluca and I'm a PhD student working on a project promoted by KU Leuven and Ericsson. My work is about integrity assurance for multi-component services and networks. So this is the setup of our application. You can see that there are two Sancus nodes and one SGX node, and we also have two input and output devices. We have a button here that will be the input of our system, and we want to perform some operation every time the button is pressed. And here we have an LED, and this LED will be toggled every time the button is pressed. In the SGX node we will store the number of button presses, and we will also provide an interface for the deployer and external users to interact with the system and get this value. The deployer is responsible for the deployment and configuration of the modules and their connections.
The user instead can be anyone connected to the same network, and the user can retrieve the number of button presses by performing HTTP requests. Okay, now we will see the deployment phase of our application. We provide as input to the deployer all the source files of each component of the application; these components are called modules. We also provide a deployment descriptor, which is a JSON configuration file containing a description of all the nodes, modules and connections between modules. So the first step is to build the binaries of every module and then send them to the corresponding nodes. On every node there is an untrusted software component called the Event Manager. The Event Manager is responsible for exchanging events between modules, but also, as shown in this case, for loading the modules onto the nodes. Of course the Event Manager is untrusted, so we want to make sure that this loading process has been done correctly, and for this reason we also perform remote attestation: we want to make sure, first of all, that the binaries have not been modified during the process and, second, that the modules are loaded inside the trusted execution environments. Also during this process we establish secure channels between the deployer and the modules using module keys; each module has a unique module key. After that, the modules are running and attested, and now we will establish the connections between these modules. We take as input the same descriptor as before and start with the establishment of these connections. First of all, we establish the connections between modules and I/O devices using the Sancus secure I/O functionality. After that, we establish all the other connections between modules and between the deployer and modules. So here the blue arrows are the secure communication channels, where every event exchange is encrypted and authenticated, and the red arrow is just the HTTP channel between the user and the web server, so it is unsecured. Okay, this is an example of the source code of the controller module, and the thing to notice here is that every module declares outputs and inputs. A connection between two modules connects an output of the source module to an input of the destination module. So here, in this controller module, we write some special declarations for the outputs and inputs. For the controller we have two outputs, toggle_led and increment_presses, and one input, button_pressed. But how do we declare the connections between modules? This is done in the deployment descriptor. In this example I declare two connections: the first connection is from the button driver to the controller, and the second connection is a direct connection from the deployer to the web server. Okay, now I will show you how this works in practice, and if you are interested in this example you can go to my repository, where you can see all the source code and the tutorial if you want to run the demonstration on your local machine. Don't worry if you don't have Sancus or SGX, because you can run just the native version of this example, and I also provide some Docker images, so if you want to run this example without installing anything on your machine you can do it. If you go to my repository you can find all the tutorials for this.
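For a rough idea of what such a controller module could look like in C, here is a sketch of the output/input structure described above. The event-emission call (emit_output) and the handler naming are hypothetical stand-ins, not the framework's real annotations; the connection wiring itself lives in the deployment descriptor.

```c
/*
 * Sketch of the controller module: one input (button_pressed) and two outputs
 * (toggle_led, increment_presses), wired to other modules by the deployer.
 */
#include <stdint.h>
#include <stddef.h>

/* Hypothetical runtime call: emit an authenticated, encrypted event on a
 * named output connection of this module. */
extern void emit_output(const char *output_name, const uint8_t *data, size_t len);

/* Input handler for the button_pressed connection coming from the button
 * driver module (registered with the event manager at deployment time). */
void controller_input_button_pressed(const uint8_t *data, size_t len)
{
    (void)data; (void)len;                       /* payload not needed here    */
    emit_output("toggle_led", NULL, 0);          /* -> LED driver module       */
    emit_output("increment_presses", NULL, 0);   /* -> SGX web-server module   */
}
```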
Okay, this is my terminal now. At the top right I have the SGX node and at the bottom right I have the Sancus node. Here I use just a single Sancus node for practical reasons, but the application behaves the same. As you can see, the deployment has already been made, so all the modules are running and attested on each node. The top-left window is the terminal of the deployer, so from here I will send some commands to the system. The first thing I want to do is initialize the web server, so I will send an output command, an output event for the init-server connection, passing as argument the port I want to use. If I trigger this output, you can see that the web server is now listening on this port, and if we try to reach the web server we get the number of button presses; in this case, of course, the number is just zero. If you want to trigger a button press you can of course do it physically, by pressing the button. The problem is that I'm not physically close to the Sancus node, so I cannot physically press the button, and for this reason I use another connection to simulate a button press; notice that this connection is still secured, so we still have the security features of all the other connections. So I will trigger another output on the trigger-button connection, and if I do so you can see that something happens. First of all, the button driver notifies us that the button has been pressed, the controller has received the remote event for the button, and the LED driver has turned the LED on. And if we reach the web server again, we now see that the number of button presses has incremented by one. But if we don't trust this HTTP communication, because of course HTTP is not secure, we can perform another request from the deployer side. In this case I will send a request, which works on the same principle as the input/output connections, but here we expect a value in return. So we trigger a request on the get-presses connection, and here you can see that we get the number one as the response. So in the end, yes, this is working properly and the button has been pressed only one time. Okay, that's it. Coming back to the slides: in the end, this framework provides strong integrity over the application behavior, because we can only toggle the LED if we press the button, and we can only increment the value stored in the database if we press the button. If we don't press the button, there is no other way to increment the value in the database or to toggle the LED. But we also have confidentiality of the application state and sensitive data, thanks to the trusted execution environments and the secure communication channels between modules. We don't have availability, because if an event is lost during transmission, nothing happens. So in the end, this is still ongoing work, and as I said before we are providing support for TrustZone, which will be available soon, and we are still working on improving the code, the deployment tools and the SGX side. So this is it. Thank you for your attention, and now if you have some questions, thank you.
|
In this talk, we present an open-source framework to develop heterogeneous, distributed enclaved applications. The main feature of our framework is to provide a high level of abstraction over the platform-specific TEE layer and over the secure communication between different modules, leaving to a developer only the task to write the application’s logic. We provide a notion of event-driven programming to develop distributed enclave applications in Rust and C for heterogeneous TEEs, including Intel SGX, ARM TrustZone and the open-source Sancus. This heterogeneity brings our work to a broad range of use cases, which include cloud processing, mobile devices and lightweight IoT. Our framework ensures strong security guarantees based upon mutual attestation of security-critical software components.
|
10.5446/52522 (DOI)
|
But you're here and we're going to make this a great event, despite the circumstances. Let me maybe also start by thanking everyone involved: the speakers, the hosts, and the organizers, for all the enthusiasm to make this happen. Some of you might also have joined us last year in Brussels, physically, for the first edition of this devroom, and I'm very excited to announce that, because of the enthusiasm of the speakers, we were able to make this into a full-day event this year. So let me show you the program, if you can see that. We have a full, exciting program this year with several breaks; since this is an online event, I tried to make sure that there is some breathing room for the audience and also for technical problems, so you can get up for a coffee, and there's even a lunch break in between. The talks are grouped into what I think are key areas in this emerging open-source ecosystem. Before noon, in the morning, there is a whole block of talks on what I call shielding runtimes: so, programming and development environments. Then there is a smaller block on trying to understand the limitations of these technologies with respect to attacks. Of course, since we are talking here not only about open-source software but also open-source hardware, I think it's very exciting to see what has been happening over the last year in the RISC-V area, so I'm really looking forward to those talks as well. And then finally, we also decided that we don't want to scope this only to, let's say, the traditional enclave architectures, but also want to include the broader paradigm of trusted hardware and alternative, let's say, next-generation emerging security extensions as well. Those, I think, are also very interesting to keep an eye on. So altogether, what this devroom definitely wants to do is to bring together the open-source ecosystem around this emerging technology: you guys, who are all here. And I think it also shows that this is a growing area. A good thing, I think, is that trusted computing is often a bit criticized for maybe not being open, or for enabling DRM and these kinds of applications, but seeing that we can fill a full-day program here with exciting open-source projects means that the open-source community is really engaging with this very actively, and that makes me very optimistic for the future of this type of technology. And that's maybe a nice line to end on and hand over to the first talk, which will be the Enarx system by the guys from Red Hat, which should start now.
|
A brief introduction to the room and to the sessions.
|
10.5446/52523 (DOI)
|
Hello everyone, welcome to my talk: hardware-based CPU undervolting on the cheap. I'm Zitai Chen, and I will introduce our $30 tool, which can do CPU undervolting, and also some exploits we found using it. Before we talk about undervolting, let's talk about TEEs. TEE stands for Trusted Execution Environment. Most manufacturers have their own variety of TEE: for example, ARM has TrustZone, Intel has SGX, and AMD and IBM also have similar technologies. A TEE guarantees that code and data loaded into it are protected with respect to confidentiality and integrity. You may wonder what the threat model of these TEEs is. Here are some screenshots of companies and projects that use TEEs. At the top is the Fortanix project; it says Intel SGX allows you to run applications on untrusted infrastructure without having to trust the infrastructure provider. On the left is an open-source project that uses SGX; they consider that their project can operate under a threat model where they don't trust the host owner. And as shown here, the widely adopted threat model considers the OS, the owner, and the infrastructure as untrusted. A blog post on Intel's website also says SGX should protect the code and data inside the enclave from an attacker who has physical control of the platform. However, in previous years, researchers were able to break the integrity provided by the ARM TEE, TrustZone. On ARM, there is a feature called DVFS, Dynamic Voltage and Frequency Scaling. An attacker can control DVFS by writing to a set of memory-mapped registers, and DVFS will then change the clock frequency or tell the power management to change the voltage. Using this feature, attackers were able to inject faults into code running inside TrustZone: CLKSCREW used DVFS to change the clock frequency and cause calculation errors, and a follow-up attack used the same system to change the voltage of the ARM processor and cause faults. Since ARM has this mechanism, one might wonder whether Intel has a similar feature which allows changing the CPU voltage, so that maybe an attacker can inject faults into the Intel TEE, SGX. Actually, yes. In previous years, the Plundervolt attack discovered that on Intel processors there is MSR 0x150, which can be used to change the CPU voltage, and they were able to cause memory corruption and also faults in multiplication and encryption computations inside SGX. Regarding Plundervolt, Intel published a security advisory and suggested that users update to the latest BIOS version. We applied this BIOS update on our machine, and we found that a new BIOS option shows up, real-time performance tuning, which is noted as the more secure configuration. We found that this option allows the user to enable or disable the MSR for voltage tuning. This is a mitigation for Plundervolt, because after it is disabled, software cannot control the CPU voltage anymore. Some manufacturers simply disabled this interface without giving any option in the BIOS at all. So basically, in this diagram, the software interface is disabled, and the voltage regulator cannot be controlled directly by software anymore. However, there is still a physical connection between the CPU package and the voltage regulator. If we can control this interface, we can directly talk to the voltage regulator and change the voltage. So we started to look into this interface. The first thing we looked into was the Intel CPU datasheet. We found that this interface is called the SVID bus.
It's a three-wire interface, which has clock, data, and alert; alert is not required in an actual implementation. The clock frequency is 25 MHz, and the operating voltage of the bus is between 0 V and 1 V. However, we didn't find further information about this SVID bus in the Intel datasheet. But this information was enough for us to identify what the SVID signals look like, and then we were able to recover its protocol. The first thing we needed to do, in order to inject packets into the SVID bus, was to find where it is. We first find the voltage regulator on the motherboard, because the SVID interface connects the voltage regulator and the CPU. After finding the voltage regulator, maybe with the help of its datasheet, we can find the pins for the SVID bus. As you see here, the chip is usually next to the large capacitors around the CPU. We then started to search for its datasheet on the internet according to the part number, but the only information we could find was the data short on its official website. At first we thought it was a typo, but after opening it we knew why it's called a data short: it's only one A4 page long and gives no information about this interface. So the only option left to us, in order to locate this bus, was to do some probing and find signals that are similar to the spec we found in the Intel datasheet. We picked up an oscilloscope to check all the resistors around the voltage regulator, like this, because we know there would be some pull-up resistors connected to this bus. After some probing, this signal showed up on the screen: it's a 25 MHz clock, and there is also a data line. They operate between 0 and 1 volt, which matches exactly the spec we know for the SVID bus. Here are the two points we found; actually, there are two test points left by the motherboard manufacturer for these two wires. After finding where this interface is on the motherboard, we also needed to know the command and packet structure in order to control it. We searched online for this protocol and found a screenshot of a protocol analyzer, so we were able to discover the packet structure of the SVID signal from this screenshot. We also found that there is a field called the VID code in this packet: the voltage value is not directly encoded in the packet; instead, each VID code maps to a specific voltage value. In order to change the voltage to the expected value, we also needed to reverse engineer this voltage identifier mapping. So we first used the software undervolting method to change the voltage and captured the signal on the SVID bus using the oscilloscope; mainly, we want to capture the SetVID packet and get the VID value out of it. Then we send a SetVID command with the VID code we got from the scope capture and also use the scope to measure the CPU voltage. If the packet is constructed correctly, we should see the voltage change to the corresponding value. In this way, we were able to reverse engineer the VID mapping and verify it. Here is the complete SVID protocol as far as we know. On the left is the SVID signal and data frame: there are bits indicating the start and end of the frame, four bits for the address, five bits for the command, the bits for the VID payload, and a parity bit. On the right is how the VID is calculated and some VID commands we discovered. The most useful command for this project is this one, SetVID fast, which is exactly the command we use to change the CPU voltage. So then we started our journey with this SVID bus.
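As a rough illustration of assembling such a frame from the fields described above, here is a sketch of a SetVID-fast packet builder. The exact field widths, command encoding, parity scheme and VID-to-voltage mapping are what the authors reverse engineered; every numeric constant below is a placeholder, not the real encoding.

```c
/*
 * Sketch: build the bit sequence of one "SetVID fast" SVID frame
 * (start bits, 4-bit address, 5-bit command, VID payload, parity).
 */
#include <stdint.h>

#define SVID_CMD_SETVID_FAST  0x01u   /* placeholder command code         */
#define SVID_PAYLOAD_BITS     8       /* assumed width of the VID payload */

/* Append 'bits' bits of 'value' (MSB first) into an array of single bits. */
static int push_bits(uint8_t *out, int pos, uint32_t value, int bits)
{
    for (int i = bits - 1; i >= 0; i--)
        out[pos++] = (value >> i) & 1u;
    return pos;
}

/* Returns the number of bits written into 'out' (caller provides >= 32 slots). */
int build_setvid_frame(uint8_t *out, uint8_t vr_addr, uint8_t vid)
{
    int pos = 0;
    uint32_t parity = 0;

    pos = push_bits(out, pos, 0x3, 2);                    /* start-of-frame bits */
    pos = push_bits(out, pos, vr_addr & 0xF, 4);          /* 4-bit VR address    */
    pos = push_bits(out, pos, SVID_CMD_SETVID_FAST, 5);   /* 5-bit command       */
    pos = push_bits(out, pos, vid, SVID_PAYLOAD_BITS);    /* VID payload         */

    for (int i = 0; i < pos; i++)                         /* parity over the frame */
        parity ^= out[i];
    pos = push_bits(out, pos, parity, 1);
    pos = push_bits(out, pos, 0x0, 1);                    /* end-of-frame bit    */
    return pos;                                           /* bits to clock out   */
}
```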
Firstly, we soldered two thin wires out of this interface. It may be hard to see in the picture, but there are two thin wires connected to a bus driver, and we use a Teensy 4.0 to send the commands via the bus driver. You may wonder why we need the bus driver: this is because the SVID bus operates at 1 V logic levels, but the microcontroller we use can only output signals at a 3.3 V logic level. We also built custom firmware for the Teensy with a modified SPI driver. In order to control the timing of the voltage change, we implemented trigger functionality using GPIO pins and a USB serial connection. After receiving the trigger signal, the Teensy will inject SVID packets to change the voltage. It's also worth mentioning that our tool can be built for only $30. With VoltPillager, we change the voltage three times. Because the slew rate of the voltage is fixed, we can first lower the voltage to prepare for the glitch, and then further lower it to inject the fault; this makes the time spent at the glitch and fault voltages shorter. After staying at the fault voltage for a while, we change the voltage back to normal. The figure on the right shows the oscilloscope capture of a glitch injected by VoltPillager. As you can see, after we send the SetVID packet the voltage drops, and then when we send the command to change it back, the voltage returns to normal. So let's inject some faults. We created a library for undervolting. Here, the configureGlitchWithDelay function sends the configuration to VoltPillager and arms the trigger; then triggerSet sends the trigger to activate the glitch, and triggerReset resets the glitch. We tested all the proof-of-concepts of Plundervolt and found that we could reproduce all of them. Furthermore, we were able to fault mbedTLS, AES-NI, and the Open Enclave file encryptor. We also found a new type of fault, a delayed write fault. Let's begin with the multiplication fault. Here is our proof of concept: at the top of the screen it shows the voltage, trigger, SVID data, and clock signals. We are running multiplications while changing the voltage, and after a few attempts a fault happens; as you can see, there is a voltage drop that causes the fault. We also did some benchmarking with this simple multiplication fault. Since lowering the voltage too much just crashes the system before you observe a fault, it is important to know the range of voltages we can use. Luckily, with experiments we found that there are usually at least a few VID steps between the first fault voltage and the crash voltage. We also measured the precision of VoltPillager and found that about 75% of the faults fall within 600 iterations of the multiplication calculation. After having a successful fault in multiplication, we started to fault encryption operations. We tested the SGX CRT-RSA proof of concept from Plundervolt, and we were able to cause a computation error and recover the private key from the faulty signature. We were also able to fault the SGX AES-NI proof of concept, and the Open Enclave file encryptor, which uses AES-NI instructions, and recover the key using differential fault analysis. Here is our demo on the Open Enclave file encryptor. We run the encryption operation several times while doing voltage glitching, and after a few attempts we successfully inject a fault. The result can then be analyzed using differential fault analysis to recover the key.
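To give a feel for how the undervolting library described above might be driven, here is a sketch of a glitch loop around a known multiplication. The three function names come from the talk, but their signatures are assumptions, and the glitch parameters are placeholders that would need per-CPU tuning.

```c
/*
 * Sketch: arm a glitch, trigger it around a multiplication window, and check
 * whether the result was faulted.
 */
#include <stdint.h>
#include <stdio.h>

/* Assumed library interface (names from the talk, signatures hypothetical). */
extern void configureGlitchWithDelay(int delay_us, int prep_vid,
                                     int fault_vid, int normal_vid);
extern void triggerSet(void);
extern void triggerReset(void);

int main(void)
{
    const uint64_t a = 0xAE0000u, b = 0x18u;   /* arbitrary operands            */
    const uint64_t expected = a * b;

    configureGlitchWithDelay(/*delay_us=*/30, /*prep_vid=*/0x20,
                             /*fault_vid=*/0x18, /*normal_vid=*/0x30);

    for (unsigned attempt = 0; attempt < 100000; attempt++) {
        volatile uint64_t result = 0;

        triggerSet();                      /* GPIO high: Teensy injects SetVID  */
        for (int i = 0; i < 600; i++)      /* window of repeated multiplications */
            result = a * b;
        triggerReset();                    /* voltage restored to normal        */

        if (result != expected) {
            printf("fault after %u attempts: got %llx, expected %llx\n",
                   attempt, (unsigned long long)result,
                   (unsigned long long)expected);
            return 0;
        }
    }
    return 1;
}
```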
As said earlier, another new type of fault we found with VoltPillager is the delayed write fault. Let's have a look at this code. We first set two variables to the same value, then keep incrementing both of them and check whether they still have the same value in a tight loop. If the calculation is correct, this line of code should never get executed. However, with fault injection, we found that sometimes this variable named faulty is set to 1. To investigate further, we looked into the assembly of the code, and we concluded that this happens because one of the two add operations had not been committed when the compare happens; when execution reaches the compare operation, the two variables are actually holding different values. This is a new type of fault observed using VoltPillager. We did further investigation with software undervolting and found that this fault can cause out-of-bounds underflow and overflow of an array. This video shows the out-of-bounds underflow and overflow. This is the target code that we want to inject a fault into. Normally, this code should only write to elements 1, 2, 3, 4 of the array. However, as we lower the voltage, the element at index 0 is also written, and here it shows that we also cause an array overflow: elements 5, 6, 7, 8 are also written. After finding this vulnerability, we reported it to Intel on 13 March 2020, and here is Intel's reply. They consider opening the case and tampering with internal hardware to be out of the SGX threat model. However, as we discussed earlier, the threat model of TEEs clearly says they should protect against an attacker who has control of the platform, such as an infrastructure provider, and this threat model is widely adopted in the community. So maybe we need to rethink the threat model for SGX: can it still protect against such attacks? In summary, VoltPillager is the first hardware-based undervolting attack against Intel CPUs. It can bypass the mitigation implemented for Plundervolt, and it's a budget tool which can be built for only $30. With our findings, we should rethink the Intel SGX threat model: can it still protect SGX from attackers who have control of the hardware? That's all for my talk. Thank you for listening. You can find more information on this website, and if you have any questions or anything you want to discuss, feel free to reach out to me after this talk. Thank you.
|
Previous work such as Plundervolt has shown that software-based undervolting can induce faults into Intel SGX enclaves and break their security guarantees. However, Intel has addressed this issue with microcode updates. We later discovered that there is a physical connection on the motherboard which allows us to control the voltage and conduct fault injection. In this talk, we will present a low-cost device, VoltPillager, which uses this physical connection to break the guarantees provided by SGX again. On a standard motherboard, there is a separate Voltage Regulator (VR) chip that generates and controls the CPU voltage. Our tool, VoltPillager, connects to the (unprotected) interface of the VR and controls that voltage. Based on this, we then mount fault-injection attacks that breach the confidentiality and integrity of Intel SGX enclaves, and present proof-of-concept key-recovery attacks against cryptographic algorithms running inside SGX. Our results may require a rethink of the widely assumed SGX adversarial model, where a cloud provider hosting SGX machines is assumed to be untrusted but has physical access to the hardware.
|
10.5446/52528 (DOI)
|
Welcome to the talk Exploiting Interfaces of SEV-ES-Protected Virtual Machines. This work was already presented at the ROOTS conference and was co-authored with Mathias Morbitzer. The work was done at the Fraunhofer Institute for Applied and Integrated Security. The focus of this talk is a recently available feature called AMD SEV-ES, which provides sound protection for a virtual machine against a malicious hypervisor. However, the virtual machine may still run a general-purpose operating system, for example one based on Linux, which has previously considered the hypervisor as trusted. This can potentially lead to security issues, because the virtual machine communicates with virtual devices, which are controlled by the hypervisor, and the virtual machine also requires emulation of a few special instructions, done by KVM. In this talk, I'm going to show three different attacks on SEV-ES virtual machines and also how their fixes were done in the Linux kernel. So let's quickly go through how the virtual machine is protected. First, there is memory encryption for the virtual machine. This includes the virtual machine's firmware, bootloader, kernel and user-space applications; however, because the virtual machine communicates with the outside world, some memory, for example the DMA region, is not encrypted. The virtual machine selects which memory is encrypted or not by setting a special bit, the C-bit, in its private guest page table. With SEV-ES, Encrypted State, additionally the architectural state of the virtual machine is encrypted; this includes the architectural registers and other state information. SEV-ES is now officially supported in Linux 5.10. Because the registers of the virtual machine are encrypted, the emulation of instructions becomes more complicated. For example, when the virtual machine executes CPUID, this will raise a special exception, a #VC exception, shown in step one, which invokes the virtual machine's VC handler. The VC handler can then write the necessary information into the GHCB page, which is unencrypted and shared with the hypervisor, and then request emulation from KVM. KVM can read the information from the GHCB page, emulate the instruction, write the new information back to the GHCB page, and resume the guest. Then the VC handler can read the new register values from the GHCB page, validate them, update the architectural state, and execute IRET. However, because emulation happens in KVM, the emulation of this instruction should not be trusted. The threat model I'm assuming for these attacks is that the attacker controls the host before the virtual machine is launched, and the virtual machine uses SEV-ES for protection. Additionally, the virtual machine's initial state is measured and attested, and the attacker has access to the virtual machine's kernel image for static analysis. The first attack is based on the fact that the hypervisor may be able to manipulate the sources of entropy for the virtual machine. The Linux kernel inside the virtual machine relies on sources of randomness to generate random values for different probabilistic defenses, for example kernel address space layout randomization (KASLR) and stack canaries. Here, in this talk, I'm just going to focus on KASLR. This is a feature which randomizes different regions of the Linux kernel, and it happens during boot.
The sources of randomness include the RDRAND and RDTSC instructions, the kernel build string, and the boot_params structure, mostly initialized with information from QEMU. The function which determines the random values is called kaslr_get_random_long, and here's what it looks like. Initially, the function generates an initial random state using the get_boot_seed function, which returns a hash of the build string and the boot_params structure. However, because the attacker has access to the bzImage, the build string is known to the hypervisor; additionally, the state of the boot_params structure is also known. Afterwards, this code checks whether the RDRAND instruction is supported, and if that's the case, it samples a random value. However, because this feature is checked using the CPUID instruction, the hypervisor can say that the feature is not supported, so this code here would be skipped. Afterwards, the virtual machine checks whether the RDTSC instruction is available, then samples the timestamp counter and mixes it into the random value. However, the hypervisor is always able to control the return value of the RDTSC instruction, because the virtual machine may need to be moved to another CPU where the timestamp counter is different, so it is necessary to be able to modify it. One way to do that is by marking the RDTSC instruction as an emulated instruction, which would allow specifying any value. However, with SEV-ES, this would cause a #VC exception, which would invoke the VC handler, and it can be easily detected. The second option, which is much sneakier, is to use a special model-specific register called TSC_RATIO, and also the TSC_OFFSET field in the virtual machine control block structure. The RDTSC instruction uses these values in the following way: the ratio MSR is used as a scale, and then the TSC offset is applied as an offset. If the hypervisor sets both to zero, then RDTSC just returns zero. I implemented a proof-of-concept exploit for a SEV-ES virtual machine, where I tried to manipulate the KASLR regions and also the Linux entropy pool. First of all, we can't actually make the RDTSC instruction return zero the entire time, because that instruction may be used for various different things in the kernel, and you will get a hang if you do so. So in my exploit, I rely on page tracking in order to figure out the period of time during which the KASLR regions are being randomized, and only for these places do I make RDTSC return zero; afterwards, I set the ratio back to 1.0. With that, I was able to manipulate the KASLR offsets and set the entropy between boots to zero. Also, I was able to manipulate the initial state of the Linux entropy pool, which means that, for example, the first sequence of kernel stack canaries was always the same between boots. So what does the fix for this look like? It was already added to the Linux kernel. This code checks whether RDRAND is supported, and if that's not the case, the virtual machine will simply not boot. So this essentially makes the RDRAND instruction mandatory for SEV-ES virtual machines.
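Here is a sketch of the kind of check just described, written only to illustrate the idea rather than reproduce the literal kernel patch; the running_as_sev_es_guest and boot_panic helpers are hypothetical.

```c
/*
 * Sketch: refuse to boot a SEV-ES guest if RDRAND is not available, so the
 * hypervisor cannot strip the one entropy source it does not control.
 */
#include <stdbool.h>
#include <cpuid.h>

static bool cpu_has_rdrand(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return false;
    return ecx & (1u << 30);            /* CPUID.1:ECX bit 30 = RDRAND */
}

extern bool running_as_sev_es_guest(void);   /* hypothetical platform query */
extern void boot_panic(const char *msg);     /* hypothetical: stop the boot */

void sev_es_entropy_check(void)
{
    /* The CPUID result is itself hypervisor-controlled; the point of the
     * check, as described in the talk, is to make RDRAND mandatory instead
     * of silently falling back to weaker sources. */
    if (running_as_sev_es_guest() && !cpu_has_rdrand())
        boot_panic("SEV-ES guest without RDRAND support: refusing to boot");
}
```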
For example, when the kernel writes a value to an MMIO address, this address first has to be translated from a virtual address to a guest physical address using the guest page table. Afterwards, it is translated using the nested page table, and at this point the virtual machine requires assistance from the hypervisor, which marks the page as reserved. The hypervisor marks the page as reserved by setting a special bit in the nested page table entry, between bit 52 and bit 63. The access will cause a #VC exception, which is handled by the #VC handler inside the virtual machine. The #VC handler can check what caused the exception, find out the MMIO address and value, write them to the GHCB page, and then execute VMGEXIT to request that KVM emulate the MMIO access. What I found in the early series of SEV-ES patches was that the #VC handler does not validate the faulting address, which means that a malicious hypervisor can mark any memory of the virtual machine as an MMIO region. This would allow the hypervisor to intercept the memory accesses to leak data from, or inject data into, the virtual machine. Here I'm showing a proof-of-concept attack I tested with SEV-ES. A user-space process inside the virtual machine allocates a page using mmap, fills it with a sequence of NOPs, and then adds a return instruction at the very end. Afterwards, the process finds out the guest physical address of this page and reports it to the hypervisor using a hypercall; that's only to simplify the attack. Then the process allocates another page using mmap, but this time as a read/execute page, copies the earlier buffer into this code buffer, calls into the code buffer, and at the very end prints out "exiting". However, because the hypervisor is able to mark the source buffer as an MMIO region, it can intercept all the reads and provide any values it wants. So if we don't run the attack, we see the output as expected: we send the guest physical frame number to the hypervisor, then we execute the sequence of NOPs, and then we exit. However, if we run the attack, then we see another message, that you got pwned, and then a list of files in the directory. The payload here was just a sequence of syscalls to achieve that. The fix for this is also really simple; it was already added to the #VC MMIO handler in the Linux kernel. The code checks whether the MMIO access happens to an encrypted page, because MMIO regions should only ever be unencrypted. If the MMIO access is to an encrypted page, then it means there is either a kernel bug or an attack happening. In that case the code returns ES_UNSUPPORTED, and this causes a kernel panic. A similar fix was also added to the Open Virtual Machine Firmware (OVMF). Let's move to the last attack. Earlier I mentioned that the virtual machine has to set a special bit to mark memory as encrypted or not, but I didn't specify how the virtual machine finds out the location of the C-bit, and also how a general-purpose kernel like Linux determines that it is running with SEV at all. The code for finding out the C-bit location is shown here; it's in get_sev_encryption_bit. First the code executes CPUID to find out whether it's running under a hypervisor, and then it checks whether it's running with SEV by again executing the CPUID instruction with the correct leaf.
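Returning to the MMIO fix just described, here is a small, hedged sketch of that kind of check. The helpers lookup_guest_pte and sev_get_c_bit_mask are hypothetical names used only for illustration; the real kernel code walks the guest page table inside the #VC handler and then rejects the access in the same spirit.

#include <stdint.h>

#define ES_OK           0
#define ES_UNSUPPORTED  1

/* hypothetical helpers, assumed to exist for this sketch */
extern uint64_t lookup_guest_pte(unsigned long fault_va);
extern uint64_t sev_get_c_bit_mask(void);

static int vc_handle_mmio(unsigned long fault_va)
{
    uint64_t pte = lookup_guest_pte(fault_va);

    /* Real MMIO mappings must be unencrypted (C-bit clear).  If the C-bit is
     * set, this is either a kernel bug or a hypervisor forging an MMIO region
     * over encrypted guest memory, so refuse and let the caller panic. */
    if (pte & sev_get_c_bit_mask())
        return ES_UNSUPPORTED;

    /* ...otherwise decode the access, fill the GHCB and issue VMGEXIT... */
    return ES_OK;
}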
And if it's running with SEV, then the C-bit location is extracted and returned. The issue with this code is that the CPUID instruction is always intercepted, and so these return values should not be trusted. A place where the C-bit is used is when an identity mapping of memory is created, where the physical address matches the virtual address; this happens in the early Linux boot code. First set_sev_encryption_mask is called, which finds out the C-bit location, and afterwards initialize_identity_maps is called to create the page tables and set them, as shown below; this happens by writing to the CR3 register. If we provide invalid information to the virtual machine, for example that SEV is currently not used, the virtual machine will crash while it is booting, but it's still interesting to find out why this happens. One possibility is that, because the memory was previously viewed as encrypted and the new page table was written there, the page table walker would now access the ciphertext and this would cause an invalid page table walk. However, reading the manual, that's not the case, because page table walks ignore the C-bit: they always decrypt while doing the walk. Another possibility is that when an instruction gets fetched, it is an invalid instruction, because the fetch is now possibly reading ciphertext. However, that's also not the case, because instruction fetches also always decrypt the data, so the instructions would still be valid. But it is still interesting to find out what exactly is causing the crash. So the function initialize_identity_maps gets called, which stores the return instruction pointer onto the stack, which at this point is still viewed as encrypted. Then the guest page table is created, written, and set by writing to the CR3 register. However, all of the memory is now viewed as unencrypted, which means that any access to the stack just reads the ciphertext, not the plaintext. When the code executes the return instruction, it jumps to an invalid address and this causes the fault. However, the hypervisor can prevent this by, for example, populating the stack with addresses of ROP gadgets from the bzImage at precisely the right time. The hypervisor can then use this to do return-oriented programming in the virtual machine. Still, the hypervisor cannot do much with this alone, because, for example, code injection or the creation of new page tables requires that they are written to encrypted memory, but now all of the memory is viewed as unencrypted. Additionally, no secrets can be stolen, because they were written encrypted, but now memory accesses will not try to decrypt the data. However, there is still the possibility to continue the attack by relying on another page table which gets created earlier. For example, the Open Virtual Machine Firmware has to create its own page table in order to communicate with the outside world, and also for the GHCB page needed for SEV-ES. We can potentially use that page table so that we have both encrypted and unencrypted memory; this would allow us to do code injection, for example, and continue the attack. One option is to put the stack into the decrypted memory of the OVMF guest page table and then switch to that page table. However, because that memory is not mapped, we can't just naively put it there.
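To make the role of the forged C-bit location in the identity-mapping step concrete, here is a hedged sketch of how a C-bit position reported via CPUID could end up in page-table entries. The function names are illustrative, not the exact boot code; the one concrete detail assumed here is that CPUID leaf 0x8000001F reports the C-bit position in EBX[5:0], which is exactly the value an intercepting hypervisor can falsify.

#include <stdint.h>

static uint64_t sme_me_mask;                      /* encryption mask derived from the C-bit */

static void derive_encryption_mask(uint32_t cpuid_8000001f_ebx)
{
    uint32_t c_bit = cpuid_8000001f_ebx & 0x3f;   /* untrusted if CPUID is forged */
    sme_me_mask = c_bit ? (1ULL << c_bit) : 0;
}

static uint64_t make_identity_pte(uint64_t phys, uint64_t flags)
{
    /* If the hypervisor lied and sme_me_mask ended up as 0, this mapping is
     * created without the C-bit: once the new page table is loaded into CR3,
     * reads of data that was written encrypted (e.g. the saved return address
     * on the stack) return ciphertext, which is the window the ROP attack
     * described above exploits. */
    return (phys & ~0xfffULL) | flags | sme_me_mask;
}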
However, I was still able to find a gadget in the bzImage which accomplishes all of that in one go. It first modifies the guest page table, so we're able to switch to the OVMF guest page table; it also allows us to modify the stack pointer, so we can position the stack in the decrypted memory; and finally it gives us control of the instruction pointer, which means we can easily continue the attack. Here I'm going to show our demo of arbitrary code execution in an SEV-ES virtual machine. While it's booting, the payload starts rendering this demo effect and then blits the TianoCore logo; after it has finished, the virtual machine resumes booting. How do we fix this? It's a little bit more complicated, and these are the patches you can look through. Similar fixes were added to both Linux and OVMF. Here I'm just going to show one of the fixes briefly. First, it relies on the RDRAND instruction to generate a random value, and it saves this random value both to a memory location and in the RDX register. Afterwards, the new page table is loaded by writing to the CR3 register, and then there is a comparison between the RDX value and the one stored in memory. If it happens that the new page table is corrupted, this read returns ciphertext from memory, the check fails, and execution goes into the red portion at the very end. That portion clears the stack and enters an endless loop. Maybe the last interesting thing is to think about whether these exploits based on manipulating the CPUID instruction would still work with SEV-SNP. For that, I'll just briefly go through how measurement of the virtual machine normally works. The measurement includes the OVMF image and also the virtual machine save area from before the virtual machine is started. This gets measured and signed by the AMD firmware and is then given to the hypervisor, who can send it to the remote virtual machine owner, who can afterwards verify it. With SEV-SNP, in addition to the measurement, there is a special page called the CPUID page, which is agreed upon between the hypervisor and the remote user and contains information about, well, many of the CPUID leaves. This essentially means the hypervisor can no longer lie about CPUID, because when the CPUID instruction gets executed, the virtual machine's #VC handler can just query the CPUID page for the real information. So, final remarks: here we saw three different exploits for SEV-ES with the Linux kernel, based on missing hardening. We also assume that the attacker has full privileges on the host, which means that these attacks may not be possible if that's not the case. Additionally, all of these issues were fixed with Linux 5.10 and also in OVMF. With SEV-SNP coming out in the near future, it might make sense to, for example, examine all the interfaces with the outside world and try to reduce them in order to reduce the attack surface, to add more validation to data provided by the hypervisor, and so forth. Also, I would like to thank the AMD team and also Joerg Roedel from SUSE, who have been very helpful in the communication and in doing most of the fixes for these things. If you have any questions, now is a good time to ask. I think we go live very, very soon, so maybe it might already be live now. So if anybody can hear what I'm saying already: I think we see Martin, and you can confirm. A very, very nice presentation and good work.
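The verification step in the fix just described is easier to follow with a small sketch. This is a hedged, simplified C rendering under stated assumptions: the actual fix is hand-written assembly in the early boot path, and write_cr3 and wipe_stack are hypothetical helpers assumed only for this example.

#include <stdint.h>
#include <immintrin.h>                    /* _rdrand64_step, build with -mrdrnd */

/* hypothetical helpers, assumed to exist for this sketch */
extern void write_cr3(uint64_t cr3);
extern void wipe_stack(void);

static volatile uint64_t cbit_check_value;        /* lives in guest memory */

static void verify_cbit_then_switch(uint64_t new_cr3)
{
    unsigned long long r;
    while (!_rdrand64_step(&r))
        ;                                 /* retry until RDRAND delivers a value */

    cbit_check_value = r;                 /* keep one copy in memory, one in a register */
    write_cr3(new_cr3);                   /* load the freshly built page table */

    /* If the new page table maps this memory with the wrong C-bit, the load
     * below reads ciphertext, the values differ, and we refuse to continue. */
    if (cbit_check_value != r) {
        wipe_stack();
        for (;;)
            ;                             /* endless loop instead of booting on a bad mapping */
    }
}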
I think one valid question that was asked in the room is how many of these attacks are really only possible because you run Linux in the VM. Yeah, right. I mean, to a large extent the reason for that is that Linux is, let's say, a legacy kernel which has previously trusted the hypervisor and also external devices. So of course, now that it's operating under another threat model, there will be certain security issues, and you can see some interfaces not being hardened. Yeah, but then again, even if it weren't Linux, similar issues might exist in some other kernel. Even if it's, let's say, seL4 or some other microkernel, that threat model is probably still not what those operating systems were designed for, so you would still have to add some kind of hardening. So you think it's maybe more a general issue, which is due to the fact that the roles are a bit switched: a Linux VM normally would trust the hypervisor, there's no reason to mistrust it, and now an SEV-protected VM basically operates at a higher trust level than the hypervisor to some extent. I think one question I had in that regard is: you found some issues, but obviously there could be many, many other things that are somehow handled by the hypervisor. How did you do your work? Did you go through the whole code, or what was your approach to finding these bugs? Yeah, so it was not a systematic approach. I was just reading stuff as I went, and I mean, there are other interfaces which are also not explored. That's exactly the reason why I also said that for SEV-ES and
|
Supported since Linux 5.10, the AMD SEV Encrypted State (SEV-ES) feature can be used to protect the confidentiality of a virtual machine (VM) by means of encryption and attestation. Although the memory and registers of the VM are encrypted, the VM still communicates with the hypervisor for the emulation of special instructions and devices. Because these operations have not been previously considered part of the attack surface, we discovered that a malicious hypervisor can provide semantically incorrect information in order to bypass SEV-ES. In this talk, I provide technical details on the handling of special operations with SEV-ES, practically show how the original implementation could be exploited, and finally I show how the interfaces were hardened to fix the issues. This talk includes four different attacks which: 1) use virtual devices to extract encryption keys and secret data from a virtual machine. 2) reduce the entropy of probabilistic kernel defenses in the VM by carefully manipulating the results of the CPUID and RDTSC instructions. 3) extract secret data or inject code by forging fake MMIO regions over the VM’s address space. 4) trick the VM to decrypt its stack and use Return Oriented Programming to execute arbitrary code inside the VM.
|
10.5446/52529 (DOI)
|
Good morning and welcome to my presentation on OpenStreetMap in Africa. Let's just give you a quick overview and insight into OpenStreetMap on the continent. My name is Inok. I am an open source, free, live, open data enthusiast. You can find me on Twitter and that is my email if you need to get in touch. So yeah, this demonstration is going to be about OpenStreetMap, all about OpenStreetMap, OpenStreetMap. We'll look at force4g as well. We'll also talk about data, what has been done, what is there. We'll also look at needs once and then the issues and how we can address them or how we have tried to solve some of these issues or how they have been solved. And then we'll look at the state of the map Africa of course, the community is human. Notwithstanding the corona year last year, we still be meeting physically of course, so state of the map Africa just like state of the map global. So quickly we'll look at what is OpenStreetMap. There is the possibility that you are watching this talk about OpenStreetMap and have an idea about OpenStreetMap already, but just quickly the definition on the homepage of OpenStreetMap is pretty much self-explanatory and interesting that OpenStreetMap is a map of the world created by people like you and I and then it's free to use under open license. So because OpenStreetMap is free to be used under open license and it's a global community, it gives everyone a global playing field for us to be able to work together, notwithstanding the barriers, we are able to still communicate and participate and also whatever could be done with this product in Europe, in America, could also be done in Africa, maybe Ghana, in Togo, in Benin, it could still be done just like how free and open source software gives you the possibility, the freedom to run, share and then modify and then do whatever you want to do to the software. So last year my colleague Joffrey made an interesting presentation about OpenStreetMap in the state of OpenStreetMap in Africa during the online state of the map global which was scheduled to be in, to schedule to happen in Cape Town, South Africa, but unfortunately due to the situation which we all know it didn't happen. So quickly Joffrey's presentation spoke about the community and then using the indicator of number of buildings per country across the continent to rank them. So this animation presentation basically uses the years and then the amount of buildings mapped in this country to do a quick demonstration or analysis on what has been done. Also this presentation is from research which collected information, it's a survey from participants of people across the continent who are leaders or actors in various communities such as OSM, Benin, OpenStreetMap, Côte d'Ivoire and so on and so forth. So from the, we will talk about OpenStreet Africa later, from the presentation it highlighted staffs like the definition of membership, the communication channels, we could see on the slide at the communication channel used mostly by communities across the continent is of course the Facebook family, Telegram catching up, OSM forum and then the mailing list even though it's available per country depending on the, it's less used. That tells you about less email culture or mailing list culture. So organizations or these communities are informally structured and then it's not compared to how you have OSM structures in Europe or across, but this is varying, it's varying depending on where we are. 
So also there are some issues in communities but quickly we could just talk about successes. You could see there have been a lot of successes and counting. There is from public transport mapping of informal transit called paratransit to from disease mapping, Ebola, community map patterns and then communities transitioning from being loosely organized, enthusiasts to becoming a legal entity that is not for profit in respective countries and communities or countries where there is none or there were no communities or any activities of locals doing open street map has changed or there is also an active community or kind of couple of people contributing or doing something related to open street map. This slide talks about challenges and then challenges of course, challenges lack of tools, basic needs from mobile phone to laptops, internet of course, internet is one issue of, we might have if you have traveled to but we are catching up if Africa has missed the copper age now that fiber optics is the future and then a couple of fiber optic supplies keeps popping up and increasing the capital. For example in Accra there is fiber to the home so maybe hopefully we missing the copper age is a frog leap to the fiber optic age which is the future I was discussing with a friend recently and then this was what we took home from one another. Challenges continue, we have administrative challenges of course gender imbalance is a global issue and there is low open source culture but in going forward I will show this how this is increasing and we are trying to improve this as well. So we can also look at the lack of volunteer culture so that quickly brings me to the Maslows hierarchy of needs because when we are talking about individual needs in maybe other parts of the world for example in Europe, in America where the living conditions of citizens seems to be much more better you don't think or you don't care about basic needs according to Maslows triangle you focus on the psychological needs you think about friendship going forward into volunteering and following your interests passion but realize there is insufficiency I mean let me not say insufficiency there is not the motivation because there is nothing in it for you because you need the basic need you need money so if you tend to find out about open street map what first question get access, am I going to be paid for it or be enumerated for doing this so you can see people coming into felt volunteering when we run projects or activities as projects even though volunteers are required volunteers needs to be paid because these basic needs are not readily available. 
Also yeah Geofrease suggested solutions to some of these challenges you can take a look at Geofrease presentation it's on ccc media.ccc.de okay so we jump on to data so when we look at geofabric extract and we look at Africa Africa as today is 4.2 GB good compared to others maybe yeah and that looks quite good encouraging but let's go down let's look at country wise you could see from Geofrease presentation here some factors has led to the boost or increase of data for let's say two years ago when you check this Ghana was actually where is Ghana quickly find Ghana Ghana was maybe less than 20 but now is 55 that tells you there is an increase something is happening you could see Tanzania and Uganda and so on and so forth so quickly we can look at the OSM start let's see contributions per population and we zoom in you could see the contributors 9 I think as of 10th you can just go to maybe 9th January and see what happened well in Ghana I had 11 togo 7 and then 12 we go to East Africa Tanzania which has had a great success from the the Ramani Hurya project which way back in 20 I think 20 this is 2014 yeah 2014 has built the resilience academy which was a partnership with Hort and the World Bank has proven to be a success and created a strong and lasting community partnership with government using force tools of course QGS and and so on and so forth good so we can now move to local chapters no no let's talk about support so due to issues with basic needs that we are required to you know that to function to really need internet to contribute to open streets map even though you could still do stuff offline with Osmand and other couple of tools but you still need internet for the first step penetration people require basic IT or computing skills which is not readily on the table because not everyone is even though you can use a smartphone there are certain stuff that people can do out of the box maybe using a basic word processor to type set the document might be a difficult thing but you know how to use a phone to use a smart application chart application to send a message so last year from the osmf micro grants program you could see two countries from Africa we have Nairobi Kenya I will tell you more about Nairobi Kenya and there is Uganda these are interesting cities for us in relation to community growth you can see and then this is openly available on osmwiki you can take a look at and the project is about details are here and so on and etc and then also Kenya Nairobi as well I think I open this twice can close this so for growth of communities we all know of all might have heard of the useful very important project one of the earliest ones map Kibera project that talks about mapping the slam in Kenya Kibera project that support resources and then information readily available to authorities and now moving to action also the 2014 Ebola response from HOT the Mediterranean Open Street map team and other mapping activities has contributed to of course the increase in this number of data we can also now jump to yeah HOT micro grant from I think 2020 or 2019 has also contributed to some of these and also the where is this the open cities Africa project I think I think I'll do open cities yeah the open cities project which has taken place in a couple of countries across the continent from Abigant to Zanzibar in Tanzania has also improved and then with the search communities as also work with communities to create important and live data which is accessible to government and citizens through the 
World Bank support and its partners so jumping to local chapter so in the beginning I think I mentioned that not all chapter so the community seems to be well formally structured and then you know the requirements to become a local chapter affiliated to the OSMF OSMF Foundation so last year we had as they say Republic Democratic of Congo becoming the first chapter official OSM chapter in the continent on the African continent so yeah we moving from somewhere to somewhere as well now I will jump to community there is state of the map global which we all attend and the state of the map Africa which state of the map organization of African communities was from a conversation I think some years back 2014 where a couple of us just random thoughts met in Tanzania and decided to okay and then this became a reality so from 2019 we had state of the map in Côte d'Ivoire that is Gran Basam and Abigant of course in Africa you know we love football a lot and football is very unified global game changer we all love football across the globe we all love football from the English Premier League to Bundesliga and so on La Liga and Coen we follow and we love them so from the first edition of state of the map Africa which was in Kampala in 2017 yeah in 2017 the last day saw a football game and then in 2019 when we were in Abigant also we had a football game so this a couple of photos from this from 2014 and then 2017 excuse me and then the recent one from 2019 where we were lucky to have of course Steve Coste founder of WSM present who attended this event which was in partnership it was jointly held with understanding risk of the World Bank and then Open Street Map Côte d'Ivoire and the OSMF Open Street Map Africa community we also had support from couple of partners and then grateful to OSMF and others for supporting us and giving us some level of support some level of support during this organization so this is from 2019 we had 19 participants which was we had a map at one during the event and this 26,000 to 3000 changes of course we threw money raised from sponsors and donations we were able to offer 19 travel grants with accommodation travel that is for if it's by road or by flight and then some stipend to 19 from West East and Southern Africa and we had 37 nationalities and it was a pretty interesting venue for this and then these were the partners and sponsors we had and then we supported by this OSMF, OSM France and then OSU of course and OMA in Kavdo so next year not to jump okay so we in Open Street Map communities across the continent also there is the usage of free and open source software especially in software coming from OSU, OSU Live is an interesting project that we try to use for our webshops and then because it's a liners distribution based on Ubuntu and contains of course JoSM and a couple of other tools that we use so QGI's which is Dominance Carta Cross we use it a lot I use QGI's personal everyday and then QGI's is used for the Ramani Huria project you can see QGI's were used to produce a map in Open Cities project you can see most all maps from Accra where I was involved in the project were produced with QGI's so we love QGI's we love Open Source and we love Phosphor G as well and the how do you call it reception the growth is kind of increasing gradually so in 2021 which is this year State of the map Africa will be in Kenya Nairobi and then okay there's an issue here it's supposed to be 2021 so that will be fixed so it's 2021 Kenya Nairobi we are looking at having of course more than 
this participants and currently we are looking for partners sponsors and donations whatever means to and then participants so if you want to help in any way just get in touch with the team and then we will get back to you and see how possible you could support participate and of course when we talk about Africa people not from Africa the first perception or what comes into the mind of course for a child cartography map I saw recently the demonstration of Africa was with animals of course so not to say that people from Africa are animals I'm from Africa I'm an African my national cities African Africa is not a country and so this epic picture that was used for the bid for state of the map Africa 2021 was a giraffe so yeah we invite you to state of the map Africa 2021 in Kenya Nairobi to have fun meet the community learn and share and then also get to know what we have to what we're doing connect opportunities and so on and so forth and just to end the presentation we like to show this video which you can check out from the address is on YouTube from DFDR from the last year's state of the map Africa we heard in this showcasing changes across Open Street map in Africa there's some selected cities some of them might have been just put this to two and then I'll mutate and then some of them mostly cities where they have been enormous you could see a distance and here probably know is South Africa Johannesburg forgive me and then we're into West Africa yeah West Africa has been really we say West Africa is seen a lot of growth because West Africa's active communities from across Nigeria to Ghana, Côte d'Ivoire, Bokina, communities seems to be growing faster and faster and then it's going to grow in North West Africa starting our issues and challenges so yeah the next time you are traveling to an African country brace yourself to meet the community and I invite you especially to state of the map Africa 2020 in Kenya Nairobi thank you very much and I leave you with this yeah go back to the last slide I live with this let the force be with you and thanks for watching bye
|
This presentation provides a brief overview on the rise of OpenStreetMap communities and activities on the Africa continent.
|
10.5446/52530 (DOI)
|
Hello everybody, welcome to our presentation. We give introduction to the OSGEO today. My name is Till Adams. I'm the chair of the OSGEO board and my colleague is... Hi, I'm Andilos Drotsos. I'm the OSGEO president and I'm happy to be on a Bosdum conference. So we're talking about the open source geospatial foundation, our OSGEO in short. And the goal is for us is to empower everyone with open source geospatial inspiration, software, whatever you want. And we're going to present some aspects of OSGEO foundation now. So, yeah, we want to empower everyone with open source geospatial. And how do we do that? We can state that OSGEO is a not-for-profit or a nonprofit software foundation. And what we do is we provide financial, organizational and also legal support to our project. What we also do is an outreach and advocacy. So we try to promote the global adoption of open source geospatial technology all over the globe. We have a lot of partnerships on open approach to standards, to data, to education. And one of the most important things is OSGEO is volunteer driven. So if you look about on a map of our members, you see we have passionate members from all around the world. If you look to Australia, New Zealand, Africa, South America, North America, Europe, all over the globe, you will find people who are active members of OSGEO. So how do we try to reach our goals? How do we try to empower everyone with this open geospatial? I think one of the most important things is open source. So as you all know, we are at FOSDEM here. Open source is a collaborative approach to software development. We try to support our open source projects or open source development. But we also have a focus on open data. There was a really, really important talk on one of our conferences a few years ago where somebody said, your software is useless without data. Data should be freely available. Everybody should be able to use it as he likes to or she likes to. Of course, we also have a focus on open standards because we want to avoid login with inter-rubble software. So in our world, we have a huge standardisation organisation. It's called the Open Geospatial Consortium or OTC. Maybe you heard about that, which is really important partner for us, but also other things. Of course, we are also keen on open education in order to remove the barriers to learning and teaching. Of course, with our software, we have a huge collection of workshops, training material, whatever you can find on our website or our wiki. So everybody is able to take that material and to start their own training course on our software. We have also a focus on open science because of course, if everybody shares data and software, this really could lead to responsible research. So this is also an important part of our goals. We are supported by sponsors worldwide. If you're not on the list, no problem. You can get on the list. We are open for sponsorship. Of course, these are just a few companies, but if you know at least a few of them, you can see we are supported from companies all over the world as well. And yeah, the most important thing is, OSGEO is volunteer driven. I said that before, but if you look at that picture there, you see hundreds of individuals. They all have fun together. This one was taken at FOSFORG, which is our main global event we normally have once a year. That one is very special for me because that was 2016 in Bonn and I had the honor to chair this conference. 
But what I want to say is all we do is we do it in a kind of doocracy. So whenever you're inside the OSGEO environment and inside OSGEO world and you have an idea, and you want to find people who share this idea, who help you with your idea, the idea is conform, goes conform with our goals. Just do it. So we are open to anybody who takes the initiative to do something. Yeah, this is how the membership in OSGEO works. So the most important point is point three. If you want to get a member, which is of course open and free, you just self declare that on our wiki. So you care that you get a wiki account and then you're a member of OSGEO. That's as easy as it could be. I'm not sure whether the number is totally up to date, but there are about 1,250, 1,300 members worldwide actually on our website. Once being a member or even not, if you're not a member, you can participate in our mailing list. You can take part in our events. Of course, every project is open for help. So you can send pull requests, write documentation, report bugs, all the regular stuff you can do on open source projects. Actually, we have about, or I think the number is from November last year, we had about nearly 36,000 unique subscribers on our mailing list. If you're a member and you can get elected to be a charter member, and as a charter member, your only duty is that you can then participate on the vote on the board elections. So once a year, we have the board elections. And to get a charter member, another charter member has to propose you as a charter member. Charter member trip is normally a lifetime thing. So from the, I think most of the charter members of the 487 we have for now are active members. There's the board of directors, which is Angela's, our president, and I'm also a member of the board of directors. We have, in the moment, we have nine directors in a two year term, which is one year we vote for and next year we write a vote for five new directors. So there's a vote every year. And yeah, I would say the most important thing we have is not the board or the the Hirachi we have in our foundation. Most important thing are our committees. So if you can see on the left side of the slide, we have a lot of committees which are more or less self organized. There's a code of conduct committee. There's conference committee which supports our mainly our main globally when we have once a year. The conference committee consists mainly of former chairs of former global conferences. There's a committee called due for all which cares about all kinds of open education. There's an incubation committee that cares about that if new projects come in to check whether they go along with our rules in order to get it incubated project and in OSG. We have a marketing committee. There's a marketing material spread the words or power our social media channels, whatever we have an open Geo Science committee. We have all the project steering committees which are really important. So in every open OSG project, there's one project steering committee that cares about the software related things of the various projects we have. There's the United Nations committee. And very important also the system administration committee the suck which cares all about our server infrastructures and stuff like that. So we deliver server infrastructure of course for our mailing list for our projects we host projects there and stuff like that. So this is what we all do. And of course, every of these committees also open. 
And if you have ideas and want to participate in one of the projects it's more or less just going to the mailing list and say hi, I'm here I want to I want to help. This is about our projects. So there's a link to find all of our projects. So we have two kinds of OSG projects. The one kind are all our official OSG projects in the moment there 21 of them. And they also already passed graduation through the incubation committee as I said before, and there are some rules you have to you have to follow and get in order to get an official OSG project. And the barrier, a little bit lower are our community projects. In the moment we have 24 community projects we are really happy to have new to new community projects who came into OSG last year. And there are many more projects are pending. And this is a normal way if you have an open source project related to due spatial if you want to incubate it to those to you. You normally start as community project and then you go the next steps in order to get a full incubate it's OSG project. Okay, that's from my side and I think now, Andriyla, it's time to pass over the ball to Athens. Thank you. So, let's, let's see a bit more details about our projects, these are the projects that have been graduated with, which means that they have passed all the quality criteria that the incubation committee is setting for each project to be able to graduate the graduation process. So we, we have several categories of projects in OSG like the, the special content management systems, desktop desktop applications, libraries, metadata catalogs, web mapping servers, spatial databases and we are, these are the main categories we have. Those, those projects have been very widely used. And once you find some of them in many, many important deployments over the world, like national, the SDI special infrastructures or open data portals, or even on daily, daily usage by important projects all around the world. So just one more comment on that we also have a project that is special to us is the OSG live project which we use for marketing purposes, but also for demonstration purposes and there's a dedicated talk about the OSG live project coming up later by Astrid. So in the OSG projects, you can find projects that are very, very old, like grass is over 30 years old now, but you can find newer, more, more young projects but the quality criteria for us is to have sustainable communities and be able to, to, to, to, to, for the projects to develop a long time. One more detail about the projects is that you assign an officer for each project so it's not just this project steering committee, but we have officers and those officers are responsible for reporting to the OSG board yearly. So that the OSG board had an overview of, you know, if there's a problem with the project or if there is a need of supporting either with funding or legal support or any other kind of support that we can provide to our projects for for projects that have not yet graduated the incubation process we have the community project projects the community program, where smaller projects or projects that have not yet reached the incubation status can can apply. And this is a very low barrier and point for projects, some of those projects have applied for incubation, some of them have not yet applied but that's, that's not the point the point is that we have a community and the community wants new projects and as technology goes forward. We are, we also need to have more products included. 
So, at this, at this moment, maybe it's a good time to go to the, to go to the website actually and, and show you a bit about the website of Osgeo. This is the Osgeo.org is our homepage. This is where you can find all the information about Osgeo, the structure, the resources, the, the, the project that we have. If you don't know which projects fit your needs, you can easily go through our wizard, we have a choose a project wizard where you can, you can start looking and we are, we are trying to help everyone to find an appropriate open source technology for their needs. This website will help you find the appropriate software and technologies available from the Osgeo stack. And also you can find the projects here in a list or you can, you can find dedicated project pages where we have a community picture or we can direct somebody to a demo of the project or even to the website of the official website of the project and, and you can find more information about this about features, standards that are implemented and who are the developers behind the project. Then we have several resources about Osgeo, like what, who, who is on the board and who are the officers. We have news events, all of these, all of this information is listed on the Osgeo website. How somebody can contribute to Osgeo, this is, this is one, one very important aspect of Osgeo since we are a nonprofit and volunteer based organization. If we need all the contribution we can get volunteers are the heart of the organization and without them, the organization cannot work. There are many ways that somebody can contribute joining and contributing to a project is the, is the first one on the list, because contribution to a project will will help the project grow. And, and this can be done with, with several ways, either with full request and code by sponsorship, or with writing documentation translations, doing presentations, filing bug reports. So, if you can join a committee is a very important thing because committees are the, the, the, the, are the groups that are doing the heavy, the heavy work in the organization. Then we have initiatives initiatives can be either joined or created we are open to create new initiatives if there's a group of people that are very interested in a special topic. Important things that you, somebody can do is host and organize events, like coach prints like conferences, and also we need help without rates, like people that can help us maintain the website, or actually do some some promotion of open source just special in governments organizations, companies, etc. Also, we have a partnership built up process. Also, another important thing is that somebody can build or join a local chapter, because OSGO is, is everywhere in the world and we have local chapters, some of them are official in terms of they have legal entities behind them. Some of them are not official, but they are local groups, local communities of people that are joining forces to promote OSG on a, on a local level. OSGO has obviously been affected by the COVID pandemic this year. And we, we were not able to do our international post for G this year. So, post for G is the global conference that we organize as still mentioned earlier. And we are, we are now in the process of planning the post for G 2021. And we have not yet reached the decision whether this is going to be an on site or an online event. 
Original post for G conferences showed up and happened during lockdown, mostly with online events, and many local chapters stepped up and made a local event. And actually, because it was virtual, more people were able to join all around the world. We also did in November a virtual code sprint. Now we are on on the process of organizing a new code sprint in February with Apache Software Foundation and also open just personal consortium. Now, a bit more information about the post for G 2021, which is going to be hosted in Buenos Aires, it's, it's planned to happen in late September. This year, there's a big local committee, local organizing committee that is working very hard to make the conference happen again. There's a website where you can already register and and tickets are already been out. And there's an open call for papers. So with that, I want to thank you. And I hope you and you are enjoying post them this year, and hope to see you in person.
|
The Open Source Geospatial Foundation (OSGeo) is a not-for-profit organization whose mission is to foster global adoption of open geospatial technology by being an inclusive software foundation devoted to an open philosophy and participatory community driven development.
|
10.5446/52535 (DOI)
|
Okay, so this is a spontaneous live panel because one talk didn't take place today, so we have the opportunity to chat with you and talk about Geospatial, open source Geospatial software, open data and whatever you like. So ask us some questions and maybe before we start we make an introduction. So some of us you may know already from our talks, but maybe not the whole background and I would suggest we start with maybe Enoch. That was on the spot. Okay, so my name is Enoch. I come from Ghana, but I'm studying now in Munich. I do a lot of open street map every day. I try to map something in open street map. Of course, I allow open street map. I use free and open source and I'm part of several like a couple of open street open source communities like OSU and so on and so forth. So yeah, I am open. Thank you. So maybe Vero, you could introduce yourself. Yes, sure. Okay, good morning everyone here in Argentina. It's 9.52 am. I'm Veronica, as I said from Argentina. I am part of the grass development team. But my background is in biology and I work in applications of remote sensing and GIS into epidemiology and public health issues. So basically I use grass a lot to process images and then create models and make risk maps for different diseases or distribution maps for vectors of such diseases. Thank you. Okay, Luca. The echo is my fault, sorry, but and I'm looking at the lucky I'm working in Italy in a private company in the research center. I'm doing a little bit of everything in GIS database, web GIS and a little bit of analysis and I part of the grass development team and I'm trying to helping other projects like sometime I translate something for GeoLive or something for grass and I'm part of the Italian community of open stream and GeoLive called GeFos. Okay, good. So, Angelos, maybe it's your turn now. Yeah, hello everyone. I'm Angelos, located in Athens, Greece. I'm the OSGO president currently and I'm involved in many OSGO projects. I'm the terror for GeoLive and I contributed in many GeoFifem, stack projects. I'm involved for many years in open source, geospatial and supporting open source wherever I can. My background is I'm a surveyor engineer and I also have a degree in remote sensing. Okay, so last is me. So, my name is Astrid Emde. I'm from Cologne and I'm in the OSGO community, very active. I'm in the OSGO board with Angelos and I work with a lot of OSGO software in my daily life. I work as a GIS consultant in Bonn and we do Geoportable solutions and through this work, I'm here to know all these great software and I'm active in the OSGO live team. I am in the NAP and the team and yeah, it's great to have all these solutions and to work with all these solutions. So, I'm going to try to do courses and try to spread the knowledge about the software around the world. So, let's see what we could discuss. We have no questions yet. So, maybe we discuss about the impact of our regional conferences in our community. So, for example, in Germany, we have a big conference or not in Germany, but in the German language community, the local chapter, the German language local chapter, we have a yearly conference, it's called FOSCUS conference and it has more than 500 participants normally and I have the feeling that these conferences are very important to bring people together to learn about the software and to spread the idea of open source software and our tools around the world and our FOSCUS conference is together with OpenStreetMap community. 
So, we spread software ideas and also OpenStreetMap around the world and it would be interesting to know whether you have this as well. So, we heard from Wynok about state of the map already, but Bero, maybe how is it in Argentina? Yes, here we, well, we know how I think since last year officially or the year before, the local chapter, like the Argentinian local chapter and we are actually organizing the international FOSC for 2021 in Buenos Aires last week of September, first day of October. So, registers and your papers, till now we are thinking on a presidential meeting, let's say in person and we'll see maybe March, April if we have to switch to online or not according to how this pandemic evolves. So, the community here is growing. We started, it started mostly in Buenos Aires, a small group of people working in the National Geographic Institute and the space agency and so on and then it started growing. There are more companies now doing geospatial developments and also different governments or city governments creating their special infrastructures and so on and so they are like requiring, let's say, more and more geospatial knowledge. Yeah, and then in my, so I work in the space agency as a researcher and there we use all pre-software, free and open source software from OSTEO basically for our classes and all the products that we develop are with open source tools and open and so on. So, we try to push that forward. That's what I can tell so far. Great, so we are hoping to have a force for G within this year in Argentina. It's COVID permits, right? So, yeah, but at the same time we are still trying, OSTEO is trying to make more online events. We had the annual meeting earlier this year. We had lots of local force for G happening which, by turning them into an online event, they stopped being local anymore. So, it's easier for people to join and instead of traveling around the world now it's easy to join a force for G so personally I was able to join force for local force for G that I was not able to join in the past. And also we have been, we unfortunately were not able to do the coach printing Athens in 2020 and that became an online event in November and we are still trying to do more online code prints and by the way we have a joint code print with OGC and Apache foundation with OSTEO which is happening on 17th of February so I'm just sending the link on the main channel so that people can register. So, please register and join us. We are going to work, three organizations are going to work together, many projects around GEO special, it's not only about doing open standards there, it's going to be regular code print projects can show up and work on what they think they should be working on. Obviously it's a very good opportunity if you want to interact with those three foundations so if interested in doing work on standards but if you are an OSTEO project or an Apache project then you can obviously talk with standard working groups, members and people who are involved in all these foundations. So, yeah, we are going to try and do more online events until we can meet face to face sometime soon hopefully. Do we have any questions, Tuasir? No, not yet. No, I think you spoke about how to join OSTEO so that is fine, yeah, but yeah, maybe you just invite people to OSTEO Live, yeah, and if you see OSTEO Live not in your language, yeah, as I also try to contribute to OSTEO Live and OSTEO Community, we invite you to join us, yeah, translate OSTEO Live into your language as well, yeah. 
And we are open so feel free. Yeah, we are in the process of doing OSTEO Live 14 release. Sometimes soon we have already entered in beta states so that means that things are getting pretty stable around this time of the year. And we are hoping to make a release within the next couple of months, maybe sooner, depending on how much feedback we will get from testers all around the world, we do have a beta version available for everybody to download and test and provide feedback. We are obviously asking people to do translations and improve the documentation for OSTEO Live. And even projects that are willing to start working on their next, being included in the next version of OSTEO Live, it's never late or too early to join the meeting this and ask if somebody can join or send a new project. And as in all the community stuff that we are doing in OSTEO, we need all the volunteers we can get. It's always a matter of having people volunteering and doing the work in all our community committees and working groups. I think Andalus, that's well said, to invite more people to join and maybe the community sprint in February is a joint sprint is a good occasion to join because we have maybe some little bugs left for version 14 of OSTEO Live and could collaborate to work on them. Yeah, maybe I just talk a bit about the state of the map Africa once again CSR if you're watching us. Yeah, we plan to have state of the map Africa just like state of the map global. We are like minded organizations, OpenStreetMapFoundation, OSU, we are all in the open sphere. So yeah, in November 2021, we plan to have state of the map Africa last two years that was 2019. We are fortunate to have Steve Coast in West Africa. He stayed with us for the entire time of state of the map Africa. It was interesting one. So yeah, we need some support. We need some sponsorship. Yeah, if you can just get in touch and we are still preparing to have it remotely if things don't turn out to be how we expect it to be. Yeah. Thanks for inviting again. Welcome. And I think the great goal of OSTEO is to bring people together and to bring projects together. So in the last 15 years, we try to act as an umbrella organization which cares for projects for communities that build up local chapters. And we have our collaborations with other partners like the OGC or the use mappers. And yeah, our organization lives from the people that form the organization. And we hope that we can provide a good infrastructure and communication channels to support the people and the projects. And who's he chatted in the chat that there's for example, the Netherlands local chapter, they are quite active. And they have a great show that they have. I think it's monthly. And there you can join and take part at the quiz and get to know other people from your local community. So every local chapter from every region can do great things and it's a bureaucracy and you can define for your community what sort of projects you would like to work on. And if you would like to join with your project, you are welcome as you might have heard in the morning in the presentation from Angela's and still we have over 20 projects already in OSTEO but you still could join there's an incubation process that you have to follow and then after you pass the incubation you could be part of the OSTEO community and profit from all the advertisement that we do from all the conferences and supports that you get as an OSTEO project. 
And if you are a company and would like to sponsor with you, you are welcome as well. You could sponsor our great post for G events. And you could also sponsor always directly and we try to provide the money then and give the money back to our community, to the projects, to the local chapters so they can work on great things with this investment. Like for example for Swagin Argentina. And you can see our website I think. Yeah, Angelo's are you sharing your screen. Yeah, yeah, I'm sharing so that you know people can see how to how to find the projects. How to find information about the projects about the graduate projects or the community projects and and find more information about OSTEO, like how to sponsor how to how to become a committee member. And what is what what is the organization of OSTEO and general so either the website is a great source of information. We can find more details about how to contribute. What is the code of conduct, how you can become a member and and get started with your contributions in open source and special in general. So, yeah. We have a get it getting started page maybe you can show that as well until us. So there's a link for the first steps. We describe how you can get started you find it at I think community. The second from the top getting started. So, if you go there we describe step by step what you can do to join our community so you can become a member you can check which local chapter you may belong to. You can explore the wiki you can join some mailing lists and find out which activity would like to do in OSTEO and where you would like to get active and then contact the people. And it's a very welcoming community so people are friendly on the mailing list and communication so don't hesitate to get involved. For example, if you are a company, it could be interesting for you as well to to get active in OSTEO because we have a service provider page. So, where it is you could list your company and people could find out what service you can offer regarding open source software. So, for your company it's a good advertisement and if you are looking for a service provider this is a good point where you could search for some skills that you are looking for. So, this is open for everyone so you could register your company there and you see we have a lot of companies registered there already. And if you then decide to become a sponsor that's great as well. So, we have this sponsors page where you can see that we have great sponsors already. And, yeah, you will be listed here and profit from the visibility and we have a page as well where you can find out how our sponsorship works. You could also sponsor directly via PayPal or our GitHub and it's a great way for you to give something back to our projects that provides free software for you to use. And we should just have that we have a policy that if somebody is sponsoring an OSTEO event like the Foss4G or a code sprint then we actually we take this contribution into account for the sponsorship level. So, if for example somebody sponsors Foss4G with an amount of money that can be listed under for example Gold Sponsor or OSTEO then we will add you as a Gold Sponsor even if you have just sponsored the event and not OSTEO directly. So, we are trying to list everyone that is contributing to one of our events or even to the project directly. So, this is a summary of all the contributions here in the OSTEO sponsorship. So, maybe we could change the topic and share our experience in education. 
So, for example, Enoch, you are at university and you experienced different universities and maybe others of us as well and what do you think how much open source software and OSTEO software is already educated at universities. What are your experiences there? Well, I would say, yeah, of course, OSTEO projects of course are used in my curriculum, even though the proprietary counterparts are used as well. There is always, it's not completely dependent on proprietary. So, you see QGIS, GDA, GRAS, we use them. For course, I'm taking, you see QGIS is the last one we're using for remote sensing and some stuff. So, yeah, OSTEO projects find this way everywhere. And I personally use it outside school for my consulting, freelancing, almost everyday. I use OSTEO projects. That's how come I also try to contribute a little in one way or the other that I can do with my time. Okay, good feedback. Yeah, so maybe Angelos can tell us who is an academic, so Professor, I can tell us how he uses it. Well, yeah, I'm both in the academia and the private sector. So, yeah, I have been using the OSTEO Live for teaching classes for more than a decade now. It's so easy to deploy on systems and students with the tools to do their work. It's also easier to reach places like to distribute in conferences. Yeah, it's a great tool. But at the end of the day, it's people that are sharing experience that make the difference. So, it's important for people to share also their experience in the software and showcase their expertise using the tools. So, it's a great way to introduce people, but it's so flexible as a tool in academia and easy to deploy. I have seen OSTEO Live in places that I have not expected, to be honest. I recently found out that OSTEO Live is distributed directly in a Diaz infrastructure directly for RISA. So, it's been used in wider community in GioSpatial. The education is a very, very important aspect of what we should be doing because that's how people get familiar with the software and the values of open source and the ethical part of being open and free. Yeah, and I think 2019 too when I was talking about OSTEO Live as OTM Africa, I met a professor from Cameroon who was interesting. They use OSTEO Live for most of their courses or trainings they offer. So, it was interesting and that was when we had an input that if it's possible to be able to save items like have the USB keys that is mounted in a persistent mode. And it was interesting to get feedback from those who are using it for education as well. My experience was with students. We have internships at the company I work at and often people come and they don't know about OSTEO software and I'm very astonished because I thought, okay, we should use it at university and get to know it there already. But yeah, after some weeks of their internship, they're really happy to get to know the software stack and work with it. So, I think it's OSTEO software arrived already at many universities worldwide, but there are still some left where maybe it depends on the people who teach them where they, it's not, maybe it's not part of the curriculum yet. Yes, indeed. I think it depends quite a lot on the person teaching, on the teacher, let's say. But also, for example, here, so in the space agency, we have this agreement with the university and we teach together, let's say, a master of science program. And in the last two years, we started shifting everything to open source. 
On one side, because of the philosophy of open source, but also because otherwise the first class was to teach the students how to crack this software. And that is just come on, no, you cannot go that way. So let's directly, like, do the shift. And let's go open source. So everything, all the computers have Linux now and we shift like so. Yeah, this year, yeah, grass became part of the, of the subjects, let's say, of the of the master science program to process time series, to process, imagery, and so on when just before the years before they used to use envy for itself. And so we are teaching of your toolbox and grass and of course, good GIS and so on. So, yeah. Great. Yes. And they are all on OSU live so people can go and work with OSU live and have the whole stack. Yeah, exactly. Exactly. And when, when it is so on classes before, let's say, they work on the on a classroom where all computers have a boom too. So we have like this idea of department and they said everything. And now, when we teach online. So last year, it was more. Okay, please. You have these options. You either go to your life or virtual machines or try to install the software on your windows, but we are using this. So, and do you think for windows is also really cool. And, but do you think students like Linux distributions after they got to know it or is it a big challenge for them to work with you. Yeah, if they are just, how to say, like, very used to click some buttons and so on. Maybe it's a bit difficult in the beginning. But then, when they see that it is much smoother. When you use Linux and you have control of the things that are happening there. Eventually they do the shift and the switch. So, and last month I got one of the students from this previous course. Hey, very, how do I do a dual boot because and which Linux distribution do you recommend. Go, go with federal. So, yeah, it's a little bit, but it's like a whole conversion. I'm pretty sure there will be a talking about them about, you know, on the desktop. It's happening every year right when it will be the year of Linux on the desktop. I don't think we can answer that. But I think the students are getting more familiar with the interface. So basically, they're, they're, you know, it's wider use than it was 10 or 15 years ago. So, hopefully, it's not a barrier for students to use the software as a slightly different interface that they're used to. So maybe just to add to what very when Angelo said, yeah, it depends on the lecturer, the professor, the one in charge. So I had one experience, it didn't have not been as in lecture in academics, because a training. The client is like, okay, why not use this one as like, okay, if it is this software, I am not going to teach, you can look for another. And it's like, okay, then we are okay with it. And so sometimes it depends, it always depends on who is in charge. So I said no, I'm not going to teach a private software if that is it, then I am not interested as a okay, then come teach this one instead. And those QGIs, of course, so it always depends and the crack stuff is not a secret. It's all over. It's everywhere. So not only in Argentina. We have a new guest here. We die. I hope I was right. You will be in the panel soon with your talk as well. And we have this live panel now and discuss about we see you and how it is in use at universities and the world wide and maybe you could introduce yourself and share your experience. Sure. Can you hear me? Yes. Okay, great. Great. Yeah. 
I tried to find a way to join this conference. It was kind of tricky. So anyway, yeah, I'm teaching a geospatial science and computing at the University of North Georgia, and the state of Georgia in the United States. And, well, we mainly teach, unfortunately, as a res object as pro. So I do have a question about this, actually. So from the students point of view, maybe more beneficial for them to run as a result here because a lot of employers they use object as software in May. So when they join their employer, they may have to use up mad objects pro, whatever proprietary software. Now, I'm the only one who can teach. QGIS maybe grass. But I'm kind of, I don't know all my colleagues they all use up to us for teaching. So if I started teaching grass or QGIS, they may get lost. I'm doing here and the UI is very different from as a software. So I was wondering how you can handle that or even even teach as a res of their at all. And you teach JS. So I have the feeling that in Germany, for example, it changed that if you are looking for a job, you are asked to know about you self as well and not it's getting less and less that as res of her is so important. And at some universities, they don't teach only the software. They teach the concept. So maybe they will tell you about the section or buffering and then they let it up to you whether you use as a software to do it or QGIS or grass is or whatever you want. So the important thing that you should learn as a student is to learn about concepts. The end of software behind it. Maybe it doesn't matter if you know the concept. Okay, yeah, I think that's that's good idea. In the US, I think it's more like a US problem, I guess, then. And all the other companies they want to use non open source because they want to blame or something if something happens. They have to. They should have someone to blame on their problems. But are you involved in the OU community with your university or in due for all with all the labs at university. I'm a grass developer. Yeah, but my, my departments is not really open source. Friendly at this point. So maybe most of my colleagues there, they are more like scientists than then developers like me. So they just use as this tool for their work. So they, they want to teach more as we products than any, any other open source products. So that's kind of my dilemma at this point. Or that's what we discussed before that it always depends on the people. And you have, yeah, you need people that push forward open source. In your institution and. Yeah, to make it happen. I know your team is not. I'm trying to push at least I'm trying to push. Yeah, so for me as a student, the university doesn't know the department made it easier that the tutorials, maybe in aggis pro but the professor doesn't care what you used to accomplish it so yeah, he uses aggis to do it and I use qgis. And I use qgis for the tutorials and still, I still get the same results so there's flexibility and then it depends, like you said so. Yes, and you'll see you can. I was, I was about to say that the change happens gradually so yeah, I mean 10 or 20 years ago, people were using only proprietary software for doing their research and now you see with Jupiter notebooks Python passing up on machine learning, you know, it's becoming mainstream. So it's a matter of time until people realize that they can combine this open source software with their paper they're open, the open science do open publications. 
So, at least on my department, this happened slowly but, but effectively and at some point, you know, even teachers who were not in the open source group, let's say, they started realizing that you know, students were asking questions they knew the software exists they knew from other classes that you know they can do something for free instead of having to pay for thousands of dollars when they become professionals. So, it's the options is the variety and being able to show students that there are options is something that brings the change. And even with one class being being being taught in a university, people will know and they will, they will ask for more open source at the end. And also it's important to, in my opinion, to try to create open source groups, because those will attract students, students get fascinated by, you know, things that other students do not only by seeing this. They see this feature in my stuff. But if you create a group of students that will commit to an open source community within your university, then more people are attracted that easier because it's, you know, year to year. And that is what I have seen in my university. We were able to create a strong group of students that could interact with other students. Yeah, I think, and Anjula is here right with this, this idea. And we will hear a talk about the use mappers from Laura later this day and then you can see how how effective this is spreading the idea in between the students. And, Johan, you joined us here. Would you like to say something as well? Okay, I'm not sure. So, let's see. And we are nearly at the end of our panel discussion now. We are looking forward to the next talk in some minutes and have to wrap up now. It's great to have you here this panel to discuss a bit about you and the position of OSTU at universities and hopefully we can go on with the discussion and see you at other events at the FOSDEM or online events worldwide or hopefully in Argentina. Yeah, and hopefully in Argentina. Yeah, I think the whole community would like to meet again in person because we can compare online events with a face to face meeting. So hopefully this pandemic will be under control soon and we all will stay healthy.
|
Panel discussion or live demo tbd. A talk was cancelled so we have a free slot.
|
10.5446/52536 (DOI)
|
Hello, I'm Ilya Zverev. I live in a mikroraion, which is the Russian term for a residential area that is usually separated from other districts by wide roads or natural features like parks and rivers. It is very small, you could walk across it in about ten minutes, but it is very densely built up: my residential area has around 40 apartment buildings, 10 to 25 storeys high. So this dense area is pretty interesting in itself, but it is also interesting in terms of commercial activity, because almost every building here has a separate floor dedicated to commercial spaces, and there are three or four shopping centres in the area. When I started living here, and even after two years of living in the area, I still didn't know half of the shops here. And that was frustrating, because I'm a mapper, so my urge is to map everything, to help other people find and discover new shops and amenities. But I didn't know how. Since I'm from OpenStreetMap, I could survey these shops and amenities into OpenStreetMap, so people would open, for example, Maps.me and find these shops. It's not a perfect solution, because there are so many shops that they would just overlap each other, so you wouldn't discover anything. And mapping in OpenStreetMap: well, OpenStreetMap has been around for 16 years, and creating a map, adding new venues, is very simple and straightforward, but keeping the map up to date is still virtually impossible. I could make an interactive web map somewhere on the web, with markers, with some discovery features, maybe with an editor. That might be better, but no: people would know nothing about my website, I would somehow have to make it known, and when you need to use it you would have to remember it. Also, when you need it, you are usually not at home at a desktop; all you have with you is a mobile phone. So you would need to open it on a phone, and it would need to be comfortable to use. So many ifs. None of the options were great, so I didn't know how to map everything, and so I didn't. Until last December, when in one of the Telegram groups for my area I noticed a link to a Telegram bot which was basically a directory of shops in the vicinity. You, for example, click Shops and then Clothes, and you get all the clothes shops in your area. Well, not much was there, because the bot had very little data, and it was a very rough, basically draft version. But when I saw it, it was like a major breakthrough for me. I suddenly understood that what I wanted could actually be done. I suddenly understood the whole process I needed to follow, how my map for the area should look, how it should be kept up to date, everything. I suddenly knew I had to make a Telegram bot. Basically, what I learned is that to make a public tool, you need to use the same medium as the public uses. If everyone in your area is on Telegram, then you make your app inside Telegram. You do not make people install anything or remember anything; it is right there. So, a Telegram bot. It has a very peculiar user interface. Basically, all you can do is send some words, receive some words, send pictures, receive pictures. And for a Telegram bot you can also make some buttons for users to click. And that's all. There is nothing you can drag, no markers, no JavaScript, nothing. Just sending and receiving text and images. So how do you make a map in there? Well, given the input box...
The obvious thought is to make it search for things. Like, you enter "pharmacy" and you get a list of pharmacies. Basically, what I made was a geocoder, which is a bit funny, because last year I was talking about a reverse geocoder and now I built a forward one. It happens. So, you get all the keywords, you get a list of pharmacies, you click on one, and you get a card for the pharmacy with all the data, like opening hours, phones, maybe comments. I couldn't help but stitch a little map together and place a marker on it, but most people actually do not understand maps. So how do you work around that? Well, I also added two photos: one of how the venue looks from the street, so you are not lost, and one of how it looks inside, so you know you are in the right room. And the buttons. When making the Telegram bot I also considered emoji. I didn't think they were useful, they don't look serious, but when all you have in your interface is text, emoji are a great replacement for icons. Basically, emoji are a set of icons everybody can use in text. If you have buttons with a string, you now have buttons with icons on the string. That's great. Emoji are great. On the street, I am often asked where some building is, because in this densely built-up area addressing is not obvious; we even have one street that splits in two across different parts of the area. To remedy that, I took a piece of satellite imagery, drew addresses and labels on top of it, and added it to the Telegram bot. So with just one click you have a map of the area which you can use for navigation. And also, the thing I'm most proud of, I believe: I surveyed every entrance in the area, and with my bot you can now type in an address and an apartment number, and it will not only show the entrance you need to use, with its photo and location, but also tell you which floor to go to in the elevator. So this is a pretty great bot. It just needs one thing to function, which I didn't have, and that is data. OpenStreetMap had only a couple of dozen such objects at the time, and there is no other source; you have to survey it yourself. To start, you need to drop some points on the map, for buildings, for entrances. And to do that I used, well, not my own solution, but one of the open source map tools, namely uMap. In there it seems pretty simple: you click "add marker", you place it on the map, you type a name, type some attributes, save, add the next marker, and so on and so on. And I got so tired of it after just a couple dozen points, because clicking, clicking, clicking and moving a mouse around is not very comfortable, and with this density of points everything overlaps and you can't see anything. And it's not specific to uMap: every point editor on the web or on the desktop, like QGIS, like geojson.io, they are all complex and hard to use when you need just one simple thing. But since I was already coding, why not make another tool? So I made a GeoJSON point editor. And with that, adding points becomes fun. Just open it, double-click to add a point, click a button to delete it, and to add attributes you just use a text area where you can write plain text, like name equals something. You can also use it to copy the same base attributes from one point to another. And basically that's all: an editor for, and export to, GeoJSON. It is very fun to use; this simple editor is a great experience. I used it to add all the buildings, all 73 entrances in my area (I believe), and around 100 shops and amenities that I had photos of. That was a great start.
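My editor simply reads and writes plain GeoJSON. The snippet below is only a toy illustration of that format, with made-up coordinates and attribute names; it is not the editor's actual code.

    import json

    def point_feature(lon, lat, **attributes):
        """One GeoJSON point feature with free-form key=value attributes."""
        return {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": dict(attributes),
        }

    collection = {
        "type": "FeatureCollection",
        "features": [
            point_feature(27.5, 53.9, name="Pharmacy", floor="1",
                          opening_hours="09:00-21:00"),
        ],
    }
    print(json.dumps(collection, ensure_ascii=False, indent=2))

Because the format is just nested dictionaries, copying "base attributes" from one point to the next is a one-line dictionary copy, which is exactly why a tiny purpose-built editor can stay so simple.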
But then, you cannot get much info on the shops and amenities around you without actually going out and surveying them. The next step is surveying, going outside. And I cannot take my amazing point editor outside; all I have on the street is my phone. And what else is on my phone? My Telegram bot. So I added, obviously, an editor to the Telegram bot. It sounds simple: there is a button, "add new amenity". When I'm near one, I just type its name and send its location, because in Telegram you can send a location to the bot. And I was genuinely surprised that Android phones determine your location even inside buildings pretty accurately; it was very handy when surveying shopping centres. So you enter some keywords and fill in the long list of attributes, starting with a description, comments, opening hours, links, phones, its address, its floor, and so on and so on. Does it accept cards? Does it have Wi-Fi? Many attributes you cannot get anywhere except by going to the venue and just checking. And on the phone it is pretty fun: you just type, type, type, then open the camera, take a photo from outside and from inside, send it to the bot, click save, and the venue is already in the database. You can forget about it, because it is already there. And this is very important, because with a two-step process you basically photograph everything, and then you have thousands of photos, and you come home and this amount of photos weighs on you until you sit down and process them one by one. And it is very tiring, because of the sheer amount of photos, the sheer amount of work. So if you can reduce a two-step process to one step, do it. It is very, very important; otherwise you get tired. And just looking around, surveying, is fun. That is why there are so many mappers in OpenStreetMap: mapping is fun and processing data is not. So I surveyed 60 to 100 shops and amenities a day. Why so many? Because every day I came back, looked at the notes I made while surveying, and improved all the little things that were bothering me while editing. For example, entering "https colon slash slash" on a phone is hard, so I can make the machine do it. Or instead of typing an address, I could just choose from a few options. I polished every small thing, so the next day I collected things faster and faster, because you should never make a human do things that a machine can do. Always think about how you can improve the user experience, because it can take five minutes of coding but save half an hour of work. In the past weeks I have surveyed roughly 400 shops and amenities. That's a lot. If I had known at the start there were so many, I'm not sure I would have started this work. But now the work is almost done: I have roughly 100 left in one of the shopping centres, and that is my plan for the next week. It will take a day, maybe two, I hope. And the bot is virtually done, and it enjoys roughly 10 users a day. Ten. Because, by the way, it is public, meant to be used by everybody in the area and by guests from other districts. The issue is marketing. Writing a bot is not enough; writing documentation and surveying the area, again, not enough, because people need to know about your software if you are making it not for a closed circle of people who know each other but for everybody. You have to market it. Right now I have published info about the bot only in a couple of Telegram groups, so roughly 300 people know about it. And I plan more.
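As an aside on the in-bot surveying editor described above: to show how little code such a flow needs, here is a minimal sketch in the style of aiogram 2.x. The token placeholder, the /add command, and the table layout are my assumptions for the sketch; the real bot does considerably more.

    import sqlite3
    from aiogram import Bot, Dispatcher, executor, types

    bot = Bot(token="PUT-YOUR-TOKEN-HERE")      # hypothetical placeholder
    dp = Dispatcher(bot)
    db = sqlite3.connect("venues.db")
    db.execute("CREATE TABLE IF NOT EXISTS venues "
               "(name TEXT, lat REAL, lon REAL, notes TEXT)")

    pending = {}  # chat id -> venue name typed by the surveyor

    @dp.message_handler(commands=["add"])
    async def start_add(message: types.Message):
        # "/add Pharmacy" remembers the name, then waits for a location
        pending[message.chat.id] = message.get_args() or "unnamed"
        await message.answer("Now send the venue location")

    @dp.message_handler(content_types=types.ContentType.LOCATION)
    async def save_location(message: types.Message):
        name = pending.pop(message.chat.id, "unnamed")
        loc = message.location
        db.execute("INSERT INTO venues VALUES (?, ?, ?, ?)",
                   (name, loc.latitude, loc.longitude, ""))
        db.commit()
        await message.answer(f"Saved {name}")

    if __name__ == "__main__":
        executor.start_polling(dp)

The point of the one-step flow is visible here: the location message handler writes straight into the database, so there is nothing left to process at home.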
I plan to contact major media in Belarus so that I reach more people, and maybe distribute stacks of flyers around the area, because I need users. Having users means more eyes, and maybe I will find some co-moderators to keep the database up to date, because there is no sense in having an up-to-date database if people are not using it. So, the first question is about open source: can I install it for my own built-up area, my residential district? And of course, yes, everything I was talking about is open source. There is just one small thing to overcome: the bot, all its strings, and all the documentation (there is a lot of documentation, I spent nearly a week writing it) are all in Russian. So you either understand Russian, or you work with me, write to me, to maybe speed up its localization and its installation; while you are surveying your area, I will be translating. But yeah, it's open source, and open source is great, because right now all you need are an idea and time. When you have both, everything else is simple; all the building blocks are there. There was this great Telegram bot library, aiogram, thanks to the authors; there was Pillow for image processing; there was SQLite, which is a great database, better than it sounds. I just had to pull these blocks together, and I got my Telegram bot. That is awesome. And if you have ideas and/or time, help make more great things. Just code. So, to reiterate the ideas of this talk: first and most important, when making a public tool, use the same medium as your target audience. When you sell, it is better to sell ice cream on a beach than at a museum. If everybody is on Telegram, do it in Telegram; if everybody is on Facebook, do it on Facebook, whatever it is. Then, don't make a human do things that a machine can do. If you can move part of the load from the human to the back end or front end, do it, because human resources are limited and machine power is practically infinite. Emoji are great for a user interface, because for a text interface they replace icons, and that is very important, because after a while people won't read the words, they will just look at the icons. And coding is much less than half of the job: you also have to collect data, write documentation, and market your software, and that takes a lot more time. The great thing about it is that it can be parallelized and it can be postponed. So, do your thing, and thank you for listening. [A question from the host is inaudible here.] But yeah, so technically it is possible, it is all compatible with OpenStreetMap, but right now this bot is made for surveying by yourself and storing your data in your own database. Yeah, you told us about your motivation, but I still want to ask what pushed you to actually do this, because you are putting a lot of information in, for the community, for yourself and for everyone. Why did you decide to choose Telegram? Of course, maybe the people in your community use Telegram more, that is why, but can you tell us a bit more? Well, as I said in the talk, when you make something for the public, use the medium that the public uses. And virtually everybody in my area is using Telegram. So Telegram is great, not because it has a great user interface or anything, but when you make your app inside it, that means people will
|
After moving to Minsk, I pondered on making a local map for my neighbourhood, with all the shops and amenities. People would visit it on the web and see where are things. Two years passed, I didn't make it. And only in December I've got an idea that would work. A community does not need no maps. What it needs is conversation. So I made a map app 2.0: one that doesn't rely on 1) web, 2) maps. Of course inside it's all about geo.
|
10.5446/52537 (DOI)
|
Hello, welcome to my talk, r.accumulate: efficient computation of hydrological parameters in GRASS. First, a little bit about myself. My name is Huidae Cho, I'm teaching geospatial science and computing at the University of North Georgia. I have been a GRASS GIS co-developer since 2000 and have 12 years of ArcGIS development experience and 10 years of water resources consulting. You can find my Git repositories from this GitHub link, and let me know if you have any questions; you have my email address there. Also, the source code of r.accumulate is available from this link. Let me introduce you to the web-based hydrological modeling system called WHydroMod. The basic idea is to integrate r.topmodel, which is a GRASS module for TOPMODEL, streamflow data from USGS, and weather data from NCDC into a seamless web-based modeling system. Here is the proof-of-concept implementation of WHydroMod for Texas. It is under the GNU Affero GPL license, version 3. I try to implement a transparent system, so the user can inspect my source code and modify it if they choose. I used an open source stack, including GRASS GIS, PostgreSQL, PyWPS, MapServer, OpenLayers, and Apache. Initially I used the Google Maps API for base maps, but because of their licensing changes I switched to the Bing Maps API. This screenshot shows what the system looks like. Here, all these dots are USGS gaging stations; blue means the station has been processed and is ready for service, and red means it has not been processed yet, but the user can initiate processing. Let's look at the typical workflow for TOPMODEL. I used the National Elevation Dataset, or NED, to delineate watersheds and calculate the longest flow paths and the wetness index inside GRASS. Then I downloaded weather data from NCDC to create input files for r.topmodel, which simulates streamflow and generates a simulated streamflow time series. Then I downloaded the observed streamflow time series from USGS and evaluated the objective function. Here my goal is to minimize the objective function by adjusting the model parameters. This is a repetitive process; it is called parameter calibration, or model optimization. Then what about the WHydroMod way? Well, I tried to automate all these manual processes into a simplified workflow. Basically, in the web app you zoom into your study area and click on a USGS gaging station, and WHydroMod will do the rest. The calibration process is not yet implemented, but I have a plan for that. What about the data flow, or request flow? The end user makes a request, then PyWPS passes the request to GRASS GIS, then GRASS pulls the data from PostgreSQL or its own database and delineates watersheds, longest flow paths, and all that. Then it downloads data from USGS and NCDC, creates input files, runs r.topmodel, and produces output files. Then MapServer renders the GRASS map data and passes it to OpenLayers so that it can draw the map in the user's browser. Again, I use the Bing Maps API instead of the Google Maps API for base maps. Now, there are some challenges in web-based modeling. Time. Including myself, web users are not patient: I don't want to wait more than several seconds for my request to complete. But it takes time to download online data, process the DEM, and create and simulate the model.
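As an illustration of the request flow just described (this is not the actual WHydroMod code), a PyWPS 4 process that receives an outlet coordinate and calls GRASS could look roughly like the sketch below. The process identifier, the inputs, and the GRASS commands are assumptions made for the example, and it presumes the WPS service runs inside a GRASS session.

    from pywps import Process, LiteralInput, LiteralOutput
    import grass.script as gs  # assumes a GRASS session/location is active

    class DelineateWatershed(Process):
        """Hypothetical WPS process: outlet coordinates in, cell count out."""

        def __init__(self):
            inputs = [LiteralInput("east", "Outlet easting", data_type="float"),
                      LiteralInput("north", "Outlet northing", data_type="float")]
            outputs = [LiteralOutput("cells", "Watershed cell count",
                                     data_type="integer")]
            super().__init__(self._handler, identifier="delineate",
                             title="Delineate watershed",
                             inputs=inputs, outputs=outputs)

        def _handler(self, request, response):
            east = request.inputs["east"][0].data
            north = request.inputs["north"][0].data
            # r.water.outlet delineates the basin draining to the given outlet,
            # using a precomputed drainage direction raster named "drain_dir"
            gs.run_command("r.water.outlet", input="drain_dir",
                           output="basin", coordinates=(east, north))
            stats = gs.parse_command("r.univar", map="basin", flags="g")
            response.outputs["cells"].data = int(float(stats["n"]))
            return response

A process like this is what PyWPS exposes to the web client, and the time it spends in the GRASS calls is exactly the latency problem discussed next.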
So I categorized USGS gage stations into already processed, which is blue, and unprocessed, which is red, and I have a cron job that automates the heavy data processing nightly, without any user requests. At the same time, the system also allows the user to initiate new processing, so later they can share their modeling results with other users. But all this is just preprocessing of data, it requires a lot of data storage on the server, and it is slow. So the ultimate goal should be to improve the performance of the geospatial computation so that the system can respond to user requests on the fly, on demand, without requiring any heavy preprocessing of data. The important hydrologic parameters here include flow direction, flow accumulation, and the longest flow path. What is flow direction? Well, the direction of flow. Based on these nine cells, the center cell and its eight neighbor cells, and their elevations, you can determine a single flow direction toward the steepest slope. These numbers follow the drainage encoding of the GRASS GIS drainage map. Flow accumulation is a raster that traces raindrops downstream following the flow directions; or sometimes you can trace upstream from the outlet. And the longest flow path: here in this figure you see some subwatersheds, the longest flow path in blue, and the red point is the outlet point. Then what is a flow path? A flow path is a water course from one point to another, and the longest flow path is simply the longest one. But there can be more than one longest flow path in some cases, if you have tributaries with the same length. Now, what was my motivation for this research and for implementing r.accumulate? The current longest flow path algorithm was developed by Peter Smith almost 25 years ago, when the data was really coarse. This algorithm produces grid output, which can be potentially dangerous in terms of generating valid hydrologic outputs; I'll cover that later. And it takes a long time, because it involves a lot of raster computation, especially when you have a lot of watersheds, so this algorithm is not really suited to a large number of watersheds. Let's look at his method. He creates a downstream flow length grid first. What is that? The downstream flow length starts from the downstream end, at the outlet, where the distance is zero. As you travel further from the outlet toward the upstream side, the distance grows, and at a headwater cell you have the maximum distance on that flow path. Then he calculates an upstream flow length grid; this is the opposite: you start from the headwater cells with distance zero, and as you travel down, the distance grows, and at the outlet you have the maximum distance. If you add these two downstream and upstream flow length rasters, you get the DFL plus UFL raster, where the cells with the maximum cell value form the longest flow path in grid form. You simply collect all those cells and convert them to vector, and there is your vector longest flow path. So this is the typical procedure for handling multiple watersheds or outlets: you have one loop, and inside that loop there is a lot of raster computation. It takes a long time to process many watersheds or outlets, so it's not good. And there is a critical problem too. In this figure, the green arrows are flow direction arrows, and the blue line is the longest flow path. On the upstream side of the figure you can see the blue line follows the green arrows really well.
And then at some point, where you have clumped cells, it suddenly skips one cell and jumps to the next cell. That invalidates the longest flow path altogether. Why? Because there is no direct flow path between these two cells. Why is it happening? That's because the raster-to-vector conversion tool is not a hydrologic tool: it doesn't understand flow direction, it doesn't understand hydrology, it just simplifies the final polyline output. So it eliminates these vertices and makes a shortcut. That is a really, really serious problem, because the final output is not hydrologically valid anymore. And there are other problems. Again, it's slow. ESRI has the Flow Length tool, but it has limitations: for example, to calculate downstream flow lengths for multiple outlets, the tool doesn't take outlet points and it doesn't take watershed polygons, so you have to clip the DEM to each single watershed boundary and run the tool individually. It just takes too long. So I employed a divide-and-conquer approach, where r.accumulate uses a purely vector-based approach. First, divide: I divided the problem of finding the longest flow path into smaller pieces in a recursive way, so I redefined the longest flow path recursively. The longest flow path at cell i is the longest flow path at cell j, one of the upstream neighbor cells of i, plus the flow path from cell j to cell i. I defined a function f(j) that returns the length of the longest flow path at cell j. Using the same definition, the length of the longest flow path at cell j plus the length of the flow path between cells j and i gives the longest flow path at cell i, if and only if there are upstream cells; in other words, LFP(i) = max over upstream j of (f(j) + length(j to i)). What if there are no upstream cells? Well, we have reached a headwater cell, so we can stop there; there are no more upstream paths, so we return 0. Now the problem becomes finding all arguments j that maximize f(j) by traversing upstream cells from the outlet, and the search stops when there are no more upstream cells. Since this approach is recursive, programmatically you can hit the limits of the stack memory and your program can easily throw stack overflow or segmentation fault errors. To avoid those potential issues, I try to eliminate non-candidate cells as early as possible. For that I defined two quantities, the longest longest flow length and the shortest longest flow length. Let's say I have two upstream cells, i and j, and I compare their minimum longest flow lengths and maximum longest flow lengths. Say cell i's minimum longest flow length is longer than cell j's maximum longest flow length. What does that mean? Cell j cannot be a candidate for the longest flow path discovery, because its potentially longest longest flow length is still shorter than the other cell i's minimum longest flow length, so I can eliminate cell j safely. Then how do you know what the longest longest flow length is? Conceptually, if you align all the upstream cells of the current cell diagonally, those cells produce the theoretically longest longest flow path, because the diagonal length of a cell is always longer than its horizontal or vertical size. And what about the shortest longest flow length? It is based on Hack's law.
If you are interested in the derivation of this equation you can check my publication, but the minimum longest flow length is s times the square root of FAC, where s is the cell size and FAC is the flow accumulation value. Even with this elimination process, I still hit one stack overflow from a very large watershed. Depending on the size of the watershed, the recursion can consume all the stack memory and cause a stack overflow. So the ultimate solution is to convert this recursive algorithm into an iterative algorithm, so that we don't use the limited call stack and instead use a heap-based stack. I implemented my own stack in heap memory, with a single while loop: I find upstream cells and push them onto the heap-based stack, pop one at a time, find more upstream cells, push them, and pop, until the stack becomes empty. An empty stack means there are no more upstream cells, so you can stop there and then trace back down from the headwater cells. Now, benchmark experiments. I may be wrong, but I assumed that commercial software must be better in terms of performance, so I wanted to compare the performance of r.accumulate with that of ArcHydro. I used this data: I downloaded 27 NED tiles, patched them, clipped them to the Georgia boundary, and generated 100 outlets randomly. I used r.accumulate and ArcHydro to calculate the longest flow paths for all 100 outlets. System specs: I used a server CPU, the memory is a little higher than regular laptops at 48 GB, Linux kernel 4.4.14, and GRASS 7.7 SVN. Results and discussion. No matter how fast your program is, it has to generate valid outputs; if it generates invalid outputs, then it doesn't matter how fast it is, it's just wrong. So I compared the outputs from r.accumulate and ArcHydro, and surprisingly I found that ArcHydro produced wrong, not-so-longest flow paths. In this figure, the red line is the ArcHydro output and the blue lines are the r.accumulate outputs. Downstream of this cell they are all identical; only this portion is different. At this point, for whatever reason, ArcHydro picked this upstream cell and traced up this way instead of that way, which is the correct way, while r.accumulate picked the right upstream cell and traced all the way up to these two longest flow paths with the same length. In other words, ArcHydro produced the wrong result. And what about performance? The y-axis is on a log scale, the blue line is ArcHydro and the red line is r.accumulate. ArcHydro's computation time grew exponentially with the size of the subwatershed, while r.accumulate's grew linearly. On average, r.accumulate was almost 100 times faster than ArcHydro, and as the watershed size grows, the gap between the two programs grows as well. r.accumulate can also calculate some other parameters, including subwatersheds and flow accumulation, from flow direction as input, so I wanted to compare its subwatershed delineation performance with that of other similar programs, including Gage Watershed from TauDEM, r.stream.basins, and ArcHydro. In that comparison, ArcHydro and r.stream.basins (the blue line) were the slowest, followed by the dual-core and then the light-green single-core runs of Gage Watershed, and r.accumulate was the best.
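To make the recursion-to-iteration conversion described earlier in this part concrete, here is a minimal Python sketch. It is not the module's actual C code, and it leaves out the pruning by the upper and lower bounds except for showing the Hack's-law lower bound as a helper; `upstream_of` and `seg_len` are assumed callables supplied by the caller.

    import math

    def lfp_length_recursive(cell, upstream_of, seg_len):
        """L(i) = max over upstream j of (L(j) + len(j, i)); 0 at a headwater.
        Deep watersheds can overflow the call stack with this form."""
        ups = upstream_of(cell)
        if not ups:
            return 0.0
        return max(lfp_length_recursive(j, upstream_of, seg_len) + seg_len(j, cell)
                   for j in ups)

    def lfp_length_iterative(outlet, upstream_of, seg_len):
        """Same result, but the traversal state lives in a heap-allocated list."""
        best = {}                       # cell -> longest upstream length so far
        stack = [(outlet, False)]
        while stack:
            cell, ready = stack.pop()
            ups = upstream_of(cell)
            if not ready:
                stack.append((cell, True))        # revisit once children are done
                stack.extend((j, False) for j in ups)
            else:
                best[cell] = max((best[j] + seg_len(j, cell) for j in ups),
                                 default=0.0)
        return best[outlet]

    def hack_lower_bound(cell_size, flow_accumulation):
        """Hack's-law-based lower bound used for pruning: s * sqrt(FAC)."""
        return cell_size * math.sqrt(flow_accumulation)

Because each cell drains to exactly one downstream cell, the upstream relation forms a tree, so the explicit-stack traversal visits every cell once and never repeats work.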
So this shows very promising performance for the new r.accumulate module, and I will try to incorporate r.accumulate into WHydroMod for the next version; hopefully I can eliminate the preprocessing steps altogether using this module. Conclusions: the fewer raster operations, the better and faster. The performance of r.accumulate grows linearly with the watershed size, while the performance of ArcHydro grows exponentially, which is pretty bad. The new approach is cost efficient and time efficient and can be used for interactive hydrologic modeling, for example web-based modeling. So I will try to incorporate this module into WHydroMod later; I have a good plan for that integration. You can find the references via these DOI links. Thank you very much, and let me know if you have any questions.
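For completeness, a hedged example of driving the module from GRASS's Python scripting interface. The option names below are written from memory and may not match the current r.accumulate manual exactly, and the outlet coordinate is made up; treat this only as a sketch and check the module's --help output first.

    import grass.script as gs

    # derive a drainage direction raster from the DEM
    gs.run_command("r.watershed", elevation="elevation",
                   drainage="drain_dir", overwrite=True)

    # accumulate flow and extract the longest flow path for one outlet
    gs.run_command("r.accumulate", direction="drain_dir",
                   accumulation="flow_accum",
                   coordinates=(-83.5, 34.5),          # hypothetical outlet
                   longest_flow_path="lfp",
                   overwrite=True)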
|
The longest flow path is one of the most important geospatial parameters that is used for hydrologic analysis and modeling. However, there are not many available GIS tools that can compute this watershed parameter. At the same time, there have been almost little to no efforts in improving its computational efficiency since its first, to the presenter's best knowledge, introduction by Smith (1995) when the geospatial data resolution was relatively coarser. In this talk, the presenter introduces a new algorithm that applies Hack's law to the discovery of the longest flow path and its efficient implementation as a GRASS module called r.accumulate. He compares its performance to that of commercial ArcHydro's Longest Flow Path tool. Lastly, he introduces a proof-of-concept version of the Web-based Hydrologic Modeling System (WHydroMod) built using GRASS, PyWPS, MapServer, and OpenLayers, and discusses how r.accumulate can be used to improve the efficiency of geospatial computation for WHydroMod.
|
10.5446/52539 (DOI)
|
Hello, welcome to the presentation on how Collabora Online development improves LibreOffice. My name is Jan Holešovský, and this is actually the first time I'm recording myself for a presentation, so please bear with me, it is an experiment I have never tried before; hopefully you will like it and hopefully you will even learn something new here. So what is Collabora Online? Many of you may know, but if you don't: it is a software solution that allows you to edit office documents online, in your browser or on your mobile phone, and of course to collaborate on these documents, so multiple people can edit the same document. Collabora Online is a solution that is focused only on the collaboration and on the editing of the documents. So in order to have a complete solution, as you might expect, with some document storage and the ability to browse and choose documents, you also need to integrate with something that provides this file sharing functionality and actually allows you to start Collabora Online. Collabora Online is also a project. As a project it is hosted on the web page written here, the development is happening on GitHub, and the translations are in Weblate, in the hosted Weblate run by the Weblate project. We also have forums for users. But of course, at the core of this, inside and underneath each and every Collabora Online instance, is LibreOffice, which is a very important part of our solution. And of course, improving LibreOffice improves Collabora Online, but luckily it's also the other way around: improving Collabora Online improves LibreOffice too. And I'm not talking about generic fixes that we have done recently in LibreOffice for various people and various clients; I will be talking mostly about the fixes that were really triggered by things we needed in Collabora Online and that had to be developed in LibreOffice. So how does this cooperation between Collabora Online and LibreOffice actually work? First of all, Collabora Online uses tile rendering. Tile rendering is a way to present a document in a reasonable way for the purposes of presentation on a web page: the document is not painted as one huge area, but only as a set of 256 by 256 pixel tiles. These tiles are handled by LibreOfficeKit. LibreOfficeKit is a thin API that is part of LibreOffice and basically wraps the LibreOffice core in a process to serve these tiles and to serve events that are happening, for example when some idle event happens inside LibreOffice, and of course it is also necessary to react to user input, so the input events are routed to the LibreOffice core so that various things can happen: the document can change according to what the user has typed, and so on. The events back from LibreOffice to Collabora Online are delivered via callbacks; various callbacks exist for these things. Collabora Online also has a JavaScript part, which was initially only a viewer but was then extended into a full-featured editor. Later this editing became shared editing, and after several years we managed to do full collaborative editing.
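As a rough illustration of the tile model (this is not Collabora Online's actual code): the document is addressed in twips, and an invalidation rectangle maps to a small set of 256 by 256 pixel tiles that need re-rendering. The 3840-twip tile width used in the example corresponds to 15 twips per pixel, which is only an assumed zoom level for the sketch.

    def tiles_to_repaint(invalidation, tile_twips):
        """invalidation = (x, y, width, height) in twips; tile_twips is the
        width/height of one 256x256 tile in twips at the current zoom level.
        Returns the (row, col) indices of every tile touching the rectangle."""
        x, y, w, h = invalidation
        first_col, last_col = x // tile_twips, (x + w) // tile_twips
        first_row, last_row = y // tile_twips, (y + h) // tile_twips
        return [(row, col)
                for row in range(first_row, last_row + 1)
                for col in range(first_col, last_col + 1)]

    # a 2000 x 1000 twip invalidation near the origin, with 3840-twip tiles
    print(tiles_to_repaint((1000, 500, 2000, 1000), 3840))

This is why invalidation callbacks from the core are enough for the client: it only ever asks for the handful of tiles the change actually touched.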
Actually, the collaborative editing is reusing the multiple-views feature that is present in every LibreOffice. So far it wasn't used that much, but now it is the core of what we are doing in Collabora Online for the purpose of several people collaborating on the same document. Over the years we had to make many changes in how these multiple views react to each other, because when multiple views are used in LibreOffice there is always only one view visible at a time, but with collaborative editing many people are actually using these views simultaneously, which means lots of things had to be redone so that they happen asynchronously, so that events can flow from basically any of the views. Another problem that had to be sorted out with the multiple-views feature was language support, because in Collabora Online one user may use, for example, English and another, for example, Czech, and the user in the Czech view still needs to see the dialogs, the content, and the menus in Czech, while the English user needs to see them in English. For that we had to implement a way to switch languages inside LibreOffice very often; when the views are used at the same time, it may be necessary to switch the language for every event coming from a view. Of course, Collabora Online also has the server part, which is in C++, but that part is not that bound to LibreOffice itself; it only calls the LibreOfficeKit API. For the purposes of this presentation, the most important parts are the JavaScript part and, on the other hand, the LibreOffice core itself. When we implemented editing, we started to need dialogs, because a lot of the functionality and a lot of the settings for the documents live in dialogs. The first thing we tried was re-implementing the dialogs in JavaScript ourselves. We started with the search dialog, but even that turned out to be very time-consuming and actually not so easy, because a lot of polish has gone into the LibreOffice dialogs over the years, so this turned out to be a dead end. The next thing we tried, in order to conveniently reuse more code from LibreOffice, was to render the existing dialogs as pixels. The dialogs were not using the tile rendering, the invalidations had to go directly there, but even that was a good achievement, because it enabled lots of possibilities for exposing the dialogs in Collabora Online. All of this led to some improvements, mostly in the area of asynchronicity, which I talked about previously, but we also had to modify a few of the dialogs in various ways and add additional features to them. Then the most recent development was using the dialogs as they are defined in LibreOffice, but turning them into something that is consumable by Collabora Online and can be rendered directly via JavaScript.
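The dialog definitions being reused here are Glade .ui files. As a rough, unofficial illustration of why they are machine-readable enough to serialize for a JavaScript renderer, the snippet below just lists the widget classes and ids found in one of them; the file path is an example and may not exist in your checkout.

    import xml.etree.ElementTree as ET

    def list_widgets(ui_path):
        """Return (class, id) pairs for every widget object in a Glade .ui file."""
        tree = ET.parse(ui_path)
        return [(obj.get("class"), obj.get("id"))
                for obj in tree.iter("object")]

    for cls, ident in list_widgets("cui/uiconfig/ui/charnamepage.ui"):
        print(cls, ident)

Walking the same tree and emitting JSON instead of print statements is, in spirit, what turning a core dialog into something a browser-side renderer can consume amounts to.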
Why go that way? Because sending the dialogs as pixels was not themeable; in the end we found a way to add some SVG-based themes to VCL, but it still wasn't as beautiful as rendering directly in JavaScript. And even if it looked reasonably good, it just didn't fit mobile, because on your mobile phone you do not expect to deal with dialog boxes too much, and even less with dialog boxes as complex as they can be in LibreOffice. So Szymon started implementing dialogs in JavaScript that use the definitions of the core dialogs, and the first thing he implemented was reusing the sidebar: the structure of the sidebar is turned into JSON, that JSON is sent to Collabora Online, and there it can be rendered directly via JavaScript. For the first implementation of the sidebar this was easy enough, because we were using the .uno: commands, which are easy: you can send information like "press this" or "call this command with this parameter", for example for the colors, and the core acts accordingly, so it was possible this way. But of course it was not enough, because the next thing we wanted to expose in Collabora Online as JavaScript dialogs were the real dialogs. So the next thing we targeted was the notebookbar. Here the .uno: commands started not to be enough for the purpose, so he started to use the UITest API, which was good because the synergy started to show: improvements in the JavaScript dialogs actually triggered improvements in the UITest API as well. Later he went ahead and started using welding for the dialogs, which shares the functionality with the GTK backend. That's it for the generic part; let's talk in more detail about the improvements since last January, the improvements done in the recent year. As I said, most of the things that explicitly improved LibreOffice through development we needed for Collabora Online were user interface improvements. The generic approach is, as I've already explained, to reuse as much as possible from LibreOffice, and doing that improves LibreOffice as well, because we had to make various fixes in the UI files. For example, the increment steps in the dialogs were improved so that they are more reasonable: when you click a widget that lets you enter a value and use the up and down buttons to increase or decrease it, the steps were updated to be much more sensible; before, it had not really been tested what reasonable steps are for various things in the dialogs. Then of course the theming we attempted for the pixel-rendered dialogs improved the generic VCL plugin in some ways. We converted lots of dialogs to asynchronous, which is more convenient for the event loop inside LibreOffice. The Fontwork dialog was reworked to use the icon view, which is again more convenient, because previously there was just a big drawing area that was not really scrollable with the choices visible in it, while with the icon view it is scrollable and more convenient to use. And we have made various improvements in the formula bar in Calc; this was mostly thanks to Henry Castro and Szymon.
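Outside Collabora Online you can try the same kind of parameterized .uno: command from a LibreOffice Python macro. The sketch below assumes it runs under LibreOffice's built-in scripting framework (which provides XSCRIPTCONTEXT), and passing the font color through a "Color" argument of .uno:Color is my assumption of the usual recorded-macro pattern rather than something taken from the talk.

    from com.sun.star.beans import PropertyValue

    def make_selection_red(*args):
        """Dispatch .uno:Color with a parameter, much like the sidebar does."""
        desktop = XSCRIPTCONTEXT.getDesktop()
        ctx = XSCRIPTCONTEXT.getComponentContext()
        frame = desktop.getCurrentComponent().getCurrentController().getFrame()
        helper = ctx.getServiceManager().createInstanceWithContext(
            "com.sun.star.frame.DispatchHelper", ctx)
        arg = PropertyValue()
        arg.Name = "Color"
        arg.Value = 0xFF0000          # red, as a 24-bit RGB value
        helper.executeDispatch(frame, ".uno:Color", "", 0, (arg,))

The same "command plus named parameters" shape is what the JSON sidebar messages boil down to, which is why the .uno: layer was enough for the first sidebar implementation.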
Then, I talked about the notebookbar. It is a very complex piece of functionality, so the work there is still ongoing, it is not finished, because the notebookbars are currently implemented as UI files defined with Glade, and this is not really convenient for the designers: it is hard for them to deal with these files, to improve them, to add functionality, or for example to change the order of the items in them. So very recently, together with Andreas Kainz, we are starting to create a more convenient description of the notebookbars than the Glade UI files, and hopefully it will be a very good improvement going forward. But that is what is going to happen; things that have happened already: the notebookbar was welded, a new style preview was implemented for the needs of Collabora Online, and because it is again a shared implementation it has improved the notebookbar implementation in LibreOffice as well. There were some crash fixes around the notebookbar that were uncovered in Collabora Online and then fixed, and of course lots of buttons were moved in various ways to align better; most of that happened thanks to Szymon. UI is not the only thing that is needed for Collabora Online. We make a lot of use of PDF in the online solution, because many documents come as PDF, and when you download them on the web and want to do something with them, it is more convenient to do it directly, for example in the file sharing solution where you have Collabora Online installed. Collabora Online uses the PDFium-based importer in LibreOffice. In LibreOffice there are currently two ways to import PDF: one is based on Poppler and the other on PDFium. The Poppler-based implementation reads the PDF, extracts the objects from it, and tries to apply as much formatting as possible and lay it out using LibreOffice, which is great if you want to modify the PDF, but it doesn't give you the fidelity of the PDF. The PDFium-based importer, on the other hand, keeps the PDF inside the document and, for each page, only renders a bitmap of the actual PDF behind the scenes. So with the PDFium-based importer you get a perfectly looking PDF in Collabora Online. But of course it needed more features, and PDFium as a library provides them, so they were implemented in LibreOffice for the needs of Collabora Online: in particular PDF search, so you can now search the PDFium-based PDF as well, and PDF annotations, which means you can load a PDF, add comments to it, and it is saved as the original PDF with additional comments that are just PDF annotations as known from Acrobat and so on. The handling of huge files was improved, and there were various fixes in the PDF export too. I forgot to mention that PDFium is also used for one other thing, namely when you insert a PDF as an image into LibreOffice: that uses PDFium as well, both for keeping the original PDF and for rendering the picture of its first page, so all these improvements benefit that functionality too. Then the sidebar was improved in many ways as well: it was made more synchronous, with better lifetime and lifecycle handling, very quick and instant updates of various states, and small fixes like title and subtitle or font color related fixes. Then clear direct formatting was added to the sidebar, as well as foreground and
background color commands, and various other small things that all together improve the sidebar in LibreOffice in various ways. Of course, as I slightly touched on but didn't explain much: for a lot of the functionality we are using the .uno: commands that are present in LibreOffice and are used a lot for scripting; they are also used for the menu and toolbar definitions. Many of these had to be extended slightly for the needs of Collabora Online: there are new additional parameters for commands regarding colors; for the charts we had to implement various new commands for editing the chart from the sidebar; several context menus were updated with new commands added, which affects LibreOffice as well because the context menus are reused in Collabora Online; various notifications were improved, and the reporting of the states of .uno: commands was improved in various places; and, last but not least, there were various corner-case fixes and performance fixes. One particularly interesting thing here is the SVG export improvements, because the SVG export is actually used by Collabora Online when running presentations from the online Impress, so the SVG improvements needed for presentations in Collabora Online actually improve the SVG export filter that is inside LibreOffice. Then there were fixes around the document-modified state, because there were cases where you loaded a document and, even before the user had touched it, it was already marked as modified, and the other way around: after some action it was not marked as modified. Various of these states had to be fixed, because we rely on them when doing autosave. And that's it for this presentation. I hope you liked it, and as you can see, the strategy here is: Collabora Online improves LibreOffice, and LibreOffice improves Collabora Online. Lots of these things are in the UI, but there are other areas where this is very useful as well, and of course this presentation didn't mention the interoperability improvements that were reported to us by various customers and that we had to implement in LibreOffice, improving the overall quality of the LibreOffice import and export filters. So that's it from me, thank you so much for watching, and have a great day. Goodbye.
|
Come and hear about the developments in LibreOffice that were triggered by a need in the Collabora Online. The most notable examples are the Notebookbar improvements, but there are other, like async dialogs in the core, new parameters to various .uno: commands that improve scripting capabilities, and many others.
|
10.5446/52541 (DOI)
|
Hello everyone, good evening or good afternoon wherever you are. Welcome to the lightning talk session, and thanks for the introduction. I'm going to do this more live than we did before, so this is some live video from my box here, and we'll be live in the channel, so you can ask questions and we can hopefully interact live. Right, so without further ado: I've got six lightning talks lined up, let's hope we can get through them in time and without any technical difficulties. We've got Dante, we've got Sarper, we've got Svante with two talks, we've got myself, and then maybe demoing a bit more about the Sunday story. So, let's start with Dante and the ongoing Star Math work. He asked me to present the slides and talk over them, but I think he's going to be around in the Matrix chat for any questions. Right, so what has been done so far for LibreOffice 7.1: some HTML, mostly MathML, colour support, better parsing, better export, and some work around syntax highlighting. Specifically, for 7.1, a whole load of HTML colours and RGB colours in MathML are now supported, with quite a bit of work in the parsers and also on the export side — you can see the classes that have been extended. What is possible now: it can import all CSS colours, which goes beyond the standard, which only mandates CSS1; it can import the hex syntax, the RRGGBB form and the simple single-character RGB one; and it can parse colour names using fixed lookup lists. And you can export all of that back again. For import there were some problems with entities — a slightly simplistic parser was just falling flat on its face — so that's now fixed as well, with a different parser and a fake DTD file, so documents in the wild can be read even if they are slightly ill-formed. And lots of automated tests keep that working going forward. Again, there are some code pointers in the slides showing where that happened. Then, importing custom HTML and MathML — that's in the works for 7.2, so it's not yet shipped in the latest version, it's on master only. What it does is pretty much round-tripping, which is what's important. As I said, it's not done yet, and writing tests is still open and left to do. Last but not least, syntax highlighting is also still in the works, planned for 7.2: while you write in the Math code editor — not the visual editor but the code editor — you get syntax highlighting, per token. There is still some work left to do there around highlighting arrows and operators that are more than one character. Once this is merged it can be activated and deactivated in the Math settings. Right, so what else is left: quite a bunch of work items beyond what was mentioned, and Dante would be glad for any help. By and large the goal is to get Star Math into something that is much more usable, much more accessible, much nicer to the user, and more capable and feature-rich — nice things like chemistry support, adding some LaTeX features and markup, and much more. So stay tuned, and thanks a lot, Dante, for giving us this update and for working on that; really quite appreciated. So, next up is Sarper, who recorded his own video, so without further ado let's go over to his talk about physics-based animation effects. Hi, I'm Svante Schubert. I might look like an outlaw, but I'm your local sheriff.
Regina Henschel and myself are your local sheriffs of the ODF TC, the branch of the TDF inside the OASIS organisation. So we are working on ODF features, we need your help, and we'd like to provide you with help, for instance with priority handling of a problem. Before I start and get into the title, let's get one thing straight: what is an ODF feature? An ODF feature is everything that an ODF editor can add, change and delete — most of all, it can be everything that gets saved to an ODF file. That is something like a paragraph, an image, a table, characters, and for instance styles like a bold style, which depends on some of the above. From a developer's perspective these features are added to LibreOffice — the product manager might say, we now have the new bold feature — and of course they are in the source code as well, because the ODF output is generated from the source code that LibreOffice has. The features also exist in the test documents and in common documents. And because those are the implementations, they obviously exist as well in the blueprint, the ODF specification and its XML grammar. And because it's a blueprint, there is more than one implementation — there are many, like Gnumeric, that also represent ODF. And we all have a problem here, because some features are coming very late to the TC — it might be our fault — and we need some help. For example, there is one feature, a style shadow, that was added as a request in 2013, and last Monday we discussed it. What's the problem? Well, it uses a certain extension namespace, because you're not allowed to use the common ODF XML namespaces unless it's part of the standard. So it's a bit of a hen-and-egg problem, and we have double the work: you added this feature in 2013, and now that it becomes a standard you have to remove it and replace the prefix with the correct one. Can this be done better? Yes, I think so. Here is my suggestion: my idea is that we add some priority handling for new ODF features. We review the XML, review the spec wording, and give a methodology, a kind of cooking recipe for how it can be done — it has to be worked out. And we, the sheriffs — there are three of us — put your feature on the next agenda and discuss it during your development. The next part of the idea: there are different kinds of specifications. There is the Committee Specification, the TC's own specification, something like a local specification, called CS. And there are the OASIS standardisation specifications and ISO on top of it — you have to vote to become an OASIS Standard; that vote passed, by the way, last Tuesday — and later on ISO. The idea is to use the CS specification, which only needs a few weeks, to get your feature in very quickly, so you can use the new ODF feature. The downside is that we are missing some of the groundwork here: we need some better GitHub tooling so we can check the dates and editor names, generate all the HTML and PDF artifacts, validate them, and generate the zip for OASIS. This is currently done manually, and it took me about seven weeks of my spare time to get it done. I might do this once, but not all the time. So obviously we need some automation. And so the idea is very simple: we are going to improve ODF feature quality.
Oh no, that's another idea, by Regina: moving the tests from the implementations to the standard, like the ones Regina added to the specification. Yes. So, long story short, we'd like to improve the handling of all the features, we'd like to get in touch with you, and we'd like to improve this. For instance, I saw there are some Google Summer of Code ideas for ODF — maybe we can start there; it would be a pleasure to work with you. So help your local sheriff, and I'm going to talk to you again soon. Bye bye, it was a pleasure. Hi, I'm back. I might still look like an outlaw, but I'm still your sheriff, and today I'm going to talk about ODF feature testing. I talked to you before about ODF features, so I won't waste time on that, and about the ODF TC. So let's talk about feature testing. I already told you what an ODF feature is — bold, for instance — so let's say we are doing testing of a feature like bold. And I want to do it simply: I do the testing in LibreOffice, using an ODF bold test document. I just start with a load test, loading this into LibreOffice, and you might have heard about something like a coverage test. There used to be an online line-coverage report that told you the file name and the coverage, but you can see it's quite outdated, from 2015, and below you see it's a very old version and the link is broken — it's no longer active — but it would be very nice to have this. If you take a closer look, you can open a file and dive into it, and you see a lot of lines, and for a red line — this throw, this condition — it has never been hit. But that's not so important here; the important part is the high level. There is, by the way, coverage tooling coming from the Linux Test Project on GitHub, and already some wiki notes; somebody might want to dive into that. Now, coming back to the higher perspective: we have this bold test document, we load it under a coverage run, and we get some line coverage out for bold — it might be a lot of lines. Then we do another coverage run, now without the bold feature. So we have two of them, and I have two assumptions. The first assumption is that the line coverage of the bold test is higher than the one without — that would be natural. The second assumption is that if we subtract the plain test from the bold test, we get only the lines that are related to the ODF bold feature — there might be other lines related to it, of course, but it would be very nice to have solely the lines of a feature. So what can we do with that? First of all, we can create a kind of code map for newcomers to the LibreOffice code. Second, we can have smoke regression tests, and that looks like this: we have a database with lines relating to ODF features and a database mapping ODF features to test documents, and whenever Gerrit or GitHub sees that a line related to a feature was edited by a developer, we trigger the smoke test of certain documents. This can be extended: instead of only loading we can also save, and by doing the diff again we get only the save-related lines of that feature. And beyond loading and saving we could do GUI tests — I guess that would be very cool; regarding GUI tests there might even be a step further. So let's look a little closer at the tests. If we have a test document, we can ask ourselves what features are within, and this can already be solved with the ODF Toolkit. And there are links for this hidden in the slides.
If you take a look at the JAR — there's a JAR you can download — you can run it on the command line against a specific test document, or any ODT of your choice, and it returns the features within; that's pretty cool and already working. So what can we do with this? Think about it: we have a test document, we map this ODT to its ODF features, and we want to do a GUI test, and this GUI test needs some GUI configuration. The method that is actually missing here is a mapping from the ODF features to that GUI configuration. If we had that, we could throw in a test document, it would be recreated via the GUI in LibreOffice and saved back — that would be very, very cool. So the ODF feature is a kind of common denominator between various things: it sits between the specification, the test documents, the features in LibreOffice, and of course now the source code as well. And I'm going to focus now, with the sheriffs, on putting the feature view onto the specification and the schema, because currently the validator says "this is valid" or "not valid", but it doesn't tell you where this bold feature is missing, or where the feature of changing the page orientation on a paragraph or a table can live and what XML is involved. So we need a different view, and we sheriffs are going to add it — and we need your help. Again, it was a pleasure to talk to you; talk to you soon, I wish you all the best. Hello everyone. I'm Sarper Akdemir, I study at Istanbul Technical University, and I'm basically a wannabe hacker. I'm going to be talking about physics-based animation effects in Impress, which was the Google Summer of Code project I worked on last summer. The project introduced animation effects that use a physics engine — Box2D was the physics engine used to make this possible. The overall aim was to create some nice new eye candy and to introduce this new type of animation effect that interacts with its surroundings by bouncing and sliding off other shapes in the current slide. Now that LibreOffice 7.1 is out, you can play around with the four new animation effects in Impress; three of the animation effects are emphasis type and one is exit type. Here on the left you can see the first one, a simple fall: quite boring in isolation, it falls down and stops the shape wherever the animation effect ends. The second one is Shoot right and return — this one also has a variant that shoots off to the opposite side; it is definitely my favourite of the physics-based animation effects. And the last one is Fall and fade out, which, as the name suggests, falls and fades out; this one is also pretty boring just by itself. What makes these animation effects interesting is how they interact with other animations and shapes in the slide. On the upcoming slides I'll keep demonstrating animation effects in action to showcase and comment on their current capabilities. Every physics-based animation effect has a bounciness, a density and a starting-velocity parameter. For instance, the Shoot right and return animation effect has a bounciness of 0.6, the default density, which is 1, and a starting velocity with a magnitude of 10,000. These parameters are set in content.xml; we will come back to that later. The current implementation supports two or more physics-based animation effects going in parallel, maybe with different durations, maybe with delays to make them start later, and so on.
The shapes and ongoing animations can collide and bounce off each other. Although there are some edge cases that don't work as expected, like shapes that intersect themselves, most of the shapes available in Impress work without a problem. Physics-based animation effects are able to handle and simulate collisions with ordinary animation effects that run in parallel with the physics animation. Here the three rectangles spin, and in the Box2D world they have an angular velocity corresponding to their spin. Likewise, when the ball moves along a motion path, a linear velocity is assigned to its physics body, so that the moving shape is simulated as if it had momentum and interacts with the square in a convincing way. Things like shapes appearing on and disappearing from the slide are handled too. It is possible to group shapes up and make them a single physics body: here the red shapes are grouped together as one, and a Shoot right and return is applied to them. The sun has momentum and drags the ball with it. I do believe that, all around, these capabilities make physics-based animation effects a powerful and engaging storytelling device. Well, you might think that you need a valid argument to convince someone into changing their opinion on something — not so much when your idea can punch their idea directly into the trash can. Now that that's out of the way, I'd like to talk about the actual implementation of the project and how the code functions. When a physics-based animation effect starts, a Box2D world is created with it, and the current state of the slide gets constructed in that Box2D world. According to its documentation, Box2D requires moving objects to be between 0.1 and 10 metres to simulate them most accurately. To achieve this, a scale factor is used to map between LibreOffice and Box2D; the scale factor essentially maps the width or height of the slide to 100 metres in Box2D. While Box2D can handle convex polygons without a problem, it cannot handle concave polygons. To work around this, all shapes in the current slide are first triangulated, and the resulting set of triangles is attached to a single physics body to create the desired physics body. To illustrate: the smiley on the left is represented by the collection of triangles on the right. Likewise, a set of quadrilaterals is used to represent shapes that are not filled: the freeform line on the left is represented by the quadrilateral collection on the right — each of those coloured segments is a quadrilateral. Also, when there are parallel animation effects that aren't physics-based ones, they report back their respective position, rotation and visibility, and the information gathered from them is used so their updates can be reflected in the Box2D world. For instance, when a motion path animation changes the position of a shape between two frames, a linear velocity is calculated and applied to the shape's physics body, so that a convincing interaction happens with a physics-simulated shape if they collide. Lastly, I will talk about how to define an animation effect yourself. When you create an Impress presentation, the animations live in content.xml, which you can easily access by unzipping the presentation's ODP file. They are defined using SMIL hierarchies — SMIL stands for Synchronized Multimedia Integration Language, a markup language to describe multimedia presentations.
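Before we look at the content.xml side, here is a rough, self-contained C++ sketch of the implementation idea just described, written directly against the Box2D 2.x API. It is not the LibreOffice code: the triangulation is assumed to have happened already, the slide width of 28000 (in 1/100 mm) and the parameter values are made-up examples, and error handling is left out.

#include <box2d/box2d.h> // Box2D 2.4.x public header
#include <array>
#include <vector>

// One triangle of an already-triangulated shape, in slide coordinates.
using Triangle = std::array<b2Vec2, 3>;

// Build one dynamic Box2D body out of many triangles, mirroring the
// "many convex fixtures, one body" idea described in the talk.
b2Body* makeBodyFromTriangles(b2World& rWorld, const std::vector<Triangle>& rTriangles,
                              float fScale /* slide units -> metres */)
{
    b2BodyDef aBodyDef;
    aBodyDef.type = b2_dynamicBody;
    b2Body* pBody = rWorld.CreateBody(&aBodyDef);

    for (const Triangle& rTriangle : rTriangles)
    {
        b2Vec2 aPoints[3];
        for (int i = 0; i < 3; ++i)
            aPoints[i] = b2Vec2(rTriangle[i].x * fScale, rTriangle[i].y * fScale);

        b2PolygonShape aShape;
        aShape.Set(aPoints, 3); // each triangle is one convex fixture

        b2FixtureDef aFixture;
        aFixture.shape = &aShape;
        aFixture.density = 1.0f;     // the default density mentioned in the talk
        aFixture.restitution = 0.6f; // the "bounciness"
        pBody->CreateFixture(&aFixture);
    }
    return pBody;
}

int main()
{
    b2World aWorld(b2Vec2(0.0f, -9.8f)); // default downward gravity

    // Assumed slide width of 28000 (1/100 mm) mapped onto 100 metres,
    // keeping shapes inside Box2D's recommended 0.1-10 m range.
    const float fScale = 100.0f / 28000.0f;

    std::vector<Triangle> aTriangles;
    aTriangles.push_back({b2Vec2(0, 0), b2Vec2(1000, 0), b2Vec2(0, 1000)});
    aTriangles.push_back({b2Vec2(1000, 0), b2Vec2(1000, 1000), b2Vec2(0, 1000)});
    b2Body* pBody = makeBodyFromTriangles(aWorld, aTriangles, fScale);

    // A parallel motion-path animation would be approximated by applying a
    // linear velocity derived from its position change between two frames.
    pBody->SetLinearVelocity(b2Vec2(2.0f, 0.0f));

    for (int i = 0; i < 60; ++i)
        aWorld.Step(1.0f / 60.0f, /*velocityIterations*/ 8, /*positionIterations*/ 3);
    return 0;
}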
So, if we take a look at an example from content.xml, this is how the animation root of a single slide looks. It looks pretty crowded right now, but the section we need to pay attention to is much smaller and right in the middle. If we focus on that part, this is where we can alter the animation effects. Inside the anim:par element, which stands for a parallel animation node, there is an animatePhysics tag; physics-based animation effects start with this tag. After the animatePhysics tag you need to specify the duration, the target element and the fill, and when you are done with those, you can customise the parameters special to the physics-based animation effects by setting the density, velocity-x and velocity-y — which, combined, give the starting velocity vector of the shape — and, to finish off, the bounciness, to set how much energy is lost on collisions. By playing with these values you can customise how the physics animation will take place. But you don't need to stop there: you can combine different animation tags to create combined animation effects. For instance, if you take a look at the structure of Shoot right and return, it starts with an animatePhysics tag. Since the duration is set to 4, it completes at 4 seconds. Notice that the other three animate tags have a begin parameter set to 4 seconds, which corresponds to a 4-second delay in the normal animation workflow in Impress. Therefore, just as the animatePhysics part completes, the three animate tags after it activate, and they move the shape back to its original starting position and rotation. So by taking this idea of matching different animate tags together, you can get creative and create your own animation effects — most of the animation effects in Impress are created in this way. If you'd like to see more examples of these, you can check out the effects.xml file in master, where all the animation effect presets live. And please do contact me if you have questions about creating custom animation effects — and if you do create them, I'd appreciate it if you would share them with me. Thanks a lot for listening to my lightning talk; you can contact me at these links. That's about it, see you later. Hello there. Are there any questions? Hey there. I just read a question asking about vector fields, as for example gravity: you can't configure those, but it actually already has a default gravity applied. Are there any other questions? So the most upvoted one is just a comment from Simon, a call to action: anybody who is in contact with, or might be able to support, ODF specification work, please do get in touch with either Svante, me or Simon. There is an ongoing need to sponsor standardisation work. All right, I guess there are no more questions for me. Just a little disclaimer about the presentation: the frame rate is actually not that low when you use the presentations yourself, it's just the streaming. Small disclaimer. Yeah, it's a really cool thing, I love it — that's great stuff to spice things up; that's what people use those animations for, to spruce up their presentations, and it's just great stuff. Hopefully there are going to be a few more of these. So, we had announced a demo about the WebAssembly stuff.
I just mentioned in the devroom chat that we had some issues with browsers crashing, so we will just do that separately — I didn't want to risk the browser going down in the middle of live streaming here — so we're just going to record that and upload it to YouTube or PeerTube. So I'm sorry for that. Stressful days of preparation. Thanks to all the people making this devroom schedule and making this whole thing a success — I tremendously enjoyed it — and thanks to all the lightning talk speakers. I'm really happy to have a lot of people who are really passionate about this, some of whom worked right up to the last minute. Much appreciated. So, yes, we can keep chatting for some half an hour if you would like. I wonder if we can open the backstage room to the general public; we could have a nice get-together here. Ah, there's one more question coming up that I probably missed: "There's no way to get trained in using LibreOffice where I live." That's the question. I suppose, pretty much regardless of where you live, there is a way to get trained these days — probably remotely rather than on site — but we have an ecosystem of certified LibreOffice professionals, training professionals. You can find that if you go to the website and check for professional help: you will find a list, and you might actually find somebody close to you. If the question was more about developing LibreOffice, then your best bet is probably just joining IRC and the developer community. I have been doing developer training, and other people have been doing that as well, but I think you get the best value by just coming to where the developers are. Okay, there was an amendment to that question regarding Belgium, and I can just check if we have anybody. We have at least one certified trainer and migrator in the Netherlands, which might be close enough, and I'm sure we have people in France, so that should cover the other language in Belgium. So, the next question: will LibreOffice in the future be able to create interactive PDF forms, so we can get rid of the nasty XFA-based PDFs, which are still used in the millions? Yes — I hate those. I'm not telling you who is using them that should not be, because they don't really adhere to proper standards. I thought XFA was even deprecated by Adobe and being phased out, so I wonder — maybe you happen to know — what is the successor, what should people be using instead? "No, I have no idea what to use instead. Can you repeat the question?" It's about XFA-based PDF forms. "Okay, yes, I remember these, but I don't know what to use instead. Sorry, no." Maybe don't use XFA. Maybe don't use PDF if you need anything interactive; maybe that's the take-home message, just use a web form. I find it super irritating: I was recently filling in forms and I had to use proprietary software for that, for no good reason — it was just putting in my name and address, so what's the point? I suspect a lot of that is just cargo-culting, and maybe lobbying the people who are still using those forms might be the better approach than trying to add something to LibreOffice ten years after Adobe deprecated it.
Yeah, so the comment, or the answer, to that — there's this built-in delay, so whatever I'm saying here is broadcast with a bit of a delay — is that those PDFs are created by non-programming people, so LibreOffice would need to generate those web forms. Yeah, I wonder if LibreOffice would really be the right tool to generate forms in this day and age, since the concept of a form in LibreOffice, or a form in PDF viewers, is that of paper-based workflows: you have a sheet of paper, with lines and boxes to tick, and that paper-based metaphor is what sits behind both PDF and LibreOffice. I wonder if that is still appropriate. If you touch something, maybe you should just put it straight into some online web form; I'm pretty sure LibreOffice is not the tool of choice there. If it's about migration — let's say migrating something existing in Acrobat to something that's on the web — I just wonder whether LibreOffice should be in that business. Clearly there are billions of documents that LibreOffice is supposed to be able to read, convert and get into other formats, and that Swiss-army-knife approach — where sometimes more than thirty-year-old formats, long abandoned elsewhere, can always still be read — is certainly the case. But there's always the danger of feature creep; you can't do everything, at least not well, and I wonder if XFA is the sort of thing that might be out of scope for LibreOffice. XForms is actually the form, or let's say smart-form, concept of LibreOffice, and it's there. From what I can tell it's also possible to export that to an HTML page, but I would be, let's say, rather sceptical about that, because for those PDFs you'd have to get the import right, represent and parse that, and then get some export going — it sounds like probably not the hill I'd like to die on. And at the end of the day, LibreOffice is an open-source project: there are barriers to entry, but only in the sense that it's rather complex. Beyond that, there's nothing we couldn't integrate, so if somebody sends us patches, or says "I have some money in the bank, I'd like to sponsor that", I don't think that's something we wouldn't do. But in this case, just from a product and value-for-money point of view, I wonder whether that would be well invested. For PDF processing there is open-source software out there, besides and beyond LibreOffice, that might be a better target for taking PDF input and converting it to something else. And I would comment that obviously people tend to keep using what they are used to — they have an Excel spreadsheet and do database stuff with it, or a spreadsheet application doing database stuff, just because that's what they know. Whether you can convince people, when they need to spend money and do something, to put that to good use, or whether they want to continue with their habits — that's a very local thing, and very much up to you.
And again, there's nothing that would per se be off limits for LibreOffice: if it's well done and fits the architecture, a really good addition is, I think, welcome. Okay, so I would say, given that there's this lag here, towards the end it's a bit one-sided. The session here was great to be around; it was a lovely day. Thanks especially to the FOSDEM organisers for pulling this off, and kudos also to the students — it's quite a massive thing to get this going with that many parallel tracks and that many people watching, and it worked flawlessly here. So thanks for that. Everybody else, have a good time. We will be around in the main devroom chat to continue this discussion, but I'd say for here, let's close the session some fifteen minutes early — unless anyone has some last words. "Yes, I just want to thank you for organising, and thank you for your patience with my videos — I can't get out of my habits." The upside is that it worked for you. "Well, it didn't quite work for me." Yeah, I'm sorry. "No, no harm done." Sorry — sticking a video at the end was me noticing that VLC, I think, was playing them in alphabetical order rather than the intended order; it wasn't a problem, and the presentation was actually really good. "I think that's mine." Same here, I was just surprised. Good stuff. Okay, so with that, goodbye everyone. See you, hopefully in person, next year. Hopefully. Thank you. Thank you. Thank you.
|
LibreOffice: Interesting Talks from Community Members
|
10.5446/52545 (DOI)
|
Well, are you sitting comfortably? If so, I shall begin. Today I want to talk about Nextcloud Hub and making it cooler, if such a thing is possible, by integrating it with Collabora Online out of the box. So what does that mean? Well, I'll do the obvious stuff first. Collabora Online is built on the awesome LibreOffice technology. It's just cool, it's rich, it allows whizzy editing, it's interoperable, you can collaborate with people — it's like your very own Google Docs on your very own server, not sharing your data with anyone, so privacy and all of that good stuff. And of course that's extremely complementary to Nextcloud, which is also about protecting your data and making sure it's secure and shared sensibly with people. But of course Nextcloud brings a whole lot of other functionality there — calendaring and email and various other cool things; really an amazing PHP application. And some of these things are installed out of the box as you go along, so it's very, very easy to get them. But there was one problem: initially we had no Collabora Online goodness there, which was, well, obviously not good. Why is that? The rest of my talk explains it, but let me just give you the basic outline. There's really a lifecycle mismatch here. If you consider how Nextcloud works — it's probably quite familiar if you've ever done some PHP programming — essentially the browser JavaScript, or just web pages, makes a request, and every time they make a request they make a new connection-ish; we'll look at that later. That comes into your nginx, say, web server and your PHP backend, and as that request comes in we spin up a PHP process, or have one ready and waiting to serve you. That PHP process can do a whole load of stuff: it can talk to your database, it can talk to your object store, it can get things. And some seconds later, that PHP process has to return its data and be killed, or wiped and recycled, and there's a limited number of PHP workers there. But it really is that flow: as the request comes in, PHP does some clever logic based on what's in the database and the object store, it returns the answers, and it goes away. That's how all these things work, and there are lots of advantages to it: the work you're doing there can be scaled, and the state is all in either the database or the object store, centralised somewhere nicely. Collabora Online works in a different way. We scale in a slightly different model: because our workloads are very embarrassingly parallel, we can partition them onto different machines, so you essentially have an office suite in a box, in a kind of mini container. That's kind of cool, but to do that we have a persistent WebSocket that talks to that instance, and so that instance really needs to stay around for the duration of the browser session. A WebSocket allows you to do that: it's a very long-lived HTTP connection that's upgraded to carry this very low-latency protocol. Of course, when we actually store our state, we push that across to Nextcloud and auto-save it, and that goes ultimately into the database and object store at the back end. So this instance is kind of dispensable, but you really need it there while you're editing.
And so, of course, because of how PHP works here, there's obviously no WebSocket support, by design. And in part it's worse than that: you can't actually implement it, because of the way the whole stack goes together — it's kind of a half-duplex, one-shot thing. You get a whole load of data that comes from the browser into the web server, it's passed on to a PHP process, the PHP does its work, gives it back to the web server, and that comes back to the browser. So it needs to regenerate all the state to answer a request from scratch on each request, and that's in itself quite interesting: there are a number of database queries that need to be done for every request — some handful of them — to authenticate you, make sure you should be allowed in, and start building the context necessary to get your data. Now Collabora Online, of course, has this requirement of a persistent connection to the browser, and we can't really do all of this work per request. For example, we can't load the document — which takes some time, plus the document fetch — apply your keystroke, and then save the document again for each keystroke; that would take far too long and be far too resource-intensive. So anyway, there's the mismatch. We've done lots of work over the last year to try and get our goodness to more people. One of the things was to have a quick tryout server: we have these built-in demo servers, and that's fine — Collabora provides some, and our partners, for instance in Sweden, generously provide one of these. But you have to understand that before you use a demo server: in order to render your documents, it's going to send your documents to that server. Of course, we're not going to share your document with anyone — we might debug a problem in it and fix it for you, but we're not going to publish it — but still, we really want to discourage this idea of people sending their data away to third parties. So it's really good for a quick test and a quick tryout. But it's worse than that, because it's only going to work if the demo server can actually connect to you remotely: it has to fetch your data from your web server via our WOPI-like API, so your server has to be public and routable on the Internet. Lots of people can't do that — if they have a local setup, a localhost, or they're using plain HTTP, it's not secure — then we have problems. And of course we made that easier with this quick tryout, Docker images and so on. But can we do better, and make things easier for our users? That was the mission. After experimenting, it seemed that we could run something in the background, with this magic: run your program in the background and disown it. That would allow us to leave something running that survives long after the web server has killed the PHP worker, and that's brilliant. The only problem then is that you actually need something to run, so we needed to build an AppImage — more about that in a bit — and the ability to download it, which is not as small as it could be. And you recall that these PHP things and web servers love to time out after a certain while and destroy stuff.
So we still need to do work to shrink the size of that — it's like 150 meg or something — and of course we need to be careful to manage PIDs, restarts and so on. But either way, there's a thing there, richdocumentscode, in the Collabora Online repository, that has the bits that help make this work. Now, in theory, running this thing in the background solves the lifecycle problem. That's great: we could run it, open a port, and the browser could connect directly to this thing in the background, so we've got that one-click install that runs, and all is well. Well, yes — the problem with that is that your clients typically can only really connect to port 443, HTTPS, because everyone is locking down all the other ports. And in order to do that, they really need certificates that validate and are accepted by the browser. So our service has to be trusted, we need to know where those certificates are, we need to know its host name is publicly routable. Then, of course, servers often block these ports, and port 443 is typically already in use by the Nextcloud that installed us, so that's not a free resource we can trample on. And then there's also complexity around the configuration: SSL unwrapping and offloading — often, although it looks like an HTTP server, it's seen by the world as an HTTPS server — and so on and so on. There's just a ton of complexity, and we're trying to make this easy: really, a one-click install that just works. So we came up with a solution which is pretty simple: drop a PHP file into the live, working Nextcloud that should then just work like any other PHP plugin, and that proxy would connect locally to our COOL image running in the background. That's kind of cool — it avoids all of the web topology complexity, we just pass the data to and fro, everything's wonderful. Well, not quite. The PHP proxy prototype took three milliseconds a request — a pretty good proxy. Adding all the infrastructure we needed to make it work in Nextcloud — the plug-in stuff, their pretty error messages and so on, all sorts of good, useful stuff — took it to 110 milliseconds per request. Much too slow. Of course it's possible to configure your server so that that executes much more quickly, but no one does that, so it's really not going to help out of the box. Thankfully, Nextcloud kindly added a hole for our mini, fast, 350-line proxy, which is just great. This then just passes the data to and fro. Well, we want the raw data, basically: we want to just get the data off the socket the web server gave us and pass it straight on, for minimum impedance mismatch. Unfortunately, that's just not possible: we don't get any headers, we get unwanted escaping, there's all sorts of stuff — we have to effectively rebuild the headers and rebuild the content in memory back into a raw data stream in PHP, because of what Apache and PHP do between them. So we have to do manual header parsing to try to look as if we're not there; actually, we have to do a whole load of work to look as if we're not doing any work, which is kind of sucky. A minor advantage, of course, is that we can inject our own header reasonably easily. There's really no async I/O support in PHP that's at all helpful here — I mean, why would you bother with that? Basically, you're going to do blocking reads and blocking writes.
So, you know, why bother even including it? We'd love to use Unix domain sockets to speed up the local connection there and avoid some checks; we can do that, but it requires a PHP module that most people don't have, so again, simplify that. Then, of course, you're going to build an AppImage, and the AppImage has a number of constraints. It's going to run unprivileged, as your www run user or that kind of thing, and so a number of our checks — to ensure it is run by the right user, that it can create chroots and encapsulate and hide all sorts of things — just can't work, really. It's a great shame, but there it is; so, in order to make it easier for home users to install this, we disable those. We needed to work out what system libraries to bundle, how to get the data in there, how to get fonts and dictionaries and so on, and then build it — and of course we only build it for 64-bit platforms; my hope is that 32-bit platforms are dying rapidly. And then there was a whole load of work that several colleagues did to get the lifecycle right: if you're leaving something running in the background, it's kind of nice if, when you upgrade, you get rid of the old one and run the new one, and this kind of thing. So there's a whole load of things there around version information, to be able to upgrade that and make it work nicely. Then, inside the actual COOL code, we needed to do a whole load of work too. All of our incoming data was either on an SSL or a non-SSL socket, going straight into a WebSocket handler, which processed the WebSocket protocol, and that was the base for anything that talked to the outside world. We had to abstract that away, because we were going to make our own protocol for this proxy thing. So we created a protocol handler interface; we then have a message handler interface, the WebSocket handler and the proxy protocol handler, and then a whole chunk of lifecycle rework to make this add up. And we came up with a horrible proxy protocol like this: we encode a whole load of stuff into the URL — the document pieces; the WebSocket session ID, so we know who it is (of course we track that too, but it's helpful for debugging to see it coming through, and we verify it); the command — are we opening it, are we writing stuff (we previously had some clever ideas like "wait" that didn't work), are we closing the session, are we finishing with it; and then a serial number, to make sure these occur in order, because we're doing a whole load of asynchronous stuff — WebSockets have a guaranteed order, and we need to reproduce that. Then the JavaScript needs updating: we had to write a new proxy socket alternative in JavaScript, which parses the new protocol, queues events, emits them, and hopefully queues them up and dispatches them in chunks. That then has to throttle input and output — it's important not to send too many too quickly — so we have some delay to build up requests and then send them at certain times. There's a degree of complexity there that's quite fun, and we went through several generations of building that out, and innumerable URI-related problems: as I say, the JavaScript kind of assumes a lot of things about URIs.
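To make the shape of that proxy protocol a little more concrete, here is a small, self-contained C++ sketch that builds request paths of the general kind described above. The path layout, prefix and field names here are illustrative assumptions, not the actual Collabora Online wire format.

#include <cstdint>
#include <iostream>
#include <string>

// Commands of the proxy protocol described in the talk: open a session,
// write (send queued messages / poll for replies), close the session.
enum class ProxyCommand { Open, Write, Close };

static std::string commandName(ProxyCommand eCmd)
{
    switch (eCmd)
    {
        case ProxyCommand::Open:  return "open";
        case ProxyCommand::Write: return "write";
        case ProxyCommand::Close: return "close";
    }
    return "unknown";
}

// Hypothetical path layout: document id, session id, command and a serial
// number, so the server can re-establish the ordering that a real
// WebSocket would have guaranteed.
static std::string buildProxyPath(const std::string& rDocumentId,
                                  const std::string& rSessionId,
                                  ProxyCommand eCmd, std::uint64_t nSerial)
{
    return "/cool/" + rDocumentId + "/ws/" + rSessionId + "/" + commandName(eCmd) + "/"
           + std::to_string(nSerial);
}

int main()
{
    std::uint64_t nSerial = 0;
    const std::string aDoc = "document-42";   // assumed identifier
    const std::string aSession = "session-abc"; // assumed identifier

    std::cout << buildProxyPath(aDoc, aSession, ProxyCommand::Open, nSerial++) << "\n";
    std::cout << buildProxyPath(aDoc, aSession, ProxyCommand::Write, nSerial++) << "\n";
    std::cout << buildProxyPath(aDoc, aSession, ProxyCommand::Close, nSerial++) << "\n";
    return 0;
}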
Coming back to those URI assumptions: we had to pull all of those into JavaScript helpers, because the proxy prefix tweaks your URI quite significantly — you look like a Nextcloud plug-in now instead of Collabora Online. So there was a lot of work there. And when it came to the CSS, there's a whole load of image links in the CSS, and there was nothing we could do but rewrite those: there's a JavaScript walker that walks your CSS, looks for bad URLs and rewrites them very early in the startup process. Which is not elegant, but at least you see your toolbar, so there are some pluses there. The JavaScript then uses XMLHttpRequest, pretty much exactly like this. And you think, how can it possibly work? A TLS handshake takes a long time — it has two round trips before you even get any data sent — so how can this possibly work? Well, good question. Performance is king: people want something silky smooth to edit with. And it turns out that persistent connections have rescued us here. Although in theory, under the hood, you need to create a new connection every time, the browsers are clever — well, at least modern browsers are — and they use this keep-alive thing. It's a magic header, and it doesn't tie up a PHP worker, but it ties up the web server and keeps that socket alive, keeps that authenticated TLS connection alive, so you can very quickly load new pages, browse on the same site, grab images and scripts — you can do lots of requests down many fewer sockets. And here perhaps you can see what's going on: there are four of those — we actually limit it to four — and they happen in parallel very rapidly to start with, while everything is setting up and we're loading and getting the document rendered. Then things calm down, and actually we can use just one, though occasionally we need another one. It works really quite nicely like that. And if we're not typing — if we're not getting interesting events when we poll — we simply assume that there aren't going to be any more events when we next poll, so let's poll less frequently, let's exponentially back that off, to 500 milliseconds or so, a couple of polls every second. You can see here a sort of slow closure of these kept-alive connections as they get rotated — each of these is a separate connection over time, as time progresses — and that works really well. And the performance is surprisingly good. If we look at what a normal ping would be across the EU — a very bad ping is something like 50 milliseconds, so 25 out, 25 back — then when you go out you send a render request, we do the render, it comes back, and we update our view locally in JavaScript. This is the normal WebSocket approach. For a notify, of course, we don't need to connect out: we have an event, we send it, and we render, something like that, so latency is very low. When we're using the proxy, we have a different problem: we send this event out, something happens, the render occurs after we've got it processed — but the problem is, how are we going to get it sent back?
Well, it turns out that if you poke every 25 milliseconds, or whatever you think is about a quarter of the latency, then you can actually be ready, not that long after the render has finished, to send the thing back. So we really only insert this little gap here, and it takes our latency from maybe 70 to maybe 85 milliseconds — really not terrible. And I know there are probably some clever people in my audience thinking: ah, long polling, long polling is the solution, and we can save that gap — and indeed we can. This was implemented, and we got down to maybe just a 5-millisecond increase in latency, which was awesome. Of course, you need to continually recycle your long-polling sockets as you get data. The problem is that it worked really well for two users, but the third user would just wedge the whole system, because your web server really only has, by default, say ten PHP worker threads at any one time; if you're tying them up, it just doesn't work. It's mandatory to come in and out fast in order to be able to scale. So finally, after all that work, there we are: we have single-click install apps from the app store — wonderful, easy to use. And please upgrade: it's brilliant that it works, it's really easy to use, you can get a taste for the goodness, and then you can install this properly yourself and get better performance. This was really a huge job; it is not easy to make things easy for our users, it's just months of hard work. I recall talking to people who go, "oh, there's some branding problem", or "why aren't you in Nextcloud Hub? It's really easy, just put it in, right?" Well, not quite — months of hard work and an investment from Collabora to make this better for our users and easier for everyone to use. So thanks to all those engineers at Collabora and Nextcloud who've helped make this a reality; that's much appreciated. And that's pretty much it, I think. Thank you for listening. I don't know when I get cut off, so I'm going to leave a few minutes for questions at the end. I look forward to hearing your questions shortly and maybe even showing you a live demo.
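The real client side of the polling described in this talk is JavaScript, but as a rough sketch of the cadence — fast polls while traffic is flowing, exponential back-off towards roughly half a second when idle — here is a tiny C++ model; the constants are assumptions taken from the talk, not the shipped values.

#include <algorithm>
#include <cstdio>

// Toy model of the polling cadence: poll fast while traffic is flowing,
// back off exponentially towards ~500 ms when idle.
struct PollScheduler
{
    int nMinDelayMs = 25;  // assumed lower bound, roughly a quarter of the latency
    int nMaxDelayMs = 500; // assumed idle ceiling
    int nDelayMs = 25;

    int nextDelay(bool bHadEvents)
    {
        if (bHadEvents)
            nDelayMs = nMinDelayMs; // traffic: go back to fast polling
        else
            nDelayMs = std::min(nDelayMs * 2, nMaxDelayMs); // idle: back off
        return nDelayMs;
    }
};

int main()
{
    PollScheduler aScheduler;
    const bool aTraffic[] = { true, true, false, false, false, false, true, false };
    for (bool bHadEvents : aTraffic)
        std::printf("had events: %d -> next poll in %d ms\n", int(bHadEvents),
                    aScheduler.nextDelay(bHadEvents));
    return 0;
}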
|
Making Collabora Online and it's LibreOffice technology as simple as possible to consume for users with limited time or technical skill is vital. Hear how we bundled COOL as an AppImage, plugged it into PHP, and implemented a websocket proxy-protocol to make that happen. Collabora Online uses websockets to bring the LibreOffice core's rendering to users' browsers and mobiles. This gives us a smooth low latency editing experience. This however requires a persistent server process, something inimical to the PHP processes we integrate with. Hear how we overcame this limitation, to get an AppImage that can bootstrap, and a protocol that re-uses the HTTP keep-alive to rather successfully emulate a polling websocket based on PHP. Hear about some of the pit-falls we fell into, and the 'obvious' ideas to make things better that didn't pan out.
|
10.5446/52546 (DOI)
|
Hello everyone. This is a talk about handling digital signatures with PDFium and LibreOffice. I'm Miklos Vajna from Hungary; I have been around LibreOffice for a long time, first as an employee of SUSE, and now I work at Collabora. So how can we handle digital signatures with PDFium and LibreOffice? Well, let's step back a bit and think about the different scenarios we have when it comes to digital signing in LibreOffice. It's actually a huge matrix: digital document signing can mean a lot of different things. It may mean creating a signature — signing — or reading a signature — signature verification. It can mean a signature which is part of the document, a visible signature, or an invisible signature which is just added on top of an existing document as a kind of metadata. It may mean that you're signing something which is read-only, a PDF, or that you want to sign editable formats such as OpenDocument Format or Office Open XML. It may also mean different actual crypto code, which can depend on the platform: you can use NSS from Mozilla, which is what we use on Linux and macOS, or you can use the Cryptography Next Generation (CNG) API from Microsoft, which is what we use on Windows. You can also have different certificate types: X.509 certificates, which are typically used for smart cards and other kinds of serious certificates, like something issued by governments for their citizens, or you might be more into the peer-to-peer web of trusted certificates, which is how things work with GPG. You can have different encryption algorithms — the newer, fancier ECDSA-based encryption or the older RSA-based encryption — and different hashing algorithms: the older SHA-1 or the more up-to-date, fancier, more secure SHA-256, and of course there are other legacy and other modern ones. The point is that there is not a single combination, and sometimes a bug report just says "digital signing is not working", meaning one particular combination — but as you can see, we have several combinations here. For the purpose of this talk I will be focusing on signature verification, because that's much more interesting: a signature is created once and then verified several times, it also happens on opening documents, it happens implicitly, so the attack surface is much more interesting and it makes more sense to have battle-tested code there. I will talk primarily about PDF but briefly mention the other, editable formats as well. The actual crypto code is not that interesting for us: what I present here for PDF works with both NSS and CNG. And I will focus on X.509 certificates, mostly because PDF supports only those. For the actual encryption and hashing algorithms, we will see that the code responsible for them did not really change by moving to PDFium, so that's not that interesting either. Initially, back in the OpenOffice.org times, we only had signatures for ODF, and later this got extended to Office Open XML and PDF. Both Office Open XML and ODF are physically zipped XML, so we use a W3C spec for how to sign XML files, and basically that's it. PDF rather focuses on doing a binary signature: take chunks of the binary PDF file, sign them digitally, and then put that binary signature into a preallocated placeholder.
You can see here that the extensions on top of the W3C specification — the Microsoft Office extensions — are kind of horrible: they leak all sorts of your software and hardware data. So what we do on the LibreOffice side is just hardcode something as a placeholder, because something has to be there, but we don't really want to leak your details. As mentioned, you can have different crypto libraries as backends of LibreOffice. We have two backends, the NSS one and the Windows one. Bringing NSS up to date compared to what we inherited from OpenOffice.org was not that interesting. For Windows it was a bit more interesting, because originally there was only CryptoAPI, which was supported even on Windows XP — back then that was interesting — but the encryption and hashing part of CryptoAPI is kind of deprecated and does not work with ECDSA. So we had to go to the xmlsec library that we also use in LibreOffice and replace its backend, to not use CryptoAPI for encryption and hashing but rather the CNG one, and then it works nicely with ECDSA keys as well. As mentioned, you can have GPG certificates or X.509 ones. The GPG ones are basically for ODF, and that was an effort done by CIB and allotropia, so that's their baby and not really inside the scope of this talk. For PDF and PDFium the only choice is X.509, so I will focus on that. When you go to File, Digital Signatures in the desktop LibreOffice, we actually show you the signature type — whether it is just a plain PDF one or actually some extension of that like PAdES, and for XML signatures it can be XAdES — and you can also see the exact certificate, and there you can see that this is an X.509 certificate. Now when it comes to the underlying encryption, as mentioned, the big gain is that X.509 also works with these various smart cards: different countries like to issue electronic IDs for their citizens — my personal one contains an X.509 certificate which is actually trusted by the government — and with the move from CryptoAPI to CNG it became possible to use this inside LibreOffice for digital signing. It's a bit similar with the modernisation effort around the hashing algorithms: just moving to a newer xmlsec library gave us modern hashing algorithms. The pain point there was that we had inherited a huge patch from OpenOffice.org times, and it was necessary to find out which pieces of it are obsolete and which still make sense, and then upstream that; but nowadays we ship a modern enough xmlsec library and this just works out of the box. So then we arrive at the actual PDF signature verification, which is the central point of this talk. What we had: you first take a PDF file, you want to tokenize it and then extract the necessary information from the token stream, so that you can decide whether the signature is actually valid or not. We had three PDF tokenizers already in the code base — optimally there would be just one, but we already had three — and all of them had different downsides. Poppler's primary problem is that it has to be an out-of-process tokenizer, which is a bit painful to integrate with. PDFium was already there, but it lacked a signature API. And we also had our own Boost.Spirit-based, template-magic tokenizer, which is, I believe, used for hybrid PDFs — when you embed the editable document inside the PDF file — but that is very hard to modify and maintain unless you are kind of living inside the Boost.Spirit library, so we decided not to develop that one further.
So instead what I did is a very simple, very focused tokenizer called vcl::filter::PDFDocument, which closely tracks the source location of each and every token coming from the PDF file. That means that it's very easy to add incremental updates at the end of the document as you are creating your signature, and given that we track where the various tokens come from in the PDF file, it is also useful for PDF images; this is now reused for the PDF image purpose, so that in case you are inserting a PDF image into an editable document and you are then exporting back to PDF, the PDF image is reused as-is, without converting it to pixels. So in case you want to verify a PDF signature with PDFium, originally there was basically no support for this, because PDFium provides a high-level C API and that C API had no concept of signatures. So a new signature header was created with a set of APIs: you can list how many signature objects you have, and for each signature object you can get the content of the signature, this PKCS#7 blob, which is what you want to hand over to the crypto library; then you can access the byte ranges, i.e. offset and size pairs, of the signed data; you can also get a sub-filter which describes how to interpret the content; and there is also other metadata like the reason of the signature, you could also call it the comment of the signature, and a timestamp which is outside the content of the signature, and stuff like this. So on the LibreOffice side the motivation is that in case we open PDF files we want to detect whether there are signatures and validate them right after opening, without any user interaction, so it makes sense to have something battle-tested, something produced by some external library which has more resources to produce some nice parsers there; so using PDFium there is great. The idea is that PDFium does not really want to depend on any crypto library, and we want to keep our NSS and CNG crypto usage as well, so that's a nice win-win: we can ask PDFium to provide just enough information so that we can verify the signature, and then we can keep our existing certificate verification code, but we can drop all the PDF parsing pieces, which are expected to be the weaker ones. One important point here is that this also means that we don't have to get into the tricky decision of which certificates we want to trust or not, and maintaining a list of trusted root CAs: this decision is done by Mozilla for NSS, and Microsoft does the same for CNG, so we can delegate this decision nicely to them, and then it's their problem, not ours. This means that I started to collect various problematic PDF signatures. For example, there are many special cases which were not supported by the old own tokenizer and are supported by PDFium: for example you can have some custom, non-comment magic string between the header and the first PDF object, and PDFium handles this nicely. Also, this is a screenshot from an external PDF validator, the DSS validator produced by the EU, which shows that a PDF signature we created is actually accepted by that validator. The question is: if this is working nicely and users can enjoy it, then how is this implemented? Perhaps you are interested in some more technical details.
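To make the signature API more concrete, here is a rough, self-contained sketch of enumerating signatures with PDFium's public C API. The function names are written from memory of public/fpdf_signature.h and may differ slightly between PDFium versions; treat the exact signatures as an assumption and check the header you build against.

```cpp
// Hypothetical standalone example, not LibreOffice code: enumerate the
// signatures of a PDF with PDFium's public C API. The PKCS#7 blob would then
// be handed to whatever crypto backend is in use (NSS, CNG, ...).
#include <cstdio>
#include <vector>

#include <fpdfview.h>
#include <fpdf_signature.h>

int main(int argc, char** argv)
{
    if (argc < 2)
        return 1;

    FPDF_InitLibrary();
    FPDF_DOCUMENT pDoc = FPDF_LoadDocument(argv[1], /*password=*/nullptr);
    if (!pDoc)
        return 1;

    int nSignatures = FPDF_GetSignatureCount(pDoc);
    for (int i = 0; i < nSignatures; ++i)
    {
        FPDF_SIGNATURE pSignature = FPDF_GetSignatureObject(pDoc, i);

        // The PKCS#7 blob: call once to get the size, then again to fetch it.
        unsigned long nContents = FPDFSignatureObj_GetContents(pSignature, nullptr, 0);
        std::vector<unsigned char> aContents(nContents);
        FPDFSignatureObj_GetContents(pSignature, aContents.data(), aContents.size());

        // Offset/length pairs describing which bytes of the file are signed.
        unsigned long nRanges = FPDFSignatureObj_GetByteRange(pSignature, nullptr, 0);
        std::vector<int> aByteRange(nRanges);
        FPDFSignatureObj_GetByteRange(pSignature, aByteRange.data(), aByteRange.size());

        // SubFilter tells you how to interpret the contents.
        unsigned long nSubFilter = FPDFSignatureObj_GetSubFilter(pSignature, nullptr, 0);
        std::vector<char> aSubFilter(nSubFilter);
        FPDFSignatureObj_GetSubFilter(pSignature, aSubFilter.data(), aSubFilter.size());

        std::printf("signature %d: %lu bytes of PKCS#7, %lu byte range entries, subfilter '%s'\n",
                    i, nContents, nRanges, nSubFilter ? aSubFilter.data() : "");
        // Verifying aContents against the signed bytes happens outside PDFium,
        // with NSS or CNG.
    }

    FPDF_CloseDocument(pDoc);
    FPDF_DestroyLibrary();
    return 0;
}
```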
So on the PDFium side: PDFium internally is written in C++, and then there is a C API on top of that; the internal C++ API is unstable but the C API is stable. For the majority of these APIs it's just a simple wrapper around the C++ core, so it's not that problematic to add them; the full list of wrappers is like 200 lines of code, something like that. What was especially tricky is how to detect incremental updates, and that's tricky because normally, in case you are a PDF viewer, which is the primary use case for PDFium, you go to the end of the file, you find the last trailer, which is kind of a table of contents, and that has offsets to the various objects in the file, and it always points to the latest version of each object. So you can quickly render the last version of that file, and in case there are multiple trailers, then you don't even parse the previous trailers and you don't even see the previous versions of those objects. This is working against the idea of PDF signature verification, where we want to make sure that no further modifications were done to the file compared to when it was signed. So that's a conflicting requirement, and it means it was necessary to tweak the PDFium side so that it's possible to visit the document from start to end and find all these trailers. The PDFium side is all unit tests, no integration tests, because they are not depending on some crypto library; basically for each and every API there are some GoogleTest tests, and also there is some documentation on how exactly the different parameters and return codes behave. Now on the LibreOffice side, what was necessary is to separate all this vcl::filter::PDFDocument usage into a single translation unit, because then you can review how exactly that's used; and when I did the PDFium-side API, I tried to keep it very close to what PDFDocument was already providing, so that switching to it was possible in a single step. And for testing I just added one of these tricky documents which were not possible to parse previously, so now there is a test case locking down that even if we would switch away from PDFium in the future, we don't lose the ability to handle these corner cases. This also means that PDFium is now able to provide all this information regarding signatures even in the tricky cases: what happens in case you have a signature, then some incremental update adding some comments, and then again a signature, which is valid; or you can have the other case, which is invalid, where you have a signature, then an incremental update actually modifying the document, which invalidates the previous signature, and then again a signature. We can handle all these cases nicely. And the advantage of PDFium is that it can do not just tokenization but also rendering, so we can easily produce a bitmap of the page saying this is how it should look like without comments, and then in case there is some non-signing incremental update we can easily determine whether it is really a comment-only one, because we do the rendering without comments, or whether it actually modifies the document. This is kind of a brute-force, high-level approach, but it's very effective: this way it's very hard to modify the document after signing without us noticing.
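As a simplified illustration of the kind of check this involves (not the actual LibreOffice logic, and the ByteRange struct below is my own), here is how one could decide, from the byte ranges and the file size, whether anything was appended after the signed revision; anything that was appended is an incremental update that still needs to be classified as comment-only or document-modifying.

```cpp
// Simplified illustration: a classic PDF signature covers two ranges with a
// gap in between for the /Contents placeholder. If the signed ranges stop
// before the end of the file, at least one more revision was appended after
// signing.
#include <algorithm>
#include <cstdint>
#include <vector>

struct ByteRange
{
    std::uint64_t nOffset;
    std::uint64_t nLength;
};

bool hasTrailingIncrementalUpdate(const std::vector<ByteRange>& rRanges, std::uint64_t nFileSize)
{
    std::uint64_t nCoveredEnd = 0;
    for (const ByteRange& rRange : rRanges)
        nCoveredEnd = std::max(nCoveredEnd, rRange.nOffset + rRange.nLength);

    // Bytes beyond the covered end are not protected by this signature.
    return nCoveredEnd < nFileSize;
}
```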
So as usual, all the work done by Collabora has to be paid for by somebody, because we are an open source consulting and product company, and in this case most of this work, not all of it but most of it, was sponsored by the Dutch Ministry of Defence, in cooperation with Nou&Off, so thanks to them for funding this. As a summary, what you can remember is that compared to the OpenOffice.org times, LibreOffice has a very solid digital signing story: we handle ODF, OOXML and PDF, various extensions on top of the standard digital signing, we have support for modern hashing algorithms and encryption, we are supposed to be interoperable with Microsoft Office and Adobe Acrobat, and the latest news is that we can create visible PDF signatures and also do all the verification with PDFium code, which should be a much more robust story when it comes to validating these signatures. Thanks for listening, and I believe right after the talk you will have the ability to ask questions. Thank you for watching. Bye bye.
|
LibreOffice has been capable of handling PDF signatures since 2016 already. There have been recent improvements in the past year, namely creating visible PDF signatures and reworking the underlying PDF signature verification functionality to use PDFium for tokenization. This means that PDF signature verification (which happens implicitly when opening any PDF file) now uses much more battle-tested code to provide this feature. Come and see how this work is implemented, where the remaining rough edges are and how you can help.
|
10.5446/52549 (DOI)
|
Hi everyone, this is Xisco Fauli, and today I'm going to talk about how to write your first test for LibreOffice. I hope everyone is safe and doing okay, and yeah, let's get started. So a little bit about me: I've been working for the Document Foundation for the last four years as a QA engineer, and I find unit tests are really important and a crucial part of the project, and of any project in general. So yeah, the motivation for this talk: I extracted this from a talk a Calc developer gave at a conference. He was talking about a refactor he did in Calc, and he ended the talk saying that a bug fixed without a unit test will get broken again in the future, while a bug fixed with a unit test will remain fixed forever. So the question is which one would you choose, and the answer is obvious: we should always go for writing a unit test when possible. So yeah, regarding this talk, I'm going to cover two main topics. The first one is how to write tests in Python, which are mainly used for testing the UI, and the second is how to write CppUnit tests: I'm going to explain how to use them for testing the import and export of different formats, how to use assertXPath for testing XML, and then how to test the layout of documents. So yeah, the prerequisites for someone who is interested in writing a first unit test: well, obviously you need the LibreOffice source code downloaded and built on your computer; then you also need some knowledge of Python and C++, and some basic knowledge should be enough. The same goes for Git: we use Git as version control, so you need some basic knowledge to get along. And yeah, you also need an account in Gerrit, which is the platform we use to review patches: if you write your unit test, you submit it to Gerrit and someone else will review it. And last but not least, you need the desire to learn, and I can tell you that if you write unit tests you're going to learn a lot. Yeah, and a disclaimer about this talk: I'm going to focus mainly on Writer tests, although if you want to write unit tests for other modules like Calc or Impress, the same principles apply, it shouldn't be much different; but yeah, this talk is mainly about Writer. So let's start with Python UI testing. Some information about it: it was written and implemented by Markus Mohrhard four or five years ago, and it inherits from Python's unittest.TestCase framework, which is the standard one in Python. So anything that you use in that framework you can use in the UI testing framework; let's say any assert you have there, you can use it in the UI testing framework. And as the name says, it's mainly used for testing the UI. At the moment we have 600 existing tests, and in Writer they are under sw/qa/uitest, so let me show you. I have my LibreOffice source build here: in sw/qa we have everything related to QA, and then in the subdirectory uitest we have everything related to UI tests. And the same goes if you change sw, which is Writer, to sc, which is Calc: you have the same, and the same for Impress; well, there are not many there, but yeah. So one disadvantage of Python UI testing is that it runs slower than CppUnit tests, so when possible it's always desirable to use CppUnit tests. Another disadvantage is that it only runs on Linux, so if you work on Windows, or if you want to implement a test for a UI bug that only happens on Windows, then you cannot do it.
Yeah, and if you want more detailed information about UI tests, everything is in this URL. So yeah, let's write our first UI test. Let me check if everything is recording on time: five minutes, okay. So first of all, I'm going to write a UI test and show you how to do it from scratch. Before we write anything, we are going to execute LibreOffice with this environment variable here, which is going to allow us to record everything we do in the UI. Basically what we are saying is: okay, I want to collect all the UI information in this file. So let me show you real quick. Now I'm executing LibreOffice with that variable. I close the Navigator. Let's say I want to insert a table. Now I change something here: I click on this text box, I press Ctrl+A, I delete the content and I type a new name, which is "my table". Then I insert it, and I have a table here. Finally, I close LibreOffice, and now I have everything logged here. So let me copy this. So yeah, basically, what did we do? I closed the Navigator, we are not going to use that. Then I inserted the table: a dialog opens, then in the element "nameedit" I type Ctrl+A, then Backspace, and then I write "my table". Finally, I click on OK in the Insert Table dialog, the table is created, and yeah, I close LibreOffice. So once we have this log, we can translate it into Python code. For that I already have this test created, because I don't have much time in this talk, so I'm going to explain every step, and you're going to see it's really simple. Basically I have a class, which I can give any name I want, I call it Fosdem, and it derives from UITestCase. Then I have the function, which must start with test_ and then the name of the test; in this case I call it insert_table, but I could call it whatever. Then we start the test by creating a document in the start center: in this case we are creating a Writer document, but I could also say calc or impress or whatever instead of writer. So we already have a Writer document created, and now what we are going to do is to execute a dialog through a UNO command, which is the insert table one. So we execute that UNO command and we open the dialog. Once we open the dialog, we use this function to get the top focus window, which at the moment is the Insert Table dialog, and we put it in this variable, xDialog. Now what's next: we press Ctrl+A, then Backspace, and then we write "my table". For that, I get the element which is called "nameedit": I say xDialog, get child, "nameedit", I put it in this variable, and then I execute the action, which is TYPE, with the key codes Ctrl+A and Backspace and the text "my table". So at this point we have already changed the text box: first it was "Table1" and we changed it to "my table". Once we are done with that, we click on the OK element, and finally we close the dialog with this element. So now we are back in the document and we can get it and analyze it: we use get_component to get the document. Finally, we can assert that the document has indeed one table, which is here, and then we can assert that the name of the first table is "my table".
So let me execute this test. For that we have this little script, which you can find here in this URL; basically you just have to tweak it a little bit to point to your own LibreOffice build and then change the file parameter. In this case the name of my test file is test one, so if we execute it, we can see that it launches LibreOffice, it opens Writer, it opens the dialog, it changes the name and then it inserts the table. That's really quick, so to see it better we can add some time.sleep calls: for instance after we open the dialog, after we change the name of the table, and finally after we insert it in the document. Now if we execute it again, it pauses a little bit, it changes the name, it inserts the table and finally it closes LibreOffice. So that's how UI tests are written, pretty straightforward. For instance, once we open the dialog, there are some useful prints when working with UI tests. If I want to see the children of the dialog, I can print xDialog's children, and if I execute it again, yeah, it takes time, I see all the elements in the dialog: we have, for instance, the ok button, the help button, the warning, the nameedit. These are all the elements in the dialog. So then, if we want to change, for instance, the number of rows, we can use that element here; you just need to check which one you need and use that one. Another interesting print we can use: once we have the child, we can call get_state_as_dict on the nameedit. Now we can see all the states of an element; here this is the nameedit element, so for instance we see the ID, we see that it's visible, we see the text of it, which is the property Text, which is "Table1", and at the moment nothing is selected. So yeah, for instance, we can use it like here: before we change the name and write anything into the text box, we can assert that by default the text is "Table1". And finally, another interesting print is to use dir() on the document and dir() on the first text table. If I execute it again, now we see a lot of information: we have all these properties in a document, so we can get the styles, the text frames, the text fields; we can use it to get anything in a document. And here we see the properties of a table: anchor, text wrap, whatever we want to use. So this is the way we check the properties of a document while we are writing UI tests. So yeah, 17 minutes, let's move on. Now we are going to see how to write a CppUnit test. In Writer they are under this folder, sw/qa/extras: for instance for ODF import we have odfimport, there is also layout, and so on, and you can see by the name of each folder what the tests in that folder are about. At the moment we have around 3000 existing tests. You have detailed information in the README and also in this URL. Now, with UI tests you can run the test and visualize what's going on, and here you can't, so you have to use the make target of the module. For instance, if you are writing a test in ooxmlexport16, then you can use this command here, and this will run all the tests in that module. And if you want to run one particular test, you can indicate the name of the test, like this one here, and then the name of the module.
So yeah, for import and export we have three kinds of CppUnit tests. We have the import tests, which load a specific file into mxComponent, which represents the UNO model of the document. Then we have the export tests, which basically do the same as the import tests, but they first import the document, then export it, and then import it again. And finally we have the export-only tests, which only export the document. So let me show you real quick how to write a CppUnit test for import and export. Let's say I have a new document and I want to test, for instance, that when we export this document to, let's say, DOCX, the text is still "FOSDEM". I have this FOSDEM document already, and for instance I can insert an image. Now I want to write the test: it's going to be an export test, and I'm exporting this to DOCX, so I go to extras, ooxmlexport, and here we have all the modules; I'm going to use ooxmlexport16, which is the last one. So I can copy an existing test, paste it here, and now I say the name of the test is going to be testFosdem and the name of the document is fosdem.odt. I want to check, for instance, that there is one shape in the document, so I can assert that getShapes() is one; that there is one page, so getPages() is one; and also that the first paragraph is "FOSDEM", and for that I get paragraph one and then its string. Now I'm going to execute this testFosdem; let's change the expected value so it's going to fail. I hope it works; 23 minutes, and it takes a while, my computer is not super fast. And it fails because, oh yeah, I didn't put the document I created into sw/qa/extras/ooxmlexport/data, and it's called fosdem.odt. Now if I execute it again: sorry, I had a previous version of the document without an image, that's why it's failing here; it's expecting one shape and there are zero in my previous document. If I execute it again, now, yeah, it says it's expecting "XFOSDEM" while the paragraph is "FOSDEM", so if I change this back, it should pass. But yeah, let's move on, because I have five minutes left or less. And yeah, now for assertXPath. So far we were using CPPUNIT_ASSERT_EQUAL; here what we are doing is: okay, I'm importing this document and then I'm exporting it, so in this case it is an export-only test, and then I parse the export, I put it in this variable, this is how the XML looks like and this is what I'm asserting. So how do you get this XPath? One way of doing it: let's say I have a document, I'm going to use this FOSDEM one, and now I change it; let's change the text to FOSDEM with bold letters and save it as another ODT. Now I can use one of the tools we have to analyze documents: one of them is an ODF diff script, which allows me to diff the two documents. So I diff the first document without the bold letters and the second one with the bold letters. Yeah, this is information about the generator, and here, aha:
yeah, there is one image in one document and not in the other, a document without an image. But with that I see, for instance, that in one document we have this text style, we have FOSDEM in bold, and in the other one we don't have it; we also don't have the image. So then we can build the XPath to assert that, for instance, this element is there. So this is one way to use the XML parsing. Finally, if we want to test the layout, we have this: now I have this document here, which I can open, and now I have the layout as an XML document; yeah, that's 30 minutes; and then you can also assertXPath on that document. So, some useful information: if you want to write your first test today, you have a list of missing Python UI tests here and the same for the CppUnit tests here. And thank you for watching, and I hope you write your first unit test. Thank you. You're not going to regret it. Bye-bye.
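To recap the CppUnit part of the talk in code form, here is a rough sketch of the two tests built up in the demo. The macro and helper names are the ones used in sw/qa/extras around that time and are written from memory, so they may differ between branches; fosdem.odt is the example document from the talk and fosdem-bold.odt is a hypothetical second one.

```cpp
// Sketch of tests as they would sit in sw/qa/extras/ooxmlexport/ooxmlexport16.cxx,
// which already pulls in swmodeltestbase.hxx; the test documents live in
// sw/qa/extras/ooxmlexport/data/.

// Import + export + import again: the assertions run both for the original ODT
// and for the DOCX round-trip result.
DECLARE_OOXMLEXPORT_TEST(testFosdem, "fosdem.odt")
{
    CPPUNIT_ASSERT_EQUAL(1, getShapes()); // the inserted image survived
    CPPUNIT_ASSERT_EQUAL(1, getPages());  // still a single page
    CPPUNIT_ASSERT_EQUAL(OUString("FOSDEM"), getParagraph(1)->getString());
}

// Export-only variant: check the written markup itself with assertXPath.
DECLARE_OOXMLEXPORT_EXPORTONLY_TEST(testFosdemBold, "fosdem-bold.odt")
{
    xmlDocUniquePtr pXmlDoc = parseExport("word/document.xml");
    // The bold run ends up as a <w:b/> inside the run properties.
    assertXPath(pXmlDoc, "/w:document/w:body/w:p[1]/w:r[1]/w:rPr/w:b", 1);
}
```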
|
This talk will walk you through the process of writing your first LibreOffice unit test, either with Python or with C++.
|
10.5446/52551 (DOI)
|
Hello, my name is Michael Stahl. I work for a new company called Allotropia and I'm going to give you a status update about editable nested text fields in LibreOffice Writer. First, what are fields in Writer? A field is a magic character inside of paragraph text in the document model. At layout time, this magic character is then expanded to generate some piece of text. This generated text is basically plain text. It cannot have formatting inside or anything fancy like that. So it's a very simple mechanism basically to generate text. On the other hand in Word, fields are a bit more flexible. So a field is represented in two parts. It has a field instruction and a field result. Both of these parts are user-editable text that can have internal formatting applied to it. Other fancy things like paragraph breaks. You could put tables inside of either of these parts and you can even put fields inside of the field instruction or inside of the field result so you can have nested fields basically. The letter is very useful for conditional text for example. Word uses fields for all sorts of different things because they are so flexible. For example, an index like a table of content is a field in Word. Whereas in Writer, there are further more complex use cases. There are entirely separate objects in the document model. So there is some, an entire class hierarchy of indexes for example that have nothing to do with fields. So for most users, this difference is not all that important because they don't need the flexibility that Word fields provide. So it's just an implementation detail for most users. But some users need a bit more flexibility and want to edit, for example, the field instruction to get exactly the result that they want to achieve. So for those power users, Writer has been a bit limiting in that regard. So what is the situation in Writer with regard to the Word kind of fields? It turns out that many years ago already during OpenOffice.org times, an experimental implementation called fieldmarks was added, which is a representation of Word fields in Writer's document model. And it is implemented as a kind of bookmark. And the most important, there are several different kind of fieldmarks and the most important one is the text fieldmark. The way this works is that it has two dummy characters, one at the start and one at the end. And then between those dummy characters in the paragraph text, there is the field result. And the field instruction is not stored in the paragraph text. It is a separate string property that is associated with this fieldmark object. And obviously an obvious problem with this is that it cannot actually represent Word fields with full fidelity. Because as I said, Word fields can have any sort of content basically in the field instruction. And well, here it's just a plain text string. So, yeah, it was insufficient feature-wise and the implementation was also very renowned for having a lot of bugs and funny behaviors. So, yeah, that was basically the state we found when we started this work. So let's have a quick look at a class hierarchy diagram for these fieldmarks. And what you can see here is that the fieldmark implementation class is the one in the middle with the gray background. And it's derived from two base classes. The one is a mark base, which is the base class for all bookmark implementations. And the other one is an interface that is specific to fieldmarks. And then it has several subclasses. 
The one that interests us, and which this entire talk is about, is the text fieldmark. Then there are a couple of other ones, like for example the checkbox fieldmark, which is a bit different because it is not a range: in the paragraph text it is just a single character and it can't do anything other than paint a checkbox. And then we have these two at the bottom, which are also a bit special because they essentially paint UI widgets in the document view: for the drop-down, it would pop up a drop-down selection widget. And then there is this date fieldmark, which is an oddity, let's put it that way, because in Word this is not actually a field, it's an SDT, a structured document tag, and those do not have a field instruction. So, yeah, so much for that. Now, what have we actually done about this? All of this work is in the release 6.4. We have first added a third dummy character for the field separator. This means that we can now insert the field instruction also inside of the paragraph text, and with this we can represent nested fields: we can have a fieldmark inside of the field instruction now. So that solves our major feature gap here, basically. Then we had to fix a lot of problems with these field characters being accidentally deleted while editing the text of the document, just by an ordinary backspace or whatever; these sorts of problems are now fixed. Another issue was that there was no working undo for inserting or removing a fieldmark, and we fixed that too. Generally there was a problem with the previous implementation with some bad programming practices: there was lots of defensive programming to prevent the implementation from crashing when things were in an inconsistent state, but the problem with that is that it just makes it more difficult to fix bugs. So we added lots of assertions to check when invariants are violated, and with the unit tests already, and then again with the automated crash testing, we found lots of documents where crashes happened, and we fixed them. One particularly funny issue was that several import filters in Writer were able to insert control characters into the paragraph text; Writer uses some magic characters for special purposes, like its own text fields, for example, so this could turn into an issue, and we are glad that we are now preventing it. Another thing we have done is that we have added a configuration setting that basically forces all Word fields in ODF and DOCX import to be imported as fieldmarks and not converted to Writer fields. This is not enabled by default, because currently, usability-wise, it's usually better to convert the fields to Writer fields, because those can be expanded and it's more user friendly that way. But if you want to avoid any kind of data loss, then you can use this configuration setting. So that was for 6.4, and more recently we have added the ability to show or hide the field result and the field instruction, just like Word does, so you can use the field names menu item; this will be in the 7.1 release. Basically, you can see a screenshot: on the left is Word with the field instruction at the top and the field result at the bottom; yeah, well, the field result is actually some text I manually typed in, for reasons that I'll explain later. And on the right you can see the same document with Writer, and basically it looks the same now.
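To visualise what the third dummy character buys us, here is a purely conceptual sketch of how a nested field sits in the paragraph text. This is not the actual Writer code: the real control characters are constants in the Writer core (names along the lines of CH_TXT_ATR_FIELDSTART, from memory), and the brackets below are just stand-ins to show the structure.

```cpp
// Conceptual sketch only: field instruction and field result both live in the
// paragraph text, delimited by three dummy characters, so either part can in
// turn contain another fieldmark.
#include <iostream>
#include <string>

const char FIELD_START = '['; // stand-in for the field-start dummy character
const char FIELD_SEP = '|';   // stand-in for the new field-separator character
const char FIELD_END = ']';   // stand-in for the field-end dummy character

// Build "instruction | result" as it would sit inside the paragraph text.
std::string makeFieldmark(const std::string& rInstruction, const std::string& rResult)
{
    return FIELD_START + rInstruction + FIELD_SEP + rResult + FIELD_END;
}

int main()
{
    // An IF field whose instruction embeds another field, roughly the shape of
    // Word's conditional text.
    std::string aInner = makeFieldmark("DATE", "01/01/2021");
    std::string aOuter
        = makeFieldmark("IF " + aInner + " = \"01/01/2021\" \"yes\" \"no\"", "yes");
    std::cout << aOuter << '\n';
    return 0;
}
```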
And what remains to be done here is that currently writer is not able to actually expand the field instruction into a new field result. And this would be quite a bit of work. So, yeah, well, we hope that maybe someday we will have some funding to do this, but not currently. So, could also quickly look at this in writer itself. So this is this document and I can now select something and apply formatting to it or type something into it. And when I toggle this now, it shows the field result and this is also editable. And toggle back and we see the field instruction again. So, that was my presentation and thank you for listening.
|
Word fields are far more flexible than ODF/Writer fields - to most users, the additional expressiveness doesn't matter, but for some specialized use cases, Writer is limited. Writer had a rudimentary implementation of Word compatible fields for about 10 years, but it never worked well. We have added the ability to round-trip nested Word fields in RTF/DOCX formats, and fixed a bunch of long standing bugs along the way.
|
10.5446/52552 (DOI)
|
Hi, I'm Tomaž Vajngerl. I'm from Collabora and I present to you a built-in, X-Ray-like UNO object inspector. First I would like to introduce what UNO object inspectors are. The best known such tool is the X-Ray tool, but there are also other tools like MRI, and there are also some tools that serve as code examples in the SDK. So what is an object inspector? For the inspected object, it shows what kind of properties the UNO object supports, what methods it supports, what interfaces it implements, and of course it also shows the values themselves. So, here is how the X-Ray tool looks. This is an example, and as you can see, on the left there are properties, methods, services, interfaces and listeners that we can inspect, and in this list view we see what methods there are, what kind of parameters we can pass in, and on which interface each method is implemented. There are also properties, it shows all the properties, and so on. This is generally what an object inspector is. So what's the problem with the X-Ray tool? X-Ray is an extension for LibreOffice, so it's not available by default when LibreOffice is installed; you have to go and find it yourself, and many don't even know where to find it or even that it exists. It's also not that easy to run X-Ray, because you have to go into the macro editor and run it from there: you have to write something like "xray ThisComponent", where ThisComponent could be the current document. Or if you want to inspect one specific sheet, you have to get the document, get all the sheets, then select the sheet you want and then say "xray mySheet", whatever that sheet is. And this is not something that newcomers to LibreOffice will know; this is something a little bit more advanced. And we can do better than this, we can make it a lot simpler. So this is why there was the idea to implement a built-in LibreOffice object inspector. The idea came up, but for many years nobody implemented anything like this, because it's not an easy task. Then TDF saw that this is very important and put up a tender for implementing this tool, and Collabora was selected for implementing it. So of course thanks to TDF for making the work on this tool possible; it will probably be very helpful for a lot of people. Yeah, so let's talk about the idea, what we want to implement. We want development tools that are built into LibreOffice, that we have out of the box when we start LibreOffice. The idea is to have a dockable window at the bottom of the document, so we don't have to go into the macro editor and manually say which object we want to inspect. By design it's very similar to the developer tools that are in popular browsers. What we want is two trees, on the left-hand side and the right-hand side. On the left-hand side we just want a subset of the document object model: for example the document, the sheets in Calc, the slides in Impress, the paragraphs and similar things in a Writer document. And then on the right side we want a tree view, which is the object inspector, where we can see the currently selected object and inspect its values, inspect what methods it supports and so on. And then also a point-and-click functionality: the form of the point-and-click functionality is that we can just select anything anywhere in the document and inspect the selection inside our object inspector.
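As an aside, here is roughly what "inspecting an object" means at the UNO level, using the published introspection API from C++. This is just an illustration, not the code of the new built-in tool; dumpObject is a hypothetical helper, and it assumes an already bootstrapped component context plus some UNO object to inspect.

```cpp
#include <com/sun/star/beans/theIntrospection.hpp>
#include <com/sun/star/beans/XIntrospection.hpp>
#include <com/sun/star/beans/XIntrospectionAccess.hpp>
#include <com/sun/star/beans/MethodConcept.hpp>
#include <com/sun/star/beans/PropertyConcept.hpp>
#include <com/sun/star/lang/XServiceInfo.hpp>
#include <com/sun/star/reflection/XIdlMethod.hpp>
#include <com/sun/star/uno/XComponentContext.hpp>
#include <rtl/ustring.hxx>
#include <iostream>

using namespace com::sun::star;

// Dump roughly what an inspector shows: implementation name, properties, methods.
void dumpObject(const uno::Reference<uno::XComponentContext>& xContext,
                const uno::Reference<uno::XInterface>& xTarget)
{
    uno::Reference<lang::XServiceInfo> xServiceInfo(xTarget, uno::UNO_QUERY);
    if (xServiceInfo.is())
        std::cout << "implementation: "
                  << OUStringToOString(xServiceInfo->getImplementationName(),
                                       RTL_TEXTENCODING_UTF8).getStr()
                  << "\n";

    // The Introspection singleton gives access to properties and methods.
    uno::Reference<beans::XIntrospection> xIntrospection = beans::theIntrospection::get(xContext);
    uno::Reference<beans::XIntrospectionAccess> xAccess = xIntrospection->inspect(uno::Any(xTarget));
    if (!xAccess.is())
        return;

    for (const beans::Property& rProperty : xAccess->getProperties(beans::PropertyConcept::ALL))
        std::cout << "property: "
                  << OUStringToOString(rProperty.Name, RTL_TEXTENCODING_UTF8).getStr() << "\n";

    for (const uno::Reference<reflection::XIdlMethod>& xMethod :
         xAccess->getMethods(beans::MethodConcept::ALL))
        std::cout << "method: "
                  << OUStringToOString(xMethod->getName(), RTL_TEXTENCODING_UTF8).getStr() << "\n";
}
```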
So a little bit more detail about the left hand side document object model tree view. This doesn't show anything but just a small set of the dome as I said before. But the user then can just looked at the tree and select one of the objects that he's most interested in. This is easier for like more, more newer users because they don't have any idea how the document object model is structured. So it's easier for them to navigate it and see what the object is. They can quickly go to the shape they want to go to the style and inspect it, for example. So the user can then select the object. He's interesting and this one then we show you the object inspector and he can go from there on. So what we want on the right hand side, this is the document object tree view. This should be similar that it's already existent in the macro editor which is at the watch window. Which already shows us like a three view of objects and our values and so on. Probably exactly what we want as an object inspector. And of course with this watch window we can traverse the whole tree and go very deep inside the document object model. This is possible. And if there are for example collections and arrays, this is something that watch window can handle and we can see all the elements of the arrays and also the named arrays, what are the names, what are the values and we can go inside and just traverse everything. And as I already said, there's also the point and click functionality that we want to select or click on a specific object in the document and this should be then shown in the object inspector. So what's the current state of this? Because this is not finished, we are still in the process of completing it. The functionality is not yet done but let's talk about what is already done and see then what still needs to be done. So currently we already have the docking window. We can already enable it under help with development tools but this can still change and the location can still change. And when we enable it, we get like bottom docking window which has currently already a left side and a right side. On the left side it's already mostly implemented the view of the document object model. But the point and click functionality and the right side document inspector are still not yet done, it's still a work in progress. So on the left hand side what's implemented already is a lot of unobjects have already been added. So we get a root object which is document and then in writer we have paragraphs, shapes, tables, frames, graphic objects which are different than shapes and embedded objects. We have all objects which are for example charts and there are also style families which then have sub styles as subcutory. For Cog you have access to sheets and then you have per sheet you have what are shapes in one sheet, then what are charts in a sheet and then what are table in a sheet. Then also like in writer you have styles and style families. Draw and impress are very similar, in draw you have pages, in impress you have slides but these are more or less the same thing. And then you can get shapes which is per page or per slide and master slides and again which are the style families and styles. So on the right hand side currently already shows implementation name, it has a list of interfaces, properties, types and methods but this don't show the values, it just shows the basic information. It doesn't currently use the same code as watch window but I'll try to make this use the same code if this is even possible. 
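As a rough illustration of what the point-and-click side relies on: whatever the user has selected can be obtained through the published UNO API via the document model's current controller. The helper below is mine and is not necessarily how the built-in tool is wired up internally.

```cpp
// The current controller of a document model implements XSelectionSupplier;
// the current selection comes back as an Any, which is exactly the kind of
// object an inspector can then show on the right-hand side.
#include <com/sun/star/frame/XModel.hpp>
#include <com/sun/star/view/XSelectionSupplier.hpp>
#include <com/sun/star/uno/Any.hxx>

using namespace com::sun::star;

uno::Any getCurrentSelection(const uno::Reference<frame::XModel>& xModel)
{
    uno::Any aSelection;
    if (!xModel.is())
        return aSelection;

    uno::Reference<view::XSelectionSupplier> xSupplier(xModel->getCurrentController(),
                                                       uno::UNO_QUERY);
    if (xSupplier.is())
        aSelection = xSupplier->getSelection();

    // In Writer this is typically a text range collection, in Calc a cell
    // range or a shape, in Impress a shape collection.
    return aSelection;
}
```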
So as I said there's no values and no types yet but other things are. The UI still needs working, maybe there should be not just one tree view but maybe this should be different for each property, types and methods. Currently everything is one view. This will change in the future, make it easier to work it. And point and click, okay, in the point and click. Point and click is what is done is that we have current selection inside the left hand side and this is what object is currently selected. Okay, for the last thing I want to give you a demo. Okay, here we have a document that I've written and it has a lot of everything. So it's a very good example to see how we can inspect all the objects. So to enable the developer tools we can go to help and we have here development tools. We can open the the the docking window and we have as you can see this left hand side and the right hand side. On the left hand side we can just click for example on the document. This is the real document and so we can see the implementation. Here and we can already see what services it's implemented, what are the interfaces that the object implements. All the properties what we don't see is, as I already said, is the values and all the methods. All the methods also means what are the input parameters and what it returns. But this is still working progress. So now we go back to the left hand side we see that we have paragraphs here and we can just paragraph one. This is probably up there and paragraph through but when we click the objects, the object is inspected in the right hand side. Because we currently don't show any values it's hard to tell one paragraph further than another but there is a difference. So for shapes we have a lot of graphic shapes, text graphic objects, we have frames. Actually these shapes have many many things inside that are also later can be selected independently. So then we have tables or we have one, two, three, four, five, six tables and we can just select any rule to the table. And we can then inspect it see okay what this table supports what are the methods. And then frames, same thing, graphic objects, embedded objects. There are some embedded objects in this document so they are shown here, text embedded objects. And then styles, these are probably the same styles that are not style inspector but styles these are very similar styles. And here paragraph styles we can see standard heading text bodies like alternative names to one of these, a lot of these styles. Then here page styles which are also correspond to this and so on. So when we click on the one of these styles then we see okay this is we now inspect this style. So similar thing for Excel Excel, so similar thing for Calk. So for Calk we have a similar situation in Calk we have main document and then we have sheets. And what and she has shapes charts and pivot tables. Currently on this sheet there is no pivot table but we have one chart and we have two shapes. There are two shapes because a chart is also regarded as a shape. So probably if we delete a chart go back there is only one shape and no chart. On the second sheet there is the pivot table and we can go here and see okay this is pivot table and we can again on the right hand side inspect what are the properties and what method it supports and what interfaces it implements. And again we have cell styles and page styles which you can also inspect. Okay that's it thanks for listening. I hope this was entertaining and bye. Thank you. 
One question was whether this can be used for collecting usage data. The problem is that this is mostly just for inspecting objects, and what we would really need for collecting usage is something that listens to the user interface commands, logs them to a file and then sends them at convenient times, so it's not really related to this. Any other questions? Maybe one note: because this was recorded quite some time ago, I already show objects on the right-hand side now, so I hope to get that code into master quickly, and anybody can check it out and give me comments. Yes, the question is whether the behaviour is similar to a browser's developer tools. Yeah, quite similar, but it's LibreOffice; I don't think it's so much different from X-Ray, but it's a little bit more integrated into LibreOffice itself. Any more? Connecting it to the Navigator, that would be interesting. I'm not sure about code completion in the macro editor; I don't know that code, it's kind of working, but I'm not familiar with it. The next question is about the style inspector, whether it's part of this. The style inspector is a different thing: this is a tool for developers, macro developers or somebody that develops extensions, while the style inspector is for somebody that works with a big document.
|
There have been many implementations of different object inspector tools for LibreOffice, the most popular being X-Ray and MRI, but they were only available as extensions. These are invaluable tools to inspect and to better understand the structure of UNO objects, which is particularly useful for writing macros or extensions. The problem is that the existing object inspector tools are not so simple to use, because of their nature as an extension; in addition the user has to search for and install them on top of LibreOffice. For these reasons TDF has offered a tender to implement a built-in tool that is more integrated into LibreOffice and is always available to the user. The tender has been awarded to Collabora and we are in the process of implementing it. In this talk we want to present the tool, what has been done so far and how it will look when finished.
|
10.5446/52553 (DOI)
|
I will talk about the basics of interoperability based on the Open Document Format. The Open Document Format is the only open standard for document formats, it is the native document format of LibreOffice and other products, and it is instrumental for interoperability. Unfortunately, almost no one focuses on the concept of interoperability: we are not using standard formats and fonts, we use formats dictated by a single vendor, pseudo-standards, and they have been developed to limit and reduce interoperability. We don't know about the XML contents of files, which is the basis of interoperability, and then when we open documents we complain if the document is messed up. The problem is that we completely ignore the fact that the document has to be created in a specific way to be interoperable. The basics of interoperability are simple. Use the Open Document standard file format. Focus on contents and not on the visual aspect: no fancy fonts, no fancy formatting; if you want fancy formatting, that is for printing, not for sharing the document with other people. Use styles, which means repeatable document elements. Use a specific template for each task, and use free fonts available on any operating system. Document characteristics have to be predictable: if you create a page in a specific way, you have to be able to understand how the page will show up on your computer. Hardware standards help us here, because the form factor forces the adoption of the standard. Software standards are exactly like plugs: the standard is behind them. All of these are plugs which do not look standard, but the real standard here is electricity, which is behind the plug. Independently of the plug, you can use a power supply or charge a smartphone almost everywhere in the world, because the standard of electricity is respected. We discovered the importance of standards for documents when HTML was created. HTML is a standard for web documents, and it was developed as a completely independent standard, independent from a vendor, independent from a platform, independent from an operating system. Today we have a format which is able to show content through a browser independently from what created the content, the platform where the content has been created, and so on and so forth, thanks to Tim Berners-Lee and the World Wide Web Consortium. Where are the standards to share documents? Documents have to be created using open data and document formats, open document formats. They go through the software, and the software shows them through the user interface but shares them using open protocols for interoperability. This is not visible to the user, so the users have to choose the right components to get to interoperability. Unfortunately, what the majority of users are adopting are not the right components for interoperability. There is only one true document standard for interoperability, the only one which offers freedom of choice, and that is the Open Document Format: the format of LibreOffice but also of other office suites, including Microsoft Office and other free solutions. ODF is based on a very simple philosophy: it was designed to be vendor-neutral and to use existing standards wherever possible. Let's take one of those standards, the easiest one, the most common one, the one that everyone knows: the calendar. ODF uses the Gregorian calendar to represent dates, and it represents dates as they are.
Other formats they show dates as they are but they don't represent dates as they are. Microsoft Office documents are not compatible with the Gregorian calendar. Of course, if you use a standard, this means that the software has to adapt to the standard because this is the only way to provide interoperability. ODEF has been tweaked and continues to be tweaked to respect the open document format standard. Other formats represent the software, open document format represents the user. The basic concept of ODEF, the format is solid and robust, is consistent across operating system, is truly interoperable and is predictable. If you create a document in a way, it shows the same way in every operating system. ODEF is a better standard format for user of personal productivity software. There is no other format which provides the same quality. Open document format is independent from a single product, is interoperable as it allows the transparent sharing of data between heterogeneous system. It is neutral. You don't have to adopt or buy which is even worse a specific product but you can choose it based on your features, quality, on your preferences. And it's perennial. The concept of a standard is not backwards compatibility which is something that proprietary software vendor want to convince us, it's a good choice. There is no backwards or forward compatibility. There is a standard. The standard doesn't change. And therefore new features of the standard will be always compatible with previous features. You will be able to open an old standard document with a new software because the components of the standard will be the same forever. So ODEF is decoupling content and software. The proprietary solutions, software and content are strictly related. So to get the content in a certain way you have to use a specific software or a clone of that software which is worse. If you want to have interoperability you have to use a software which is decoupled from contents and it's getting to content through the standard, through the standard format. The standard format is controlled by the user and not by the software vendor. The Office Open XML philosophy, Office Open XML is the Microsoft Office format has been designed to represent what is created by Microsoft product and to interoperate with the Microsoft environment. There is no thinking for interoperability. So when interoperability is achieved is not because the software respects a standard but is because it mimics the behavior of Microsoft product and this is not interoperability. In addition we have to make it clear that the Office Open XML format used today in 2020 or in 2021 so the Office default for DocEx, XLSX and PPTX format is transitional Office Open XML which is recognized also by ISO as a proprietary document format. That was created as a bridge from the previous old legacy Microsoft format and the approved ISO standard but unfortunately the approved ISO standard which is called Office Open XML strict is not adopted by user because it's not publicized and is the last option available when you choose a file save us in Microsoft Office format. So 100% of existing Office Open XML file we are referring to and we are using our proprietary file. They are not standard file, they are proprietary file. So there is a strategy different between ODF and Office Open XML. 
ODF has been designed from the ground up as a document standard to provide interoperability for the next 2050 and maybe and hopefully more years and to liberate users from the locking strategy built into yesterday and today, today proprietary format. Office Open XML has been designed as a pseudo standard to propagate yesterday document issues especially lock-in for the next 20 to 50 years and it's going against users and interoperability. Documents format is extremely important. Documents are one of the most important object that move from citizen to government, government to government and from government to citizens. So updating a reproduction of these documents is extremely important. A common problem is that documents governed by pseudo standards are locking users into a particular platform, proprietary operating system and application. So the document format can be a blocker for the development of interoperability, can be a blocker for sharing knowledge and data in a transparent way. Government should be platform independent and allow only document standards. Some governments have made the decision of supporting true document standard but the reality unfortunately is that they are not created a way to control that the true document standards are really implemented and therefore there is still too many pseudo standard document around. The real, the unfortunate situation is that these tweaked pseudo standards for citizen to pay to create documents as they have to purchase a proprietary license or they have to accept the intrusive conditions of the cloud based platform. If you are using a free cloud based platform you are the product, which means that what you create with software is that you get filtered and stored to profile you as a potential advertising buyer and to profile you also in terms of privacy and security. So only standard associated to free open source software can solve this problem. Open document format is simple as no complexity and XML files are human readable. On the contrary, Office Open XML format are integrate the highest possible complexity allowed by technology and XML files are not human readable and this is what the XML standard tells not to do. So standards are key for interoperability and interoperability is the ability of users to share content through information and communication technology systems. There are several definitions Lego is a proprietary product but the reality is that it is a proprietary product with a standard way of interaction and this makes it possible to use Lego bricks 60 years old today because the interoperability is the same over the years. So the standards are key for interoperability and have to be designed and tested to ensure that interoperability is respected. And where are we with interoperability? We have all the technical and the syntactic elements. Unfortunately, when we go to the semantic, there are technologies, counter-alls that are working around and tweaking the semantics of the standard to make it non-standard. And then when you get to organizational level, you have the human resistance to change. This is what we have to work and solve to get to full interoperability. And to get to interoperability, users have to understand that there must be a change of paradigm from an analogic document which are focused on the printed version. They retain the visual aspect and are created for others to read to a digital document which represents the future and is focused on the exchange of contents. 
So we don't have to create a document for printing. We have to create documents for sharing. We have to create a document to preserve the contents and not the aspect. The content is where the value is. The aspect is just an additional element of the document. So we have to create a document for others to add value. And there are several elements of interoperability. Some of them we have already covered them. Application must offer templates and defaults for interoperability. LibreOffice offers templates and styles which make interoperability easier to achieve. For organization, it is important to adopt a single standard document format. Adopt application which are recognized for the conformance to the document standard. And train users on how to create interoperable documents. For users, it is important to capture information at the IS level. Add document metadata. Provide a notation for accessibility. This, of course, is necessary if you have to interoperate with people with limited physical capabilities and use styles for your document. And that is important. There are tricks and tips that can be added. The time is not enough to provide everything, but the LibreOffice community can help you in achieving interoperability. So thank you for listening. And I hope to meet you in a physical event to talk about open standards and interoperability in the future. Thank you again.
|
ODF, LibreOffice native document format, is the only standard file format which allows full interoperability. ODF is robust, predictable, resilient, well documented, and based on existing standards. It is the perfect answer for digital content sharing.
|
10.5446/52556 (DOI)
|
Hello, my name is Kuniyasu Suzaki. The title of my talk is TrustedRV: a 64-bit RISC-V TEE with a Secure Co-Processor as Root of Trust. I will talk about the hardware and software implementation of a RISC-V TEE and a secure co-processor. This is joint work with researchers of TRASIO, AIST, SECOM, and NSITEXE. TRASIO is a research association supported by several organizations. TRASIO has had a national project to develop RISC-V security since 2018, and I will talk about the current results. This slide shows today's contents. First, I will introduce TEE and RISC-V briefly and point out four issues: root of trust, programming environment, TA (trusted application) management, and remote attestation. After that, I will introduce the security technologies offered by TRASIO. They are related to the four TEE issues and include hardware and software security technologies. At last, I mention future work and the conclusion. First I want to introduce the TEE. A TEE is an execution environment of the CPU isolated from the OS. However, the TEE is not the only isolated execution environment: for example, SMM, Intel ME, and TPM are also isolated from the OS. The main difference is that a TEE offers a programming environment to third parties, so normal users can use it for their critical processing. A TEE separates the execution environment into two worlds: the normal world for the normal OS and the secure world for critical applications; they are also named REE and TEE. Popular CPUs have TEEs, for example Intel SGX, ARM TrustZone, and AMD SEV. RISC-V also has TEEs; the details are explained on the next slide. RISC-V is an open ISA, instruction set architecture, maintained by RISC-V International. RISC-V has a modular design and users can add extensions, so RISC-V has become popular and there are many FPGA and chip implementations. RISC-V also fits well with open source. TEE extensions have been implemented in academia and industry; we use Keystone among them because it is an active open source project. From here, I will show four problems of TEEs. The first one is the root of trust. A TEE is just an isolated execution environment and cannot keep keys and certificates, so a secure co-processor as a root of trust is needed to keep keys and certificates. Remote attestation, which I explain later, must be based on the root of trust. So many CPUs which have a TEE include a root of trust: for example, Intel SGX has the Intel ME and AMD SEV has the PSP. ARM TrustZone does not have a root of trust, but there are IPs for a root of trust. The situation is the same on RISC-V. Unfortunately, the details, except for OpenTitan, are not open; we cannot verify them and need to trust them. In other words, OpenTitan is our competitor. The difference is that we co-design the TEE and the secure co-processor, which I explain later. The second problem is that TEEs have no common programming style: each TEE has its own SDK.
Some of them are used for plural CPU architectures but they are not general.It means there are no compatibility and no portability for different CPU architecture.Among them, Global Platform TE internal API is designed for CPU independent.It is used for ARM trust zone on smartphone mainly but we extended it for other CPU architectures.The third problem is management of TA, trust application on TE.The management means installation, update and delete of TA.A TA is developed by a third party but the supplier and client want to confirm the safety of each other.The installation, update and delete must be based on the trust establishment from theView of Platform and TA.It means that management of TA must be safe.Unfortunately, each CPU has each security procedure to run a TA.Therefore, the management must follow each CPU security procedure.The host problem is remote attestation which certifies the hardware platform and target software.It verifies that a genuine TE runs on a genuine platform.It resembles to the third problem but remote attestation is the basement of Action Nob install, update and delete.The remote attestation depends on device keys and certificate which is managed safely byroot of trust.From here, I will talk about security technologies offered by TORUSIO.They are designed to solve the mentioned 4 problems, one hardware solution and three software solutions.The first one is trustitrv platform consisted of 64-bit risk 5 and 32-bit risk 5 security processor.I will talk how trustitrv is designed.This slide shows the normal 64-bit risk 5.It runs Linux but no TE.Risk 5 has 3 prefabricate mode,machine mode,supervisor mode and user mode.Linux and application use supervisor mode and user mode in general.This figure shows Keystone which is the risk 5's TE.Keystone utilizes the mechanism on PNP physical memory protection which isolate memory for an execution entity named Enclave.Dotted line indicates an Enclave protected by PNP.The creation of Enclave is managed by security monitor which boots before the creation of RE and TE.In this figure,one Enclave is used for security monitor which runs on machine mode,the basement mode of risk 5.One Enclave is used for Linux which runs on supervisor mode and user mode.Two Enclaves are used for trust application in TE.TE's Enclave has a runtime monitor which works as an OS kernel on supervisor mode and trust application on user mode.Unfortunately,the original Keystone has no root of trust to keep device keys and certificates.This slide shows the trustRuby which has Keystone with SecureCoprocessor.The 64-bit risk 5 core is named up core.The SecureCoprocessor is one core 32-bit risk 5 named SecureUnit.It communicates to up core with interrupt and shared memory.SecureUnit has a machine mode only and runs Zephyr real-time OS.SecureUnit has a secure storage to keep keys and certificates.This figure shows FPGA implementation.We use Xilinx VC-707 to implement trustRuby.SecureUnit has own peripherals isolated from up core.The peripherals include SPI Flash, real-time clock, UART, and compact flash.We also made a simulator to develop the system software.It is based on MPS Risk 5 simulator.The simulator offers windows for up core console and SecureUnit console.It also has an indicator which shows activity of cores on up core and SecureUnit.This slide shows the software structure for SecureUnit.The communication to the SecureUnit is limited.SecureUnit can not connect to SecureUnit directly.The communication must be via trust application and SecureMonitor.The 
critical information is managed each level.This slide shows security layer of T and SecureCop processor.TE and SecureProcessor can add one security layer from OS on RE.They offer some design choices.TE and SecureCop processor can set up parallel and allows the direct access from OS.It may be useful from the view of OS.And on the other hand, trust it RB select two layers which stack SecureCop processor onTE.This structure makes critical information for from OS.Two layers structure will cause additional delay but makes wide design choices for critical applications.The other layers structure will cause some applications needs the performance on TE and other applications needs the strict and secure processing.This table compares the other SecureCop processors.The targets are Google Open Titan, Lombards Risk 5 Crypt Manager, Sylex Insight E-Secure and our trustit RV.This table compares coimplementation OS on SecureCop processor,communication method to the main CPU, Accelerator, Peripherals, Antitampling, Target and MISC.The designs are slightly different but almost same because they know the requirements for SecureCop processors.The main difference of trustit RV is that the design assumes TE on 64-bit risk 5 namely Keystone.Trustit RV assumes that critical processing and critical information on SecureUnit is used by trustit application on Keystone.From here I will talk about software security technologies.The second solution is for TE's programming environment.We use global platform TE internal API.The API is CPU independent and popular for smartphones.The examples are shown in this slide.Kinbi and QSE are the trustit OS on Amtrafzone which offers GT internal API.The numbers are high in smartphones.In addition, there is open source implementation OPTI.We have developed some applications on OPTI and want to port them to interest.gx and risk 5 Keystone.It is our motivation to develop a popular library for the GT internal API.We design the GP internal API library to be portable for risk 5 Keystone and interest gx.We reuse existing SDK to implement the library.The library offers new abstraction and hides the SDK detail.To implement it, there are some implementation challenges.The first one is the combination of GP internal API and Cypher Suite is too many.So we pick up some important GP internal APIs.The second challenge is that some API depends on CPU architecture.We separate API into CPU architecture dependent or independent.And we implement them properly.The third challenge is to integrate GP internal API to SDK because SDK includes EDL, Enclave Definition Language.EDL creates the code for Oracle, request from TE to RE to check the pointer and boundary.Oracle means the need the help of Linux.The library must take care of the EDL.We separate GP internal API into CPU architecture dependent and independent.This table summarizes the API categories and functions.The detail is described in the references.Please read or watch them.The third solution is for TA Management Framework.The TA Management Framework is proposed by ITF as TIP, Trusted Execution Environment Provisioning.TIP is a protocol to manage TA to install, update and delete.This figure shows the management components.On each device, there are TIP agents in TE and TIP blocker in RE.On server, there is TAM, Trusted Application Manager.The components use keys to certify the TE, TAM and TA.This figure shows the implementation on RISC 5 Keystone.The detail is presented at 4thDEM 2021 by our colleagues.The fourth solution is Remote Attestation. 
Remote Attestation is mechanism to offer platform authentication, platform integrity and binary integrity. Remote Attestation is achieved before the execution of TA and keeps the safe execution of TA on the TE.It compensates the TIP's Management Framework.RM Assume some conditions on each and server.The each platform must keep the keys and certify safely on root of trust.The server must know the information of hash of TA and device public key.RM RISC 5 Keystone has the mechanism for remote attestation.The right figure shows the prerequisite and procedure.Unfortunately, current Keystone does not assume the root of trust.So,implementation keeps the device keys on SD card.Our trusted RB has the secure unit as root of trust.And we design remote attestation using it.Up to now we have developed the infrastructure of RISC 5TE and SecureCo processor from hardware and software.The next step is the creation of Poc, proof of concept for real usage.We have some plans.On server, the target is machine learning to protect code and data.On edge, the target is real management and privacy management for smart city.I guess the development of Poc shows the shortness of our security infrastructure.I think security flow is the main concerns but I guess there is other issues.3.Our current implementation does not care about performance.However, real application has performance requirements.The performance evaluation is needed for real usage.I want to conclude my talk with two take away points.1.TE just an isolation indication environment.2.TE needs a root of trust for critical data keeping and processing.In order to achieve the TE requirements,Tradio offers four security technologies by hardware and software.Thank you.
|
Trusted RV is a combination of 4 core 64bit RISC-V (AC: Application Core) and 1 core 32bit RISC-V Secure Coprocessor (SU: Secure Unit). The SU works as a "Root of Trust" and keeps critical information (e.g., Device Key, Certificate). The SU boots before the 64Bit RISC-V and controls it (i.e., secure boot, etc). The communication from the AC to the SU is limited for TEE (i.e., Keystone Encalve) only and keeps security. Trusted RV is implemented on an FPGA (Xilinx VC707) and a simulator. We have developed the Trusted RV which is a combination of 4 core 64bit RISC-V and 1 core 32bit RISC-V Secure Coprocessor. The secure coprocessor works as a "Root of Trust" and keeps critical information (e.g., Device Key, Certificate). The secure coprocessor offers machine mode only and runs Zephyr OS. The Zephyr OS includes crypto and certificate-verification libraries and manages the 64bit RISC-V. The secure coprocessor boots before the 64bit RISC-V and verifies it. The secure coprocessor is tightened with RISC-V Keystone on the 64bit RISC-V to keep security. The 64bit RISC-V runs Keystone as TEE (Trusted Execution Environment), and Secure Monitor (SM) runs on machine mode under the Linux kernel. The secure communication between 64bit RISC-V and Secure Coprocessor is managed by SM only. The communication is passed to a Trusted Application (TA) in a Keystone Enclave only. The secure communication is implanted on the shared memory and mutual interrupts between 64bit RISC-V and Secure Coprocessor. This mechanism is also used for the remote attestation of Keystone which based on the key in the secure coprocessor. The Trusted RV is implemented on a simulator for software development, as well as FPGA (Xilinx VC707).
|
10.5446/52277 (DOI)
|
Welcome to the Zellchen stage of the RC3. This has been a very unexpected year. The new airport in Berlin opened and a global pandemic killed 1.7 million people. As this year is very different, this Congress also is very different for us. This year, the open infrastructure orbit and the About Freedom assembly cluster joined forces to organize this stage here in Berlin together. Before we tell you other information about that, right at the beginning there's a possibility for you to be in the virtual audience. Under this link, audience.rc3.ojo.social, you can join the audience and be displayed with BIMAS installed so that we and other speakers later can see you while having talks. So now more to us, the assembly cluster is organizing this. The About Freedom cluster is a cluster of assemblies that focus around technology, human rights and climate justice. For example, this is what our cluster looked like at the last camp. And do you want to talk about the OEO cluster? Maybe you remember the harbor of the open infrastructure orbit last year and the year before in Leipzig. A lot of ships have been there and now some of the members that took part in the open infrastructure orbit in Leipzig are also here in Berlin. Yeah, this is what the stage looked like last year. And now we come to a short introduction of the members of these organizations that participated this year in building the stage, building this cluster and Andy's going to do that. Yeah, the first one is completely new to the open infrastructure orbit and About Freedom. It's the CCCB, it's the Berlin branch of the Chaos Computer Club, the largest tech association in Europe and maybe in the world. I'm not sure about that. And some members helped us building this wonderful room and the stage and organizing all that you can see us now live. There's also some remote help from C3 space, they helped us in designing things and can't be here because they are not located in Berlin or somewhere nearby and so they maybe have another assembly in the RC3 world. Maybe you can discover them there. And next one is... This one is called Concept Börg Neue Ökonomie or in English something like Concept Space for New Economics. They focus on alternative economic concepts and they try to create alliances, collaborate with social movements and do educational work. And there's open source gardens. Open source seats are seats that everybody can plant and the open source garden project is a community of open source enthusiasts who plant open source seats in their gardens and in community gardens. And another member is the Torre Network, the anonymity network. You maybe all know where you can surf the internet in an anonymous way. They can't be here too but they have Torre tomorrow or the day after tomorrow you will see in our schedule. And now we have a video introduction of all the other organizations that are part of the open infrastructure orbit and about freedom cluster in this year. Hello to all of you. We are Digitale Freiheit. We are a bunch of young people who are committed to privacy and informational self-determination. We want to show that data protection is very important and necessary for a healthy democracy. And mass surveillance, we see a problem that concerns all of us and not only the people with a computer science degree. We started in 2017 as a student organization at university. Now we are a registered non-profit organization. We started with interventions. 
For example, as facial recognition test started at the train station Sudkreuz, we built camera hats and for the behavioral test at Sudkreuz we posed at X-ray skeletons. And as our minister of interior wanted to expand governmental and police hacking, we posed in underpants in front of the ministry. But we also think it is important to bring organizations together and to organize broad protest against current surveillance expansion plans. For example, when there was a draft law that wanted to introduce facial recognition systems at airports and train stations, we initiated the Alliance Gesichterkennung-stopping.de which means something like bandfacialrecognition.de. In that Alliance we demand with other organizations that facial recognition technology has to be banned in public places in Germany. We always work on content production themes like campaigns and texts, but also on things you can simply participate in, like printing t-shirts, organizing art or photo actions, including leaflets of secrets. It's actually quite easy to come by, to just drop by in one of our meetings. We will talk about net politics, but also plan new actions and there's always something to do for somebody who's just starting out and is not super familiar with all the vocabulary and stuff. And on plus, you can learn from us and we can learn from you. Actually we are meeting on JITSEE every Thursday evening and when the whole pandemic situation is over we will also be meeting again at the TU. Drop by, you can find exact dates at our website again. Hello I'm Matthias Kirschner from the Free Software Foundation Europe. In 20 years we have been empowering users to control technology. Our work is based on three pillars, public awareness, policy work and legal work. Free Software, also known as open source software, is used in a lot of places, but people often don't know about it. That's why we spread the word that software freedom means that you can use, study, share and improve it. We run campaigns, give talks and organize information booths all around Europe. We interact with people from diverse backgrounds, often with people who have never heard about free software before. Software freedom is crucial for functional democracy and the distribution of powers. That's why we help politicians and public administrations to understand why free software is important for democratic society. We give feedback to public consultations and answer requests by politicians and public servants. With our public money, public code framework, we provide materials which are successfully used by individuals and organizations to explain free software to politicians. Free software licenses build a solid basis for collaborative software development and ensure that developers work stays free in future. Legal aspects of free software can sometimes seem complex. That's why we provide easy understandable FAQs and guidelines. We also facilitate a legal network where we bring people together to share and exchange knowledge about free software legal and licensing issues. We have come a long way since 2001 and build an amazing community. As a charity, we depend on your support to foster software freedom in the next 20 years, either by donating or by contributing as a volunteer. My first teacher once wrote down an African saying for me. Many small people in many small places do many small things that will change the face of the world. 
Please join us on the long path to software freedom and contribute one small thing so we together can change the face of the world. Welcome to PEP. PEP stands for Pretty Easy Privacy. The PEP Foundation is a non-profit advocating for the right to privacy and right to freedom of information. Our motivation is to turn the cypherpunk movement's dream that everyone should be able to protect their privacy using technical tools into reality. Make end-to-end encryption easy and automatic for everyone everywhere was the goal when we created PEP. We wanted to do for privacy and security what Skype had done for voiceover IP, take a complex technology and make it easy to use for everyone. We designed a solution that works for any transport so it works for email, chat, IoT, financial messages and so on. All our solutions offer fully automated and easy to use peer-to-peer end-to-end encryption in order to defend digital security and preserve privacy by default. We support free and open-source software projects and all of the code we developed such as PEP's Core Technology, SecorioPGP and NUNET are available as free and open-source software on by the PEP Foundation. PEP Security owns the commercial rights to the software owned by the PEP Foundation. You can find the software and the source code at PEP.software and interact with us at PEP.community. PEP is currently available on the following platforms Thunderbird, Outlook, Android and iOS and it is as easy to use as end-to-end encryption in signal for messaging. It also exists as a solution for financial messaging. Thank you for your interest. Please reach out with questions and to establish some privacy in your email. We look forward to engaging with you. European Digital Rights or EDRI for short is Europe's biggest network defending fundamental rights and freedoms online. We're a collective of 44 NGOs, activists, academics and experts advancing digital rights in Europe and beyond. As part of our work to define the acceptable limits of artificial intelligence in a democratic society, we've been investigating and researching uses of biometric technologies such as facial recognition that are already in place in Europe right now. To give just a few of the many, many examples of where this is happening, in Italy, EDRI member Hermi Center has been collaborating with journalists to reveal the fact that these deployments have happened in the absence of safeguards and without a proper basis in law. In Greece, EDRI member Homo Digitalis has pointed to the fact that the European Commission has been experimenting with so-called line detector tests against people on the move at Europe's borders and another EDRI member La Quadrature du Net has been litigating fiercely against authorities, in particular one that deployed facial recognition technologies against young people in schools and they won the case because the use was found to not be proportionate or necessary under EU human rights laws. Part of our work to do this has been in joining together across civil society, across Europe to launch Reclaim Your Face which is our new pan-European campaign to ban biometric mass surveillance. Through this we're calling on European companies and European leaders to be more transparent about why and how they're deploying biometric technologies in our public spaces. It's a big ask and we're going to have lots of exciting actions coming in 2021 which is where we really need your help. 
So if you go to reclaimyourface.eu right now you can sign up to stay informed and you can learn about all the local, national and European actions that will be coming up to make sure that we can ban biometric mass surveillance and protect everyone's digital rights. And with that from all of us at Edgery we'd like to wish you a great RC3. Hi everyone, my name is Janna. I'm a member of the Digital Society Switzerland, a Swiss NGO dedicated to strengthening everybody's fundamental rights. We're currently around 700 members. We're mostly volunteers like me from all walks of life and backgrounds and interests who want to help achieve a more sustainable, democratic and free public domain in our increasingly digitized world. We inform and counsel individuals and institutions. We carry out risk benefit analysis of emerging technologies also related to human rights issues. We offer services like tour exit notes, software projects and workshops to digital privacy. We want to be a critical voice to public policy and a partner for informed decision makers in the legislative process. We also bring matters to court to strengthen civil society when necessary. We're engaged in sustaining the principles of net neutrality, strengthening data protection and security and are working on modernizing Swiss copyright law. We support open data, freedom of opinion and freedom of expression. So consequently, we counter excessive state surveillance, censorship and network barring. We're organized in different working groups. If any of our topics sparked your interest, make sure to subscribe to our newsletter or get in touch. You will find the contact details on our website. We will connect you with the right group of people. Thanks for watching. 2020 is different, welcome to the remote assembly hall and the about freedom space. I'm Sarah and that's my colleague Lea. We're both lawyers at GFF, Gesellschaft für Freiheitsrechte. Let's get right to it. Lea, what is GFF and what do we do? GFF is a Berlin based human rights organization and we focus on strategic litigation. Which makes us a little bit special. Strategic litigation means that we use legal means, generally lawsuits and litigation to strengthen human rights and to ultimately push for social and political change. Can you give us an example? I can imagine many people here know us from our work on digital rights. So maybe you could give an example of a case you've worked on this year beyond that field. A subject that I've worked on a lot this year is the discrimination of queer families. So in the traditional example, if a woman gives birth to a child, she's recognized as the first parent. And then if she's married to a man, this man is automatically recognized as the second parent and this is regardless of whether or not he's actually the biological parent. So if the couple uses sperm donor, the husband is nonetheless recognized as the second parent. But everything changes if that woman is married to a woman, a non-binary person or a trans man. In all those cases, the child only has one parent and the second parent is not legally recognized. And the only way to change this is for the couple to go through an adoption procedure. And that's a lengthy procedure which takes time and it means that a government agency comes to the family and may ask questions about their finances, their health or their relationship to the child. That seems like a clear gender-based discrimination. 
So as we're a strategic litigation organization, let me guess, you're taking it to court. Yes, we've taken it to court together with two wonderful families and two very good lawyers. We are bringing this before instance courts right now and we're hoping to bring this before the constitutional court. And if we get a positive decision by the constitutional court, then that'll change not only the life of the claimant, but the life of many rainbow families out there. That sounds great. So is strategic litigation always about winning a case or is there more to it? I think there's much more to it than just winning the individual case because the underlying issue is much greater and it's the issue of discrimination of queer people in general. So in our casework at GFF, we try to really look at three aspects in order to make a good strategic litigation. In number one, I think it's very important to make the voices of those heard that are actually affected. So in our cases, the two queer families that we're fighting with, they have talked to media and they have had their story heard and impact and touched many people out there. And then secondly, strategic litigation is only one way of going about the bigger idea of challenging society norms, for example. So we partner up with civil society organizations, in this case, major LGBTQI organizations and initiatives and they work on these subjects in different ways, much more broadly, but also just with different methods. And then thirdly, strategic litigation takes time and we're prepared to stick with it. To continue this important work, we need your support. So join us in our fight for human rights and become a supporting member. Hey, I'm Lisa and I'm a freelancer. I wanted to finally make it clear that it's not just about free access to free internet, but about much more. We build a free and self-proclaimed network that can be connected to all the people in my environment and that can be easily turned off by anyone. My freelancer, Uta, is running with a free software. He connects with other freelancers from window-to-dash to window-to-dash, from roof to roof. From many small radio connections, such a large decentralized mesh network is created, in which the data is further enriched from node to node. But I also gave away a part of my internet connection, which I don't have available on the other free-link network. An internet tunnel ensures that the data is converted and I don't have to worry about the stupid stuff. Do it like me, so that we have an open and free network, in which we can communicate and share data through our own network. Go to the free-link meeting, founded a group, talk to other freelancers, be there and open it. We are a humanitarian aid organization that has been around since 2014. We are mainly active in northern Syria, currently operating a field field house in the Camp Albul. Hello, my name is Teigrit. Right now I am in the north-east of Syria. Here in Malhol Camp we build together our local partner a field field house. In order to provide every of the 75,000 people who live here to provide medical care, we build a rescue structure for the camp near the field house. My job here is to provide the employees of the ambulances and the management to recognize the medical needs and to teach them about the life-saving immediate consequences of school. My name is Vanessa. I am the workshop manager of CARDOS and we are here in the Crisis Response Makerspace. Here we realize our different ideas and concepts for humanitarian aid. 
Here we realize a fire pit for medical cases. The second one is a use toilet. In addition, we build a robust and repairable vital parameter monitoring for the use. All of our projects are open source and are free to access. In our workshop we have different work areas, for example a work area for metal work, a work area for tableware work, an electrical engineering area and a CNC. The TIF was founded in 1984, more than 35 years ago, from a historical situation in which it is possible to break the silence of the silence, which was largely involved in the development of automated and informatized military service. The founding members made a double-close, open resistance. They wanted to use the information and communication technology, especially as a means of understanding the people. We want the information technology to be in the service of a life-threatening world. That is why we warn the public about developments in our area of expertise that we are concerned about. That is why we put our own ideas into the possible dangers. That is why we fight against the use of information technology for control and surveillance. That is why we engage in military applications for a reformation of the informatics. That is why we theme diversity and barrier freedom for the computer design and use. Since 1988, we have published four times a year our scientific journal, The FIFT Communication. We have been organizing the FIFT Con and are creating a room for you to discuss and discuss current topics from different perspectives. Since 2010, we have been giving the FIFT Student Prize for the outstanding work of the information and communication sector. We also won the Weizenbaum Medal in Wolfgang Koi for the first time in 2018. With the Witz-Wolme-Conference 2018, we provided support for the techies with the OECOS, the Witz-Wolme-Conference and the Witz-Wolme-Conference. With open letters and names, we got into the debate. In 2020, the name of the exhibition has particularly been noticed in the end-to-end resolution. Before the development of the Corona-Bahn from the federal government, we published a full data collection of over 100 pages for such an app and set up decisive impulses. We have set up a 2020 for right-wing freedom and press freedom by demanding the scandalous and human-intensive conditions of Johnian Assange. We collect signatures on Assange.fiftee and initiated a weekly, consistent, sportive activity with political content in the area of the British and US American properties. After the 2019 federal police almost all participants had filmed a demo in the Kruhnewald without permission, we now helped the federal police complain. Under the motto, video clips against video surveillance, there was an advanced calendar with the finest content from the French management Kruhnewald under videoclage.fiftee. We have been a part of the concrete CCC for over 10 years. We look forward to meeting you in the 2D world. We have a workshop area on the workshop page on rc3.world. They say, they are talking about themselves. So please read that if you are interested in them. The forum Informatica in Forfrieden and Gesellschaftliche Verantwortung means in English something like forum of computer scientists for peace and social responsibility. They want to focus on the effects that information technology has on society. 
For that, they do public relations work, they do consulting, they produce text studies, they publish a journal, they do a conference that was shown in the video, they give out a study prize, the Weizenbaum study prize, and they are doing a lawsuit against, I think, the police for video surveying an entire demonstration. I wanted to say something about Freifunk, because the video was in German too. Maybe there were subtitles. Freifunk stands for free and open wireless networks. Freifunk is not just an organization, it's a whole group of communities all over Germany. One of the points they have all together is the networks are open for everyone and the initiatives are non-commercial and there are also some other partner organizations in other countries in Spain, Italy, Argentina and so on. What can you expect from the open infrastructure orbit the next days? We do have a FAR plan, the schedule for our next days you can find at farplan.oero.social. On this FAR plan you will find a variety of talks and workshops we do have in the next days. For example, the next talk that follows here is a talk about face recognition and it's called Reclaim Your Face. We do have workshops, for example, there is a workshop about RFID things and another workshop that analyzes YouTube algorithms with you and you can find out some interesting things. There are other talks about video surveillance, net politics, Kardo's has a talk about some devices they built. I think there are a lot of interesting things you can find under this URL and enjoy it. All the videos are streamed via this channel and they will be recorded and you can also look up them afterwards. Now, how does all this work in our orbit this time? It's a bit different from the last year so obviously there is no audience but our audience, you are hopefully listening to us anywhere in the world. We thought of some ways how you can participate still even under these different circumstances. The first step is finding the talks you want to listen to and the Farplan, as Andy already told you, is on farplan.ou.social so there you can look at the talks and look at the descriptions and see what talks you want to listen to. The second step is to be in the audience. We have the BIMAS setup that we already mentioned. Please join the JITZI, I think it's the JITZI room under that link so that you can be displayed in this room and be seen by the speakers so they don't feel that alone standing here on the stage because it's really at many people in this hall and I think it's much easier for speakers to have their presentations if they see some visual feedback of people actually listening. So that's something what you can do and how you can participate in it and help speakers make it feel more like an actual conference. The third step is if you have a question, of course you want to ask that question and you can. So we have three ways of contacting us. There's an ISEE channel, it's called hashtag RC3-Oyo at the hack and RSEE server. And on Twitter and Macedon you can contact send us questions over the hashtag RC3-Oyo. And these communications channels are monitored by people here with us and the questions are displayed for the Q&A sessions after all the talks so that's the way how you can ask questions and then these questions will be answered by the speakers. 
Step four, after the talk in a normal conference there would be some people going to the speaker asking them questions we don't want to miss out on that too so there's a way to join a discussion round with the speaker after the talk which is hosted on discussion.rc3.oyo.social where you can join that room after a talk and ask some additional questions to the speaker and yeah, have a conversation. That's it for the talks. For the workshops there's also a URL where you can find the workshop space which is unfortunately only for people who have tickets because it's hosted in a big group button room which is hosted in the RC3 world so you need access to that to get into the workshops. You find the workshops also in the far plan and if you find a workshop that looks interesting to you just drop by and go to that URL and you will be guided to the right workshop room. That's basically all we have to say. We wish you a happy and nice and eventful congress. Stay safe, stay home, and see you digitally in the RC3 world. Thank you.
|
Wir eröffnen unseren Space.
|
10.5446/52280 (DOI)
|
Moved is an open source enthusiast and networking, mesh networking enthusiast. He was working in the fly-from community when he realized that mesh networking protocols are not as efficient as they could be and that they limit how many nodes can be part of the network. Therefore, he started to simulate mesh networking protocols and today he will talk about that. If you're listening to this and think maybe I can be more part of the audience, we invite you to our virtual audience, we have a beamer here so you can watch the talk and the speaker can see your face and your reactions to that. That is available in a jigsie room under audience.rc3.oio.social. Please join there and there's nothing more to say for me. We're really glad to have you here. Thank you, Victor. Okay, hello everyone. This is my talk about how I try to emulate huge mesh networks for the purpose of making better ones and of course start with taking what's already there, putting it into some tests and testing it. And yes, so let's start. I mean, okay. Yeah, my name is Moved as I have been already told. My family name is very funny and very confusing, which is awesome. And I'm a long time free and liberal open source programmer so I do a lot of software related to distributed networks, mashing, security even and most of my stuff you can find on GitHub. And I've been with the fry phone community since 2011, 2012 around that time. And I found it awesome to flash, put them out there, let's create a big huge mesh network, at least that was my dream and of course to connect people so they can talk to each other, without relying on other proprietary infrastructure from Vodafone, Telecom and the likes or even to get even internet to refugee homes, stuff like that. So yeah, so that was my idea. I mostly, this is idea of making big, huge networks that connect everyone but are still distributed and not centralized. And over time I learned that, well, we can get to a few hundred nodes in our community but then it gets problematic since we are using Wi-Fi a lot and we also, these mesh networking protocols that we use, they're quite, well, they're okay, they're good in comparison but they're still a bit inefficient for what we try to achieve which is much, much harder than compared to what you want to achieve when you have a data center or stuff like that where you have gigabit links and not like PASCII or horrible Wi-Fi links that break down every few minutes if you're really unlucky. So yeah, in general I'm a mesh routing enthusiast and I think we need better protocols. So I set out on this journey to create better ones to test the protocols. So of course this got me drawn to other people with similar goals and this brought me to the battle mesh. For those who don't know, the battle mesh is a yearly conference in Europe. 
Sometimes it's in, it was in Leipzig, it was in Vienna, it was in, well, different European countries, Slovenia I remember and there are a lot of like 40, 50 people maybe and they bring a few routers and then they do the BAPL where they put their favorite mesh routing protocol on these routers and then they do tests, throughput and stuff and well, yeah, then they see in the end after some luck, they have some graph as they can say, yeah, this one's better than the other one but that's it and of course it takes a lot of time to set this up then of course when you change just different routing protocol then you, yeah, might be unlucky and somebody is using a microwave and this influences the results very badly and also my personal perspective is that I really want to have like something that is more efficient and more scalable so throughput is maybe not the most important concern but it's connected since if you don't have much throughput then scalability will be a bit harder. So yeah, so this is not really what I wanted to test, I mean it's still interesting but yeah, and scalability it's a bit hard to get a few thousand routers to this event and I mean it's just too costly, just too much work. So with the corona of course every thing became virtual and I thought okay let's do a virtual battle, I mean it fits, I mean I was already creating some software to do that for myself and I thought okay let's do a virtual battle, this has of course drawbacks that you don't really have real hardware, real Wi-Fi interference patterns and stuff like that but of course I thought okay let's keep it simple, what I can achieve right now, at least throw everything out like if you're in a balloon and you're going down then you throw everything out until the balloon flies so that was basically what I was doing. So I wrote myself a tool first to do this virtual mesh route, virtual networks where I can run like Wi-Fi protocols on each node, it's like you have a Fritz box or some Wi-Fi router, you put some software on it and it has ability to send some packets and then the packets will be transmitted and received by other nodes in reach and then processed and in this way they need to organize themselves so you can reach everybody, without much delay, without dropping too much packages and stuff. So what I did was, some tool it is called meshnet lab, it doesn't really matter, there's some other software out there, some is very similar but mostly they do containers, when you think of Kubernetes and stuff, since I only had a very small laptop, I don't have much resources but those are planned to have like emulating at least a few hundred nodes, this was not an option, I don't have that much RAM or CPUs or servers, so I thought okay let's throw everything out, especially containers. So what I did, I was using Linux network namespaces which is really awesome, it's like one of the building blocks of containers on Linux but if I'm willing to throw a lot of stuff I can only use this one and script everything with Python and use IP commands, ping, SSH, stuff like that, SSH for running it on two laptops or even more and it's on the internet, the link will be in the end but I think it's important that it's a CC0 license, you can do everything you like with it. 
So just give me maybe a little bit of an introduction to Linux network namespaces which is the core of what I do, yes I've told you it's a building block for AXE, Docker and so on, and of course on Linux you have other namespaces like file namespaces, stuff like that to do some kind of virtualization but I threw everything out and just use the network namespaces, so you can already do that if you have a recent Linux kernel with this IP command, you can use this IP net NS which stands for network namespaces and you can add some namespace, give it a name like in the slide here, we create a namespace called foo which has its own network namespace and then you can list it and then you can execute arbitrary commands in there. Of course if you do something like LS to list all the files since this is a network namespace you only see what you have on your disk, so it's not encapsulated on this file system level but just different networks there. So if I do like in this namespaces IPA like list all interfaces and addresses, I will see only this local host interface which isn't the one I usually see, so this is a different one. And then you can go next step, you can create virtual cables which you see by having two interfaces, so if you stick in one interface one packet it just comes out on the other end of this cable on the other interface and then you can put these interfaces into these namespaces and connect these virtual nodes basically. So yeah, okay here are some commands if you want to try it at home, it's not dangerous, it's fun, where you create a cable and then you can stick it in different namespaces and then you see okay we have this interface and this one namespace and the other namespace and of course you can start some arbitrary network program in this network namespace with this IP netns exec and then the name of the namespace and then just the command where you start your program and it will just see these interfaces and have this different TCP IP, this network stack. So this is just the building block of what I fused. So and this is like very efficient, so I was able to emulate a lot of nodes and how much we will see soon. Yeah, and this whole meshnet lab, my little program consists of a few pricing scripts, one that takes a JSON file and sets everything up in a way, so I have this network namespace, every network namespace is one node and these are connected by table and I can have JSON file where I define okay this is the cable with 100 embed, some packet loss, stuff like that and connect these and then I have some magic there which I will show you on the next slide that will show you a bit how I managed to make it more Wi-Fi like. So every node will have one interface, you send some pink, some packet there and it will arrive in other namespaces. Not just one but multiple, it just depends how I define my network via this JSON file. And then I can start like for example, Batman advanced which is a common routing protocol for free-funk networks in there and everyone and yeah, they will see each other and do mashing and then I can do pings and stuff like that. And something that is, I should note is that this is an emulation, it's not a discrete event simulation so that means just by throwing more CPU power on it, it won't run faster. Also I have this problem where when I send a lot of packets then it might influence random things in other parts of my virtual network. So this is really my bad for emulating or testing through portals. 
But if you keep your traffic low to certain amounts then you can get very, very big setups. So bit of the internals before we go to the results. Then yeah, basically you have this namespace, so let's say you have like a network with node A, B and C and B is connected to A and C and then internally I have this setup this way where we have like three namespaces. Every namespaces have an interface in it so every application you start in this network with namespaces will only see its own interface and this is one connected to the other end to the two different namespace which is basically my VEDROP for all the cables which I just called switch which I have stuffed full of bridges and bridges that are connected to each other according to the JSON file or according to how I want these nodes to be connected and what I did is that I dumped down those bridges to be hubs. So hubs are like, well as a network engineer you might still have a hub somewhere, probably not anymore but like ten years ago it was still valuable if you wanted to do like packet inspection because a hub when you send a packet, I mean it's like a switch but if you send a packet on one port it will come out on every other port. So this is like a dump switch that doesn't really remember where to send a packet to if it arrived. So you can like put in two devices and they can interact and on all the other ports you can like listen in on all the packets but nowadays you have switches where you can just configure them so they will do the same thing but like ten years ago these like old devices were still gold if you had wanted to listen in on traffic. So this is what I did so since I wanted to have something like a broadcast domain where you just send some packets that might not even be a broadcast packet but maybe just a ping or so but this packet will still be received on all neighboring nodes I mean in this topology. So if B sends out ping on its uplink interface it will be received on the uplink interface on network namespaces A and C. So this is basically the magic source I've used here and the switch namespace, yeah I've used this one so I don't pollute my usual network primary namespace so if I do on my console just IPA I don't have like thousands of bridges listed there which could be really messy. Okay so let's get started with the tests and so I've already told you that measuring throughput isn't a really good idea with these kinds of tests but what I wanted to do first of course is to benchmark how many packets can I get through until everything gets screwed so maybe packets getting dropped for no apparent reason and of course convergence is something that I can test which means that in the network every node, every state of this routing protocol instances they know about all other nodes if they don't do anything stupid I mean and so if I change something then it needs to converge again so every node has like a coherent view on the network and can route according to its routing protocol. 
So if I change a lot this is called mobility so you can think of like bifuratos on cars I don't know or just say get turned on and off and connect or connect to other devices and of course my favorite topic is scalability so this is really what I want to test here and also try to test usually IPv6 not all routing protocols have a working implementation for IPv6 but well I tried and of course some limitations I've already told you is real time so it might take some time I can't just throw faster processes on it but it's of course helpful and I have to be very careful for I mean performance issues like I owe limitations to influence the results that's something I want to avoid of course and I also can't really emulate Vifor interference patterns where you have a Viforata it sends something but some other nodes that was previously unseen sends at the same time and then it both trashes the packet in air stuff like that I don't have that but well it's not impossible to do let's see there are other projects out there that might do that very well I will have to look and of course since it's not real time the testing duration can be quite long so I have some tests that run under an hour but also tests that I mean some of the slides I will show you they took like one two weeks to produce so it was around the clock the CPU was like on mostly on like 5% idling and then the networks got bigger and bigger and you test and then the CPU got quite stressed at the end and that's where of course you have to be very wary with your results if they're really like you're showing what what you hope and not some interference with the CPU or IO controller so the first thing I do is benchmarking of course but I will have a slide there in a few minutes and one of the tests I have to have to do in like yeah this is a bit was a bit annoying because Batman advanced for those who come from the Freifornt community it's a working protocol that's used a lot there but it's also implemented as a kernel module and they use like a single threaded primitive there that it couldn't get rid of so what I had to do for for this was to get a server with like a lot of CPUs and for every CPU or two CPUs I use the virtual machine and run my simulation there and connected all this virtual machines over tunnels and yeah but I got it working it was a bit bit hard to do and I had a lot of help so thank you for that and so I have some results there as well that are comparable and okay now of course to the routing protocols I actually tried to test it successfully where for example ICTRASIL which is mostly a spanning tree protocol with cryptography so all IP addresses there are like derived from a secret key or public key at least there's a cryptographic key so you can't re-choose your IP address and it has spanning tree architecture which tries to make a spanning tree out of everything like OSPF maybe but it's more mesh like but it's mostly used for over the internet connections it's interesting and Batman advanced I've already told you it's used mostly by Freifunk communities as far as I know yeah it's for really mobile mesh networks and then there's Babel which is also for these cases but also focused on over the internet and you couldn't push out routes so it's more you can integrate it very well in professional setups when you are a network engineer stuff like that and working on a ISP and then there's OSR1 which is also previously heavily used by the Freifunk community which only really supports IPv4 I tried IPv6 support but it was 
broken but there's a newer version OSR2 which worked quite well in that regard and then there's of course BNX6 and BNX7 which are like descendants of Batman advanced but this is in user space and they have a different protocol and then there's CGDNS which is a bit old also like comparable comparable to ICDRAZIL, CGDNS is like the kind of a predecessor of ICDRAZIL I would say and then I tried also OSPF which I didn't get to work honestly maybe someone can help me with that because it's a bit tricky because I don't have like a I can't manually configure all the nodes I mean it's meant to be a ad hoc mesh network so every node wakes up, starts up and sees packets and coming and needs to figure out where it is, who are the neighbors and stuff like that and OSPF is mostly well it's a bit hard to configure that way so if somebody can drop me a hint that would be awesome so let's get to some actual results so I've told you first we need to do some benchmarking to see how much nodes can we run before we get into trouble so I said okay I tried on a laptop on a server and what I did was I created some network I think this is like a grid of nodes and I tried with different kinds of nodes and did some pings and then I saw okay how many packets of these pings arrive at randomly selected pairs of nodes so I've selected some random pair of nodes that are not neighbors and of course not itself and then I sent a ping and said okay if it arrives then all is good and of course some of course doesn't arrive so then I get some percentage of the arrival there and you can see that when I get to my laptop this one was my old one when I get to like 120 nodes then one of the routing protocols in this case Batman advanced experience packets to be dropped so I knew okay so much I can go so far if I don't do much traffic and to be safe on the laptop I would go maybe with 100 nodes tops and for the server it has the more beefier CPU it was the old one I think from 2012 quite old but back then top notch I got up to 250 nodes so I had a good like ballpark where I can scale this up so my setup also I can distribute over different computers so that was very helpful and a bit to my measurements I mean I've already told you how I measure this packet arrival in percentage so 100% means every ping arrived usually I do like I don't know 200 pings and when half of them arrives then I say okay it's 50% arrival and of course I pick random pairs and these are not meant to be neighbors and of course paths has to in theory exist and yeah but I didn't measure throughput and I didn't try to press that because that would harm my results okay and yeah I also didn't do much packet loss because yeah I started with this setup and maybe at a later time I want to introduce something like jitter that you usually see with Wi-Fi packet loss and stuff like that but for now every test you will see here a result is based on an assumption that I have like a hundred megabit things between the nodes with one millisecond delay of course unless stated otherwise okay so for convergence so I've already told you convergence is measured when I well in this case when I change then something changes and then I measure how the connectivity changes I mean how many of these pings arrive and in this case what I did was I set up this whole structure with this namespaces then I started like at time zero here on the x-axis all the nodes also all the protocol software I mean the same one of course in every namespace and then I did and after two seconds wait I did 
pings and I saw for example here let's let's say use Batman here which is this light blue line so I did this every two seconds I think and we see up until maybe 27 seconds after start none of these pings arrived but then it got to 100% very quickly so and this could be explained someone from the developer team explained it to me so I mean this is not like something like bad so it's just yeah timings and just starting up the Batman advance instance and of course all the instance here so and testing was basically I started so created this network then I started all the clients waited two seconds measured the pings and then draw the point and then I did everything all over again with the exception that I know waited four seconds then I teared down everything again started up everything again then waited six seconds so that takes a long time but in the end I got this draft and it's not really that important how fast these protocols go up here but it's still some interesting thing that shows you some implementation some timings that are part of the source code part of the default configuration so this is not like saying okay this protocol is very lazy or very bad or slow of course you could maybe do it better but in practice it doesn't really matter because this is just like start up time of course at point zero all the instances were already started or just started but then I at this point I got like counting the seconds and one oddity that I found here and it was also reproducible was the CGDNS I don't know what they do at 30 seconds but the convergence goes down so not many pings arrived here I don't know why but this is well it's interesting to see and of course all this what I test was on a grid structure so I have like notes like yeah on a chessboard and they have like four neighbors yeah okay but of course I did the same test not only on the grid but also and on a what I call a random tree it's like a tree structure but it's not balanced I have a picture of it in later slides but it's still I think interesting okay so we have this on it on a random tree which is basically the same result then I did it on a line which is pathological pathological because usually in reality you don't see like mesh networks that are just like a like a line one note and the next line and it only sees it to neighbors and then you have like yeah got up to like 60 nodes so usually you don't have that in reality but it was still fun to try out and you see for example bet advanced and other protocols have a lot of problems there even after like 60 seconds oh no I said 60 nodes but it's 50 nodes that's what it says here in the small font so that must be true and yeah and you see that bubble for example sorry that advance doesn't go up to 100% which is understandable when you see that the matrix that bubble you that Batman advanced uses doesn't allow so much hops anyway so it can't get up to 100% so it's just some interesting fact but in reality you don't have that many hops in your networks network so if you have like 50 nodes I think on a grid I think this is actually 49 like 7 times 7 and then you have like 7 square roots yeah around maybe 7 hops and so but some of these can take a longer distance and yeah that's where they get dropped okay so yeah this is basically some results here nothing really interesting now I think it hope that we'll get interested more interesting and there I have to change actually because it doesn't show the animation in the presentation mode and here I tested the mobility and you see 
Next I tested mobility, and here I actually have to switch, because the animation doesn't show in presentation mode. For this graph I have slides like this coming next, and if you're too lazy to look at all these graphs, just look at the bar chart; it's the summary of what you see in the graph above. What we see on the right is an animation of how the network moves. What I did here: in the station JSON file I gave every node a coordinate, a random GPS coordinate, and then I said, when two nodes are within a distance of a few hundred metres, or 150 metres, I don't remember exactly, I make a connection. Then every two seconds or so I move these nodes around randomly in this virtual space, check which nodes are near each other, and if they're in range, I make the connection. So it's just connection or no connection. What we see here is that there's not much mobility, not many changes, but we can already see that at least three protocols have a few problems, because they're not really optimized for mobility; for everything else, basically 100% of all pings arrived. Now let's get to the next slide... maybe I can do it that way... no, that didn't work, that was a new slide, and I think I need to restart the presentation because it has created a new one... okay, why not, recover, okay, I can do it. I want to go there, because here we have much more mobility, and you see that all the other protocols except these three are also having lots more problems; you can see it's very chaotic here. But still, most of the mesh routing protocols that are meant for, well, optimized for mobility are holding up very well, and nearly every packet arrives, except for these three protocols. Let me see how I can show this from the first slide... yep, okay, it doesn't show here, okay, so next slide. So what are the results here? These three were CJDNS, Yggdrasil and BMX7, which are having some problems here. That's not that surprising, except maybe for BMX7, but maybe they could have done something via configuration; I always used the default configurations. This is a different mobility scenario where I also measured the traffic, and what you see here, I can't really see it on my screen, it's just too small, is a different, high-mobility scenario, and you see that Yggdrasil, for example, creates a lot of traffic but doesn't route very well here, though I've been told by the developers that they're improving that situation. All the other protocols here have low overhead, low traffic, and batman-adv and BMX6 and BMX7 are doing quite well. These are just the results, so I guess that's not a surprise.
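The mobility model described above (random positions, a hard connect/disconnect threshold, periodic random movement) can be sketched roughly as follows. The range, step size and area values are made-up assumptions for illustration, not the values used in the real tests, and the real meshnet-lab scripts may do this quite differently.

```python
import math
import random

RANGE_M = 150.0   # nodes closer than this get a link (the talk mentions roughly 100-150 m)
STEP_M = 20.0     # how far a node may wander per step (invented value)
AREA_M = 1000.0   # side length of the square world the nodes move in (invented value)

def random_positions(num_nodes):
    """Give every node a random coordinate inside the area."""
    return {n: (random.uniform(0, AREA_M), random.uniform(0, AREA_M)) for n in range(num_nodes)}

def move(positions):
    """Move every node a small random amount, staying inside the area."""
    for n, (x, y) in positions.items():
        x = min(max(x + random.uniform(-STEP_M, STEP_M), 0.0), AREA_M)
        y = min(max(y + random.uniform(-STEP_M, STEP_M), 0.0), AREA_M)
        positions[n] = (x, y)

def links(positions):
    """Binary connectivity: a link exists iff two nodes are within RANGE_M of each other."""
    nodes = list(positions)
    result = set()
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            (x1, y1), (x2, y2) = positions[a], positions[b]
            if math.hypot(x1 - x2, y1 - y2) <= RANGE_M:
                result.add((a, b))
    return result

if __name__ == "__main__":
    pos = random_positions(50)
    for step in range(5):          # the real test re-applies this roughly every two seconds
        move(pos)
        print(f"step {step}: {len(links(pos))} links")
```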
Now to my favourite result: scalability. I got hold of a server with a lot of power, so I was able to simulate up to 2000 nodes. For this one I did a thousand, just because 2000 nodes in one line is horrible for your routing protocols, and in reality that doesn't happen. What you can see here is that as you go up with the number of nodes, the traffic grows linearly for most protocols; for some, for Batman for example, it stays the same. But we also see that these dotted lines go down very quickly, and that is shown on the right side: most of the pings don't arrive. So basically this result doesn't tell us much, because the pings don't really arrive, the packet loss is too high, and we can't say one protocol is better than another; they're all bad in this scenario. But in reality you don't have a thousand nodes in one line. On a grid it gets more interesting. Most of the routing protocols are doing quite well in this scenario; I got up to 2000 nodes on a grid and most of the pings arrived. What is most interesting here is that for Batman it goes up and then down, which means the amount of traffic arriving at a single node on average is going down, and I actually don't know why, but I hope to find out at some point. You can also see, it's a bit cramped, that the number of packets that arrive also goes down after this high point, which basically means batman-adv has some problems here. I don't know what it is, maybe it's the CPU, since I had to do some special things with Batman here. But in general I can say that the traffic on a link, on average, grows linearly with the size of the network on the grid. In some cases I don't really know what's happening, there are a lot of open questions, but I tried to push it by throwing out a lot of stuff, and these are some of the average traffic results I measured. You can say that OLSR1 is doing quite well; I've omitted Batman here because the packet loss was just too high, so I couldn't get meaningful results. I also did this on a random tree, and here we again see a lot of packet loss for many protocols, which apparently don't scale to these sizes, and we also get this hill structure for batman-adv, which is probably the problem with its metric. And that's basically how this random tree looks: it's just not a balanced tree, but the thing is, there are no loops, which in reality you never have. Usually mesh routing protocols try to avoid loops, and in this case they don't have to, so I was expecting that they might do well, but of course there are a lot of other factors to take into account. I also tried this on Freifunk network topologies I downloaded from the Freifunk community sites. I've included this slide, but the results are not very conclusive, so you can't really say that one is better than the others. I hope to get my hands on more hardware so I can do more extensive tests that are more meaningful, but these at least are interesting. And with this I would like to conclude my talk. As I've told you, the project is on GitHub, and so are the results; there are also scripts you can run: there's a tests folder, there's a Python script for each test, you can run it, and there's another one that uses gnuplot to produce exactly the images I have shown, so you can just try it for yourself and have fun. I hope to use this tool to compare other routing protocols, maybe to create my own and see how it does against the others before I go to actual hardware, and as I've told you, I'm more interested in scalability for now. With this I would like to conclude my talk, thank you for watching, and if you have questions, now's the time.
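As a rough illustration of the topologies used in the talk (grid, line, random tree), here is a toy generator that emits link lists as JSON. It is only a sketch of the idea; the actual graph-file format that meshnet-lab expects may well differ from this.

```python
import json
import random

def lattice(width, height):
    """4-neighbour grid: node (x, y) links to its right and its lower neighbour."""
    def node(x, y):
        return y * width + x
    links = []
    for y in range(height):
        for x in range(width):
            if x + 1 < width:
                links.append({"source": node(x, y), "target": node(x + 1, y)})
            if y + 1 < height:
                links.append({"source": node(x, y), "target": node(x, y + 1)})
    return links

def line(n):
    """Pathological chain: node i only sees i-1 and i+1."""
    return [{"source": i, "target": i + 1} for i in range(n - 1)]

def random_tree(n):
    """Unbalanced tree: every new node attaches to a randomly chosen earlier node."""
    return [{"source": random.randrange(i), "target": i} for i in range(1, n)]

if __name__ == "__main__":
    # A 7x7 grid, i.e. the 49-node case mentioned in the talk.
    print(json.dumps({"links": lattice(7, 7)})[:200], "...")
```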
Thank you very much. Okay, thanks for the presentation, we have a question. I brought my laptop; questions are arriving from the internet. The question is about all these nice graphs: do you have error bars as well? I guess you repeated the measurements more than once; is it worth looking at that, outliers and so on? Yes, I did error bars in the beginning, but since one run of the entire graph took hours, it was very tiresome. In the beginning I did 10 iterations of each graph, then I noticed some bug, then I had to fix it, then I tried again, and that's how the weeks passed. Now it works, but still, some tests, like the scalability tests, take maybe a week. I could of course do it for a single routing protocol, which is much quicker, but doing 10 iterations just for your error graph is too tedious. I would like to have them, and what I can say is that the error graphs showed not that much variability, so most of these results, quality-wise, are the same if you repeat them. Good, thank you. Another question from the internet that just arrived: why did you choose to cook your own protocol testbed? Thank you, that's a good question. The thing is, as I said in the beginning, most testbeds I found on the internet, and I have a list of other projects doing similar stuff at the bottom of my meshnet-lab project page, use things like Docker images or Kubernetes, and then I run into the problem that I don't have the resources to run all these multiple Linux kernels. As I said, I use this analogy of being in an airship or a balloon going down, so I throw everything out just to get to this number of nodes, while still hoping to get useful results. That's what I did. A few weeks ago I actually found another project, a mininet lab, that I have to look into further; I thought they also used containers, which would be too heavyweight, but somebody told me they don't, so I might have to look at that. I think it's already listed on my project page. So yes, I would like to use other testbeds, but of course now I've written my own; yes, reinventing the wheel, but you learn a lot. I tried the others, but I figured out it was way over my head with the resources I don't have. Okay, thank you. From the internet I don't see anything else right now, but I also wrote down questions. One thing I really noticed is that the CJDNS protocol, in all the build-up graphs, breaks down again around 20 seconds. Why is that, did you look into that? Actually, no. I only know that it was reproducible and that I could rule out CPU effects, which I was very cautious about, because they might ruin my results; so I basically measure how high the CPU load is, and it depends of course on the routing protocol implementation. That's something I try to avoid. With CJDNS and other protocols I sometimes asked people who know much more about them why they think this might happen, and I got replies like: yes, there's some timer, or there's some feature you can turn off and then you won't see it or the outcome will be a bit different, or you can adjust the metric. And of course, if you have a really big network, the metric has to be able to cover those distances, because for routing protocols the metric, say a hop metric, says the maximum hop count is 30, so after 30 hops the packet is simply dropped. That's it, because in reality you don't have such crazy topologies. I hope that answers your question.
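Returning to the error-bar question above: one lightweight way to handle repeated runs is to collect all values per x-position and report mean and standard deviation, which gnuplot can then draw as error bars. This is only a generic sketch of that aggregation step, not part of the project's actual scripts.

```python
from statistics import mean, stdev

def aggregate(runs):
    """Collapse repeated runs into (x, mean, standard deviation) triples.
    `runs` is a list of runs; each run is a list of (x, value) pairs."""
    by_x = {}
    for run in runs:
        for x, value in run:
            by_x.setdefault(x, []).append(value)
    return [(x, mean(vals), stdev(vals) if len(vals) > 1 else 0.0)
            for x, vals in sorted(by_x.items())]

if __name__ == "__main__":
    runs = [
        [(2, 0.10), (4, 0.55), (6, 0.95)],
        [(2, 0.12), (4, 0.50), (6, 0.97)],
        [(2, 0.08), (4, 0.60), (6, 0.93)],
    ]
    # These three columns can be fed to gnuplot's "with yerrorbars".
    for x, m, s in aggregate(runs):
        print(f"{x} {m:.3f} {s:.3f}")
```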
Okay, cool, thanks. Another thing I thought was interesting: in the mobility graphs you just drop the connections, right? So when the distance is longer than, I think, 60 meters, there's no degradation of the Wi-Fi quality, no growing packet loss. Do you think it might change something if you had the ability to factor that into your simulation? I don't know, I would like to check that. I have it already prepared, not the graphs, but the test, I just have to run it; but of course Christmas time is a busy time, so I haven't had time yet to test that. But yes, I have this prepared, I just have to run it, and of course people can run it themselves too, I just have to point out where to enable it.
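A gradual link-quality model of the kind discussed here could, for example, map link distance to a packet-loss probability instead of a hard cutoff. The numbers below are invented purely for illustration and are not taken from the prepared (but not yet run) test.

```python
def link_loss(distance_m, full_quality_m=50.0, max_range_m=150.0):
    """Toy model: a perfect link up to `full_quality_m`, then packet loss grows
    linearly until the link disappears entirely at `max_range_m`."""
    if distance_m <= full_quality_m:
        return 0.0
    if distance_m >= max_range_m:
        return 1.0
    return (distance_m - full_quality_m) / (max_range_m - full_quality_m)

if __name__ == "__main__":
    for d in (10, 60, 100, 140, 200):
        print(f"{d:3d} m -> {link_loss(d):.0%} packet loss")
```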
Okay, and another question I thought about: you talked about how, for big networks, it's not clear how these protocols perform. What are applications you could imagine, in your dreams? If you do this next round of simulations and everything works really well, okay, let's be crazy, in what scenario could you help the Freifunk community and build a really big network that was not possible before? How could that look? Yeah, I know most Freifunk communities, for example in Germany, use batman-adv, which is very good, but they max out at 500 to 1000 nodes. At that point you can't scale it up more, because the management traffic, just for the nodes to say "yes, I'm still here", grows too much; it grows linearly, which I saw in the graph, and at some point you just saturate your Wi-Fi connection, and that's of course bad. That's something I would like to solve, for example. This is more like a smaller dream maybe, where we can connect an entire city and everybody in it, and you can also put in low-bandwidth nodes, sorry, connections. That would be awesome. But you don't really want to, for example, watch a YouTube video over LoRa, which is basically a few words every second that you're allowed to send, I think. So it would be pretty nice to have a routing protocol that not only scales to these sizes but is very efficient and prevents misusing low-bandwidth connections with YouTube videos or stuff like that. That would be awesome. A more crazy dream would of course be to connect everyone in the city with a decentralized network that doesn't have a centralized authority or something like that, so that traffic that is sent locally and is meant for my neighbour only stays local. If, for example, some three-letter agency wants to spy on us, they should have to get in a van, drive to your home and get out their spying equipment; I think it's very bad that all traffic goes through some big box, in Germany maybe, or a few of them, where you can just do mass surveillance. This is something I would like to prevent, to give power back to the people, so they communicate with their neighbours without being dependent on other things. Of course you can also think of disaster areas where you can apply this, or, if you get crazy, you want to replace the internet with something better. For ten years we've had this saying that the internet doesn't scale anymore, which is kind of true, but we also got a lot more CPU and RAM, which helps a lot, and big pipes, so we can just say that the overhead is really low compared to all the traffic, the YouTube videos and Facebook status messages we send. But for Wi-Fi, for local networks, for cheap devices that don't have that much bandwidth, you need a much better protocol. Having a protocol that scales to these big sizes is really, really hard, exceptionally hard, so I'm not saying I will be done next year, probably not, but I guess it will be some fun, and maybe I can push the boundary a bit. That's something I dream about. Cool, what a nice outlook. I think with this talk we're done here, and we're over time. Usually we would have a discussion round in a BigBlueButton room where you could discuss with Morris, but in this case he's also doing two workshops today in our workshop area, so after a short break you will just find him there. The workshop area is available under workshops.rc3.oyo.social, so you can just join there; the times for the workshops are on the schedule, which is also online. And with that, thank you, Morris. Thank you.
|
A year ago I set out to test various existing mesh routing protocols on different topologies. Emulating huge networks is especially tempting, since real hardware for this is too expensive (for me). I present the story, the projects [1] and my findings from testing emulated mesh networks of up to 2000 nodes. Protocol implementations tested: * Babel * batman-adv * BMX6 * BMX7 * CJDNS * OLSRd * OLSR2 * OSPF
|
10.5446/52281 (DOI)
|
The future laws on AI in the EU are being shaped right now. And while many voices from the police to governments call for widespread video surveillance, a broad coalition of 14 European organizations calls for the opposite, a ban of facial and other biometric recognition systems in public spaces. The next talk is about these efforts and will be given by five people. Andrea, who is organizing campaigns at ADRI. Eleftherios, co-founder of Greek civil society organization Homo Digitalis. Ella, who is policy and campaigns officer at ADRI. Philip, who is an engineer, artist and activist. And Riccardo, who is a journalist and researcher for the Italian Hermes Center for Transparency. In that talk, they will present the campaign, their goals, and explain what you can do to support it. We're really happy to have you here. So, good morning, good evening, or good day, everyone. We don't know when this is being broadcasted yet. But this is the reclaim your face talk for this year's Congress. And we wanted to start with an imagination exercise. Imagine you're in a public space. What do you do in this public space? You're probably walking, you're on your way to work, or to your house to grab a beer, or you're just walking for the sake of it. You might be meeting a friend to hang around in the park, or some people for a demo. Maybe you're in a public square and you just decided to join some folks for a random concert that spontaneously just started because some folks had their guitars around. The point is that public spaces are spaces for communities. They're a platform for group dialogue and civic action. They're spaces where we can really exercise our freedoms, be it our freedom to gather in an assembly to speak out against injustices, or the freedom to document and record abuses. Public spaces are the areas where we want to be treated fairly and not discriminated against because of how we look, how we walk, what we wear, or who we kiss. Public spaces allow us to decide ourselves how we want to be seen by others and have the autonomy over what actions we take. However, the introduction of biometric mass surveillance into our public spaces is threatening our communities, our freedoms, our autonomy, and the very expectation one has to be treated fairly. When we say biometrics, we mean any type of data that relates to your body or your behavior. Biometric data is sensitive data under EU law. What better idea than to combine this sensitive data with an unlawful practice? Mass surveillance. Mass surveillance is any monitoring, tracking, and otherwise processing of data of individuals or groups in an indiscriminate or randomly targeted manner. Our qualities, behaviors, emotions, characteristics are used against us. Our dignity is under threat. People are objectified, commodified, dehumanized. The use of these technologies, like facial recognition, are manipulative. For example, coercing people into avoiding certain places or events. The problems with biometric mass surveillance are many, from being constantly monitored to being treated by our government like a potential suspect, being discriminated against because you have a certain color, religious accessory, or because you're holding the hand of the wrong partner. Biometric mass surveillance technology, such as live facial recognition, is being deployed in European public spaces every day in secret, with no evidence behind the need of such deployment, with no respect for our right or for our dignity in the public space. 
This is why the Reclaim Your Face movement is calling for a ban on biometric surveillance in European public spaces. We are a coalition of organizations across Europe, as well as organizations with international reach, such as Article 19, Privacy International Access Now, and European reach, such as EDRI. My name is Andrea and I come from EDRI, European Digital Rights, the Umbrella 44 Digital Rights Organization. Today I'm joined by my colleagues Ella, also from EDRI, Elefterios from Homo Digitalis in Greece, Philip from Sher Foundation in Serbia, and Ricardo from Hermes Center in Italy. In the next hour or so, we will give you an update on how the EU legal landscape is looking like. We will zoom in on the mobilization in Greece, Serbia, and Italy, and we will discuss how you can help out if you're as concerned as we are. I'll hand it in now to Ella, my colleague, who will give an intro into how the EU law landscape is looking like. Thanks Andrea. So, what are things looking like for biometrics in Europe? Well, the EU has actually been developing rules on biometrics since as early as 2004, and parts of the EDRI network have been advocating for just as long to make sure that this is done in a way that's lawful and that respects people's rights. So probably most notably in 2018, the now world famous General Data Protection Regulation, or GDPR, as well as its lesser known counterpart for police purposes called the Data Protection Law Enforcement Directive, both came into force. And these rules explained for really the first time in European law what biometric data actually are, and they also established the principle of a ban on their use. So specifically, this meant that the processing of sensitive biometric data was now forbidden in EU law. But there are a number of really broad exceptions and loopholes to this ban, which has opened the door to deployments, which have really clearly violated people's rights and freedoms. And it's been a really similar story in non-EU European countries like Serbia too. So in February or March of 2021, we're expecting that the European Commission will propose a new and potentially quite ground breaking law for how the EU will regulate artificial intelligence. This law will likely have ramifications on other European countries and probably the rest of the world in terms of the standards that it sets. And it's likely to include rules on the use of what the Commission calls remote facial recognition. But we call that a form of biometric mass surveillance, and there are many other forms of biometric mass surveillance out there too. Over a year ago, there was actually a leaked draft of a paper on artificial intelligence, which revealed that at one point the European Commission had actually considered a three to five year ban on some uses of biometric surveillance in public spaces. But ultimately, and very unfortunately, they made a political decision that so-called innovation and profitability were more important. So we're really hoping that this time round, when we get this new proposal for a law early next year, they'll be more aware of their obligations under fundamental rights law. So to quickly give an overview of what's happening in European response and what we're doing about this, well, because our lawmakers and our politicians are not yet taking decisive action to protect European public spaces, democracies, and essential rights and freedoms from biometric mass surveillance, our coalition is stepping up to demand that they do so. 
We've been raising the alarm about the fact that any use of biometric surveillance technologies to scan everyone in public spaces inherently constitutes a form of mass surveillance. And as a network, we've been following the deployment of these systems in almost every European country. And after investigating the high levels of abuse, the harms posed to individuals, communities, and society, the resistance to this biometric surveillance from people across Europe, and the emerging global examples of people's rights and freedoms being severely violated as a result of the use of these technologies, we have decided that enough is enough. Over 20 organizations in the Reclaim Your Face coalition have been exposing why biometric mass surveillance is so harmful, and they've been providing strong evidence for why we need to ban it. We've focused on the lack of transparency and justifications for existing systems, with, for example, members of the CCC in Germany filing freedom of information and data subject access requests for information from authorities and companies. We've focused on the need for clear legal limits, with La Quadrature du Net in France successfully litigating against unlawful uses of facial recognition by authorities. And across Europe, we've focused on the shocking absence of respect for human dignity, human autonomy, and human rights. So I'm going to pass over now to three of the brilliant organizations in the EDRi network that have been resisting biometric mass surveillance in their cities and countries, so that they can share more about what's been happening specifically in Greece, Serbia, and Italy. So without further ado, I would like to introduce Eleftherios from Homo Digitalis in Greece. I think Riccardo is the first one to go. But I'm sure reshuffling is not an issue. Riccardo will basically detail the black hole Italy is when it comes to details on the facial recognition used by police, and how local municipalities are trying to catch up with the latest and most innovative technologies while completely disregarding fundamental human rights. Riccardo, you've got the floor. Thank you. In the past three, four years, Italy has seen a slow but constant introduction of these biometric surveillance technologies, ranging from facial recognition to other kinds of metadata analysis of video cameras. In this map, you can see three main points of entrance. In the city of Como, a facial recognition system was introduced, but later, thanks to the intervention of the data protection authority, the system was stopped because it was deemed illegal. The city of Turin and the city of Udine are both trying to introduce biometric surveillance systems. In the case of Udine, they're specifically talking about facial recognition systems, while in the case of Turin, they are talking about a kind of metadata analysis of the video that would allow the police forces to monitor the movements of citizens across the city, to distinguish whether someone is a man or a woman, and to detect what kind of clothes or objects they are carrying. These are examples of what's going on at the local city level. When we look instead at the Italian police, so at the national level, the scientific police bought, in 2017, a facial recognition system that can be used during investigations. Just to give some examples of what's going on, what we've done and what the data protection authority has said: in the case of the city of Como, this is a summary of an article you can find on Privacy International.
We wrote it in English so we can spread it around. Basically, the city of Como was approached by Huawei in order to acquire a new, innovative system for ensuring that the city was safer. So basically surveillance as a safety measure, but also presented as solidarity between the citizens. From our investigation, from freedom of information requests, we obtained documents that clearly show how the data protection assessment carried out by the city of Como was basically a seal of approval, meaningless, because it was done after the system was acquired. It was basically claiming that facial recognition equals video surveillance, which it is not. For this reason, the data protection authority stopped the city of Como from using this kind of video facial recognition system. Also, from the documents, we've seen that Huawei, the company involved in this kind of activity, was basically pushing for the introduction of these innovative technologies. In the end, the city of Como wasted public money on a system that the data protection authority says lacks a legal basis to be used. From the documents that we obtained thanks to the freedom of information request, we can see the other kinds of capabilities of the technology that is being sold by Huawei. In this case, it's a sort of metadata analysis. So we can see that their system, in addition to facial recognition, can also track abandoned objects, loitering, crowd density, face detection, head counting, abnormal speed detection. These are all kinds of metadata analysis that criminalize behavior. So basically, if there is a suspicious behavior, which is not clearly defined, this system can send an alert. This kind of metadata analysis falls within biometric surveillance, the biometric processing of our data. And something similar is also being installed in the city of Turin, as I mentioned before. If we move forward to the national level, talking about the system acquired by the Italian police: this is a screenshot of the system being used, shown on national television. You can even see the address of the website. This system is called SARI, and it consists of two different aspects. There's SARI Enterprise, which is basically an upgrade of the manual search that they used to do in the database of mugshots. Previously, they did this by writing down the details of the suspect. In this case, the police can use a facial recognition system to automate the process, make it faster, a kind of optimization, and can search images during investigations. Given an image from a CCTV camera, during an investigation, they can match this image to see if there's someone they know in the database and whether there's a match. As Andrea was mentioning, Italy is a black hole regarding this information. The fact is that when we ask questions regarding this database, we don't know how many people are in there. The overall number is two million images of Italian citizens and seven million images of foreigners. But when it comes to these foreign citizens, it's not clear if they are citizens of other European countries or migrants and asylum seekers. We know for sure that they are in there because the AFIS database, which is the name of the database used for this facial recognition system, is a database that also includes people's fingerprints. So if you are required to give your fingerprints to the Italian police, you are included in this database. Also, evaluations of the algorithm used by the police are lacking.
Every time you ask for information, they basically refrain from giving you any kind of detail. In 2018, the Italian data protection authority opened an investigation into the second aspect of the facial recognition system, which is the real-time one. After two years, the investigation is still ongoing, and we don't have any details regarding the legality and the possibility for the police to use the real-time system that was acquired in order to monitor public demonstrations, public events, which falls within a kind of biometric mass surveillance. But we're still waiting. I now leave the floor to Filip to introduce what's happening in Serbia. Thanks, Riccardo. I hope everyone can hear me. So, as Ella said, Serbia is a non-EU country, and we are in a kind of hybrid regime at the moment. Media is controlled, freedom of speech is suppressed. Officially we are leaning towards the West and the European Union, but we also have strong influence from the East and China, based on, I would say, mutual interests. We are part of China's Belt and Road project, among 70 other countries, mostly third-world countries, and we are probably victims of that trade and soft-power influence. So Huawei managed to sell us a huge infrastructure with around 8,000 facial recognition cameras, deployed either on poles in the streets or, as of a few days ago, in police cars, and there will also be bodycams on our police officers. Since we are not into geopolitics, we don't care whether these are Huawei or Siemens or some U.S. company's cameras; we're just scared that this is completely unlawful. Two years ago, our minister of police said that there will be no significant street entrances or passages between buildings that will not be covered by cameras: we will know from which entrance and which building the perpetrator came, and from which car. This was kind of shocking to us. We started doing research on this; we were sending a lot of freedom of information access requests and we were being denied, mostly because the documents are confidential. So we did some awesome open-source intelligence research. We found a nice case study about Belgrade on Huawei's website, which was kind of weird, because we didn't get any information from our government, but we got it from a Chinese website. And as soon as we published it, it was removed from their website, which was kind of interesting. So we analyzed the laws and we were pretty sure that this is unlawful. We also found some other ways to obtain information, like this one, while the cameras were being deployed on the streets. And finally, we realized that we have to reach out to the community, because we know that the whole system is really bad for society, and it's also unlawful. So basically, in Serbia we have this data protection law, which is more or less a translation of the GDPR. The actual purpose of this system was not defined, and its necessity was not demonstrated. Belgrade is not an unsafe city: we don't have terrorist attacks, we don't have much of this low-level crime, so why do we need this kind of system? Also, the data protection impact assessment has not been approved yet, but we still have this whole system being installed in Belgrade. So what did we do next? We started reaching out to the community. This is a small exhibition at an art festival: we put one bench under surveillance, with a QR code to our cute little website, which was just a means to leave some contact information for everyone who was interested in joining this fight.
So we basically were promoting it on music festivals, we were on Hockey events, we printed a lot of these stickers to promote the website. At one point it went viral, so actually a lot of people reached back offering help in many ways. So this was a good sign for us. We gathered them in a nice event. We came to do collaborative work in several domains. So basically what we wanted to do, we had a group called.txt where journalists were speed writing everything that our citizens need to know. We had.pdf group, which were legal guys who were analyzing laws and making some strategies for this. We had the.html group, which were web developers and designers creating this website. We also had tech guides that were analyzing patents and other resources. This was one of the blueprints that came to be something like this. So we actually did a really nice map called the architecture of a face recognition system. This is yet about to go out, but we used the understanding of this system and some of its details to help our co-citizens understand the issue. So this is part from our website, which actually explains how this whole system works and why is it so bad for the whole society. So finally we had the guys from the local hot club in Belgrade that helped us deploy the surveillance under surveillance. It's a German free and open source application for mapping cameras. So we deployed this and then we started the hunt for the cameras, as we call it with our community in Belgrade in Serbia. We invited people to recognize the cameras that recognize themselves. So these are the three most common cameras using this system. We set up the Twitter profile, which became instantly popular. People were sending us photos from the streets with GPS coordinates and we actually were able to fill in gradually the whole map of Belgrade, which counts more than 1,000 cameras at the moment. So we launched this crowdsourcing campaign, which was really nice. And this is kind of successful because we have now a strong community supporting us. We did a nice campaign a few weeks ago asking the petition campaign asking the government to ban biometric mass surveillance. And of course, finally, we managed to start a crowdfunding campaign because as you can see on this internal meme, we did one of the main components in our starter pack for anti-bimetric mass surveillance activities is resilience. We mostly do this pro bono. So we started this crowdfunding campaign and it's going really well. So we're hoping to ban biometrics in Belgrade and remove all the cameras. So thank you very much. I will leave it now to Therese. Thank you, Philip. Elif Stereos, if you would like to turn on your camera and Hello. Yes. Thank you. So I had a small issue with my camera before also. I still hope that now you can see me well. Thank you, Philip. Thank you, Andrea. Greetings from Athens, Greece. In the next few minutes, I will present to you my slides about the actions we have taken in Greece. I will speak about the central database of biometric info, about the use of drones at cameras by Hellenic police during demonstrations, and about an important contract of smart gadgets that will enable the Hellenic police to use facial recognition technology during stops and controls in our streets. So without further ado, to begin with, the Hellenic police and the private vendor called Intracom Telecom signed last year a 4 million euro contract. 
The company will develop and deliver to the Hellenic police smart devices with integrated software enable facing facial recognition and automated fingerprint identification technologies among other functionalities. The devices will be very small in the size of the smartphone and police officers will be able to carry them with to carry these devices with them and use them during massive police stops in order to take a close up photograph of an individual as well as to collect their fingerprints. Then the fingerprints and photographs collected will be immediately compared with data already stored in EU databases for identification purposes. So we decided to take some related actions and to file some action information request to the Hellenic police. The replies we got were very unsatisfying, so we decided to proceed with a complaint before the Hellenic data protection authority in March 2020. It is important to remember that no related legal basis exists for the use of such technologies in Greece, as well as no data protection impact assessments were conducted by the Hellenic police prior to the signature of this smart policing contract of 4 million euros. We were very satisfied to see that the Hellenic data protection authority follow up and in August 2020 they started an official investigation against this contract. Now moving on to the next action I will briefly speak to you about a central biometric database of the Hellenic police. Our police collects all fingerprints of Greek passport holders in this central database. It is again important to underline that based on the EU laws on passports as well as the set case law of the European Court of Justice of the European Union, fingerprints shall be stored in the passport themselves in the very documents we carry in our bags and in our pockets. So the EU law neither prohibits nor allows for national central databases of biometric info to exist. If member states want to proceed with the creation of such databases they have to create their own laws. But also we need to remember that based on EU European data protection laws, processing of biometric information such as fingerprints is allowed where it is strictly necessary subject to appropriate safeguards and authorized by a national law. So we had to take action again and in June 2020 we decided to file two strategic complaints before the Hellenic data protection authorities using data subjects. Two months later the data protection authority replied to our complaint and they started again an official investigation. Now in addition to these two actions I would like to briefly speak to you about the use of drones and other type of surveillance mechanisms by the Hellenic police. Back in April 2020 we filed one front of information access request before the Hellenic Ministry of Citizens Protection because the Hellenic police used drones in order to monitor people's movement during the COVID-19 lockdown measures last Easter. Also a few days ago we filed other two freedom of information access request before the chief of the Hellenic police regarding the use of drones and cameras on sticks in public demonstrations. It is important to mention that back in April 2020 the Hellenic police didn't have a legal basis to use drones or other types of cameras and surveillance in order to monitor individual movements. 
The adoption of a related presidential degree of the legal base happened a few months ago in September 2020 so with our first front of information access request to the Hellenic Ministry of Citizens Protection we demanded to know what was the legal basis for using these drones in our city centers back in April. While with our second round of front of information access request we asked the Hellenic police to give us access to the related data protection impact assessment that it is obliged to carry as well as to the administrative decisions that the Hellenic police is obliged again to publish based on the applicable laws. And I will now give the floor back to Ella and Andrea to continue their presentation. Thank you, Alastairios. Ella, I think now you will give us some tips on the next steps for our campaign. Absolutely, thank you Andrea and thanks to the real Alastairios this time. Sorry for my mix up earlier, I hope I didn't confuse anyone. So yeah, next steps. How are we continuing to resist biometric mass surveillance in Europe? Well, we're continuing to reveal abusive uses and to contest the implementation of biometric mass surveillance across our public spaces in Europe because we're just not seeing enough action from regulators or data protection authorities. Instead we're seeing law enforcement, governments and private companies really taking advantage of the democratic and legal vacuum that we're in right now. So in 2021 we plan to do even more to engage people across Europe to help us challenge those in power and to seek answers about what's really going on. This is going to include everything from formal challenges and requests for information from authorities like with freedom of information requests, also through to public workshops and even partnerships with artists. Our big ticket item for 2021 is going to be a European Citizens Initiative calling to ban biometric mass surveillance practices in the EU. This initiative, which is also known as an ECI, is a form of legally recognized petition which will call on the European Commission to take concrete legislative action. We have very clearly told them that right now the lack of specific EU laws to limit uses of biometrics and the problems of enforcing the existing general principles have meant that they are in violation of their obligations under the EU Charter of Fundamental Rights and that causes so many harms in the ways that my colleagues have just explained in just three of the countries where this is happening systematically across Europe. And so soon we are going to need over one million European nationals to sign our ECI to say that they agree with our call to ban biometric mass surveillance practices. So we really hope that we can count on you for this. The ECI is also going to give us a platform to extend the coalition that we have been building, bridging across issues of labor rights, media freedoms, social and racial justice, women's rights, LGBTQI rights, environment and more. Because after all, biometric mass surveillance is an issue that can cause very serious harm to certain groups. But of course it can also have a chilling effect on absolutely everyone because it really goes against that fundamental right that we all have to respect for our private life. Andrea, I'd like to invite you to let everyone know how they can join us and be a part of the Reclaim Your Face movement. Thank you, Anna. 
So biometric data includes data about our body and behavior and that means everything from fingerprint, palm print, palm veins, face recognition of course, DNA, hand geometry, iris recognition, retina recognition, typing rhythm, walking manner, voice and much more. Companies and governments in Europe are innovating and trying to find new ways to capture your identity. Yet under EU data protection law, this data is especially sensitive. They're linked to our identities and can be used to infer protected and intimate information about who we are, our health and more. If you think this is problematic and if you're worried about all the information my colleagues have mentioned so far, well get involved. Over 11,000 people have already wrote to their governments. You can go on the ReclaimYourFace.eu website and sign a petition to address your government. From February though, we're mobilizing to address the EU. So two things. What can you do now and what can you do in February 2021? As you've just heard, we're focusing on gathering evidence and gathering support. So right now, for starters, you can write an email to your mayor or to a city counselor in your own city. Ask them to promise that they will not deploy biometric mass surveillance in your city's public spaces. Do they reply? They're positive about it? Get in touch with us and let us know. We'll celebrate together. In February 2021, we will launch the European Citizen Initiative that my colleague Elam mentioned. In order for this to be really powerful, we need to reach one million signatures. Then the European Commission will be obliged to respond to us. This means we'll need a lot of help. Do you want to get involved in gathering these signatures? Again, get in touch. Finally, what if you're an organization? What if you know an organization that might be interested in joining our coalition? We're looking in particular, like Elam mentioned, for organizations that cover a broad range of areas for media freedom, freedom of assembly, disability rights, disability rights, sex workers rights, labor unions, and more. Let us know. Put us in touch. And let us build together a civil society movement that will make your grandchildren pretty proud of you. Thank you. I would like to invite my colleagues to turn on their cameras. You can get in touch with us through our Twitter handles, of course, or to writing directly at info at reclaimyourface.eu. I'm seeing them pointing in all directions. We're looking forward to your questions. And see you around. Thanks a lot for the overview over everything that's happening in Europe right now. I was really interested what made you start the campaign or what was the point where you thought, now this has to change. We have to do this campaign and bring organizations together and stop this. Jen, thanks Rick. Hi everyone. And on the point about experimenting with the Q&A, we are very happy and comfortable experimenting with Q&A. We are not happy with experimenting with people's faces and bodies and public spaces. So that's kind of what's driven us to launch this campaign. From a policy and legal point of view, we've been following developments across Europe for many, many years across the Edgerton network. And we were noticing a trend in the number of really harmful uses just going up and up. We were seeing pretty much every police force in Europe starting to trial these technologies in a really sketchy way. 
We were seeing private companies rolling them out in supermarket, concert stadiums, football stadiums, without any public debate on what this means, without properly considering the risk, without knowing what it means to even law. And then when there was a rumours that the European Commission was considering a some sort of ban on five-year, three- to five-year moratorium on some of these technologies in public spaces, that gave us a lot of hope as civil society and as activists. And it never materialized. And we realized that civil society aren't really being listened to and the public aren't really being listened to when it comes to these biometric technologies. But as I've noticed when I've spoken on panels alongside CEOs from surveillance tech companies, they are the ones that are being listened to by the EU. They're the ones that are getting millions of euros to essentially experiment with our status and bodies, with our public spaces. So we decided enough is enough and we need to do something about this. And we need to make sure that our European values, our democratic principles, are actually adhered to. It's really, really important to us that this is something that is considered and not something that's just done. And a few years down the line, we hear, well, it's everywhere. It's too late. It's not too late. This is really the time for us to take decisive action. But yeah, Andrea, am I missing? Yeah, no, I think that was a great overview. And if I may chip in some thoughts, we started working on this already, I think, more than a year ago, right? So when the corona crisis and all the measures to stop the spread of it were not even in our worst dreams. But I feel like in the past months, the general public has grown more and more aware of where can surveillance measures lead to? And how do they look like and see them above their heads when they go out in their cities? And I feel like this awareness is also a very, very kind of big push for this campaign to move forward. And like Ella mentioned, when you see that people are outraged about this and when you know that the EU is making new laws on the topic, there is no way we can stay quiet. So I feel like, yep, I'm sure and I hope not, but I have a persuasion that more and more reasons we carry this fight for world war pere. So we better start mobilizing. Ella, you mentioned that in many cases there's no real debate and people just start implementing systems and are listening to the company bosses of big companies instead of the general public. What do you think is the reason for that? So why are we missing out on these really important debates? What do we need to do to have these debates before systems are introduced? There are so many reasons really why it's happened this way. A lot of the pushback, shall we say, that we hear from the European Commission, for example, are things that just aren't true. So there's a lot of myths out there. People say, oh, well, you can unlock your phone with your fingerprint. So it's too late. And actually, that's really not the same thing as suddenly every time you leave your house to go to a protest, to walk in a park, to do nothing as if you're right. Suddenly, you're being tracked and not only tracked, but that's etched into your identity in a really immutable way that's going to follow you around and connect you to lots of people. It's absolutely not the same thing as unlocking a phone. So we hear these arguments that aren't really true. And we face the security argument a lot. 
And we have governments in particular using that to justify doing unlawful activities. And actually, if you look at the EU law that we have, there are, of course, provisions that allow governments and police forces to protect their citizens. That's a legitimate policy goal to keep people safe. It's just that there are control on the way that they're allowed to do this so that we can have those checks and balances that also protect us from abuses of power, from arbitrary surveillance. So actually, these excuses about, oh, well, we need to do this and it's justified are not true. They're the very clear legal threshold. And what we're seeing is pretty much every single European country is not meeting these legal thresholds that are designed actually to keep us safe. And when we think actually about what it means to feel safe and to feel secure, to know that you can express yourself, whether that's your gender identity, your religious beliefs, whatever it might be, you should be free to be yourself and secure knowing that you can vote, you can attend a community event, you can participate in the things that make it worth living, the public activities that make us feel a part of something bigger than ourselves. And I've said this before, you only have to look at every dystopian science fiction film and book ever to know that the thought of society, whether someone breathing down our neck and capturing every bit of us to put in a database and turning us into walking barcodes, it's not a society where everyone feels safe and secure. It's actually one where we're treated as if we're a criminal suspect all the time. And we know that once people understand it in those terms and see the fact that this doesn't contribute to this sort of free and vibrant and open society that I think the vast majority of us want, I think then it's a lot easier to understand the nuance of this debate and to not necessarily pay attention to a lot of the either political statements that we hear or often the marketing. You hear tech companies spout all sorts of glossy things and often they can't do what they say. We've found many that are not lawful if they do what they claim they can do. But that's a lot of what we hear in civil society. Our budgets are not as large as the tech giants, which is why it's really important that we find creative ways to raise our voices. Yeah, just to add to that, I think biometric data is really a goldmine for government surveillance but also for surveillance capitalism. So when we're talking about this, we're talking about private companies of course on the other side, but we're also talking about government. So there is here a common interest that I feel it's not made clear for obvious reasons to the public. And I feel like there's also a misunderstanding when we're discussing what is the real public interest, the stake, because of course governments and companies can shape the discussion around public interest and what does that mean when it comes to biometric data and biometric mass surveillance, face recognition. But is that really the public's interest? Is it really in public's interest to feel like a suspect whenever you go out of your house in a public space? Is it in your public interest to fear that whatever database that has your face stored on will leak tomorrow and you will never be able to change your face again? Or is this in our public interest? I doubt so. 
So I think perhaps we can also do better across civil society to have a common narrative about what is it that we want this public interest to mean. Explanations. To me it seems like you're implying that many of these misunderstandings or that people see these technologies in different lights actually comes from not being informed enough and not being informed enough how these technologies work and what implications they have. And it's kind of like what I got from your answer is that the difference of a utopia and a dystopia is like the nuances what you think might happen as soon as these systems are installed. And what do you think? What do we need? We as a movement, as people, as civil societies, what do we need to do to get that knowledge to these people who are making these decisions that they kind of like can come to our side and see that it's actually a step towards a technological dystopia and not a safe and nice future for everybody. Yeah. Well, if I may, I think I'm thinking translation and chewability. We often talk about, you know, human rights, but these are such theoretical, non-tangible concepts that very few of us actually envision something when they spell them out. And I feel like we need to make these concepts much more clear and much more close to the everyday reality of people. I feel like this is something we can definitely work more on. And also personally, I think the role of art is so important at inferring the impact of biometric mass surveillance on our freedoms without mentioning the human right that is at stake, but rather the experience that you will have if you don't take action. And finally, I think, you know, we spoke a lot about dystopias and utopias, but I feel like we're, and I feel like we're doing that already. We really need to envision what do we want this future to look like in order for us to have a common vision and make it clear that our values are common. We're all working towards the same goal here. Ella, please add. Please add. Thanks, Andrea. Thanks, Andrea. That sounds perfect. And yet just to wrap off what Andrea said, absolutely, we need to work towards the positive vision of the kind of world that we're trying to protect and create where everyone is free and not put in a box and not judged and labelled and controlled based on who they are, but actually a world where we encourage that beautiful full spectrum of human difference, human diversity, human ability. So trying to build that world and, as Andrea said, translate it not just in terms of the concept, but in terms of languages, making sure that this is a multilingual, accessible, broad movement that whatever language you speak, wherever you live, wherever you come from, you have a place in the movement. So yeah, I hope that was short and sweet enough. That was wonderful. Thanks for being here today, although remotely, thanks for recording their talk, doing their talk. Yeah, and I think there's not much more to add. Please, now everybody who's watching this and is interested in a further discussion, go over to the discussion room. Please, you too, I don't know if you already are, or please move over to the discussion room, and then we can start the discussion there. And thanks a lot. And yeah. Thanks to you. Thank you. Looking forward to seeing you in real life next year, hopefully. At some point, everything will happen in real life again. Bye. Bye.
|
The EU is making new laws on AI. People are mobilising against biometric surveillance in public spaces. Hear about what's going on in different European countries, what tools activists are using to attack face recognition and similar mass surveillance technologies and, most importantly, what you can do to join the civil society #ReclaimYourFace movement. The European Union is setting new rules on Artificial Intelligence and people want to have a say. This is a once-in-a-lifetime chance to shape how the EU is dealing with intrusive biometric technologies in public spaces. Yes, people are mobilising! #ReclaimYourFace is a civil society campaign that brings together organisations from across the continent with one goal: BAN BIOMETRIC SURVEILLANCE IN PUBLIC SPACES. Led by 12 digital rights member organisations of the EDRi network, the #ReclaimYourFace movement is fast expanding outside the digital rights bubble. The first part of this session will offer a quick summary of the new laws the EU is preparing: we'll explain why biometric mass surveillance is dangerous, undignifying and must be banned. Moreover, you'll get an overview of resistance initiatives to biometric mass surveillance from different European countries. In the second part, you'll hear from people on the ground. Resistance organisers from Italy (Hermes Center), Greece (Homo Digitalis) and Serbia (SHARE Foundation) will talk about community mappings of cameras, freedom of information requests, investigative journalism and requests to the Data Protection Authorities. Finally, you'll hear about what you can do to #BanThisBS (biometric surveillance). In January 2021, we're preparing a week of action, and if you're ready and willing, you have a role in it. Moreover, in 2021 we aim to gather over 1 million signatures as part of a special petition, a European Citizens' Initiative (ECI). The ECI forces the European Commission to formally respond to us and address our call for a BAN. You'll hear a bit also about how you can help reach those 1 million signatures.
|
10.5446/52283 (DOI)
|
We're very happy to have Robert Tibbo with us on the channel for his fifth year in a row speaking at the Chaos Communication Congress. You probably know him as a lawyer involved in the Snowden case. His lecture today is called The Continued Erosion of International Law and Human Rights Under Global Pandemic. Good evening. I'd like to thank the Chaos Communication Congress again for having me speak at the convention this year, even though due to the global pandemic it's by remote. But as you may be aware, I'm the lawyer for the Snowden refugees. And also I'm introducing another one of my clients, Ibrahim Hussein, who is a refugee and journalist from Somalia. And just to inform anyone who's unaware at this stage, the Snowden refugees are a group of refugees from South and Southeast Asia who provided shelter, food and compassionate humanity to Edward Snowden when he was in Hong Kong in 2013, when Mr. Snowden made the disclosures on the NSA's electronic mass surveillance program. And also to provide an update on my role as a lawyer for the Snowden refugees, I continue to act for them as a barrister in their Hong Kong cases. And within Canada, I was granted special authorization to act for the Snowden refugees who still have refugee claims with the Canadian government. And just briefly, I have a slide up with the Snowden refugees sitting together in Hong Kong. On the left side is Vanessa. She's from the Philippines, her daughter's below her, that's Kiana, born in Hong Kong stateless, and is still stateless today. Beside Vanessa is Ajith, the former soldier from Sri Lanka. And beside him is the family of four, Supun and Nadeeka from Sri Lanka, and their two children, Sethumdi and Dinath, also born in Hong Kong stateless. And of the seven Snowden refugees, two of them actually succeeded in their cases in 2019. And this is a photo, there's a photo I have up of myself meeting with Vanessa and Kiana at Pearson International Airport in Toronto on March 24th, 2019. A year ago I talked about the decline in human rights around the globe. Nothing has changed since a year ago. And with the COVID-19 global pandemic, things have just gotten a lot worse. Governments have been empowered and emboldened to continue to attack those who dissent, who are critical of government around the world. And what has made matters worse is the people's inability to go out and exercise their rights of freedom of expression, association and assembly and protest because of COVID, and for public safety reasons. And governments have used that to their advantage to abuse civilians and society. The media as well has been consumed, in my view, by the global pandemic, as well as other significant global news stories such as the US elections and Donald Trump. What this has done is it's taken the media away, journalists away, from other important human rights stories around the world. So for those whose cases or circumstances are not high profile, a lot of these stories are not being reported anymore, they're not being investigated anymore, which is adding to governments being aware that they can continue to commit human rights violations around the world with impunity.
Now I have a client, originally from Somalia, he's a journalist, and I'm introducing him to the public in this presentation because he fled persecution as a journalist in Somalia and he found himself in Hong Kong for a period of time in an untenable situation and had later found his way to the European Union to seek refugee status there. And I'd like to go into that. And basically, Mr. Ibrahim had covered news stories in Mogadishu and across Somalia, and he was targeted by both the government and Al-Shabaab. It was a situation that Ibrahim faced, and there are two quotes here which I'll read out, which really encapsulate the circumstances on the ground. Ibrahim has stated, in the morning we hugged our family like we might never see them again because every day in Mogadishu journalists may be killed in the crossfire or murdered by Al-Shabaab. And he also stated, for a big story we would bring two or three cameramen to record the scene together in case one was wounded or shot. As a lawyer for Ibrahim, I've actually seen footage they've recorded of people on the front lines they were with being shot dead. And this is a horrific situation for any journalist to be in and to report in. The situation for Ibrahim came to a crossroads in 2009 when he was kidnapped by Al-Shabaab. He'd been targeted by the government as well, the police and also officers at the Ministry of Information and Culture. But it was Al-Shabaab who grabbed him, tortured him, threatened to kill him with a knife and a gun to his head, and demanded a ransom of 18,000 US dollars, which fortunately his family was able to secure, and after six days as a hostage he was released. Now Ibrahim had worked for Universal TV in Somalia during two periods, and he had fled Somalia for a period of time to try to find refuge in another part of Africa, which didn't work, and then tried again, where he found himself in South Sudan, where there was no durable solution for him. So in September 2013 he fled to Hong Kong and he sought asylum there. His thinking was that Hong Kong had a reputation of civility and rule of law, but upon his arrival he realized that he'd been seriously mistaken. Immediately he was arrested and detained at the Castle Peak Bay Immigration Centre. For short we call that CIC, and it's basically Hong Kong's version of a gulag, and there's an award-winning human rights story by Olivia Chang from Hong Kong called The Invisible Wall. I've provided the link on the slides so you'd be able to read an English version of that story. Now after being locked up for three months he was released on recognizance, which is provided with a paper that typically foreign criminals are provided with, and on the outside he faced destitution and racial discrimination. And he was constantly racially profiled by the police, stopped all the time, threatened, and Hong Kong society itself just basically ignored him. It's as if he didn't exist. He had no food or money for the five months after he had been released from detention, and International Social Services, a Swiss organization with a branch in Hong Kong, provides humanitarian assistance as a contractor for the Hong Kong Social Welfare Department, but still for five months he was destitute, no food and no money.
And in 2013, my slide there's an error it says 2014 but in late 2013, Ibrahim showed up at my office with another one of his colleagues who had worked for him in Somalia and they were wearing bedroom slippers and used clothing and they were starving and I immediately took up their cases with the UNHCR in Hong Kong and subsequent to that meeting in my office my wife took both of them down to out of her own pocket to purchase shoes for them running shoes and also to buy them some food. Now Jiki is what I would describe as a victim of constructive refalement and I'm going to go into the law in that in a few minutes but basically the Hong Kong government has a legal and policy framework that's designed to break down the mental health and physical health of asylum seekers basically through social isolation and deprivation of sufficient humanitarian assistance so that they don't starve. And Jiki described the situation a few days ago looking back after the asylum seeking community protested and occupied social welfare and international social services offices in 2014 protesting not having enough food or rent money to survive it felt like my mind was breaking I felt I would die in Hong Kong. Ibrahim's mental and physical health declined in Hong Kong to the point where it was a choice between not surviving in Hong Kong or trying to get to another country. The South China Morning Post reported his situation as a journalist and a reporter stated an experience of the worst in humanity was not what Ibrahim Muhammad Hussein expected when he touched down in Hong Kong eight months ago fresh from persecution in Africa. Now I mentioned constructive refalement and this is a framework and a strategy that's implemented by the Hong Kong government and professors at Chinese University have described it as follows given that a necessary consequence of the government's policies is social exclusion and destitution there are major concerns particularly for the mental health of refugees. This is especially the case because refugees stranded in the territory faced in definite periods while claims are processed all the while plagued by uncertainty. Such concerns not only raise the issues of compatibility with the ICER and ICPR but also place the individual concerned at risk of returning to the source of danger that's offending the doctrine of constructive refalement. So Hong Kong's prohibited from returning anybody who's seeking asylum in Hong Kong until after the cases are screened and rejected but the Hong Kong government in parallel with that policy that they have to follow the law they have to follow the screen refugees or asylum seekers is they make their lives so miserable so difficult that these asylum seekers mental health deteriorates to the point where they give up and they would rather return home to die there. Ibrahim left Hong Kong but under international law and Hong Kong's policies they sent him back to Somalia he was quickly targeted again and over a number of years he sought internal flight relocation alternatives within Somalia it didn't work and then he finally left Somalia and found his way to the European Union ending up in Greece. 
The first camp he was put in is the Maria Mariah refugee camp which Ibrahim describes as a place of violence there is violence on a daily basis there was a lack of resources, food, the conditions were inhuman and degrading and Ibrahim himself and the people in his makeshift structure where they stayed were attacked on seven occasions during that time and the other thing that Ibrahim has stated is that the police just stood by and watched they allowed that to happen they acquiesced to the violence against other refugees and the plus side of Ibrahim making it to the European Union was there is a screening process that proceeded quickly compared to Hong Kong and the second plus is that the screening system in the European Union countries actually grants refugees that status if they can make their case that they have a well-founded fear of persecution in Hong Kong the acceptance rate was zero and Hong Kong not being a signatory to the UN convention relating to the status of refugees even if you succeeded you could not obtain refugee status in Hong Kong and you could not resettle there and this is a picture of Ibrahim at the Mariah refugee camp there was a arson and a large fire at the camp which left him five days on the streets outside the camp and then he would be sent to the Lesbos refugee camp thereafter now with his acceptance as a refugee in June 2020 he was still stuck in the camp and here's another photo of Ibrahim when he went to the Lesbos camp there are no toilets no showers lack of resources and and in late 2020 the Greek government moved allowed Ibrahim to leave the camps and he's now living in a Greek community supported only for a limited period of time by the government in January 2021 there's a mistake on my slide 2020 should be 2021 he'll have to fend for himself and I I've brought up introduced Ibrahim here to everybody today because he's been in both Hong Kong and he's found his way to the European Union there are serious problems on both sides of the globe but he is grateful that the proper screening is apparently happening in the European Union and he's now safe in my view he's an extraordinary person extraordinary journalist and there are very few journalists like this on the planet with his commitment and willingness to have taken the risks risking his life to report do reporting in Somalia Ibrahim would like to continue working as a journalist again he's just landing on his feet in in Greece right now and waiting for his formal documents all those formal documents to be issued but he's looking to continue to work as a journalist now I'm going to go I'm going back to Hong Kong and and I'm going to be returning to the snow and refugees but I'd like to provide a quick update on what's happening in Hong Kong on the slide I have up here now basically shows from 2014 to 2019 the rapid decline in human rights in Hong Kong and in particular rendition enforced disappearances ill treatment torture and even attempted extra judicial killings and from 2004 to 2019 you know Hong Kong has become authoritarian and to review what happened last year Carrie Lam the chief executive of Hong Kong had wanted to bring in an act into a law next tradition bill which would allow rendition of individuals from Hong Kong into mainland China there would have been a formal legal mechanism to do that the Hong Kong people and lawyers in Hong Kong are very well aware that hot that mainland China's judiciary is not independent it's like any other government department of the executive it's policy and 
politically motivated in terms of how judges in mainland China try criminal cases due process rights are limited in in China in the Chinese criminal justice system and at times do not exist and I'm going to mention the case of Michael Spavor and Michael Colverig two Canadians who were detained into December 2018 innocent Canadians have done nothing wrong arbitrarily arrested detained and this year earlier this year charged with crimes in mainland China and the mainland Chinese government has effectively admitted that they've they're holding the two Canadians hostage as a bargaining chip to pressure the Canadian government to to bring an end to the extradition proceedings against the Meng Wanzhou of the Huawei Chinese telecom group. I was significant so I'm mentioning this because the the two Michael's cases in in China highlight you know the deficiencies the clear deficiencies and shortcomings of the criminal justice system in China and Hong Kong people you know were not prepared to accept that they would face justice or accused of a crime and have to go through a criminal justice system in China mainland China so protests broke out people in the millions went into the streets from June 19 2019 onward and it was in September 2019 that Carrie Lam the chief executive of Hong Kong announced that the extradition bill would be withdrawn so that was a great success. Now with COVID-19 human rights violations in Hong Kong have just continued on and they've become worse and the Hong Kong government recognizing that millions of people would not be going out in the streets to protest under COVID-19 because nobody you know everybody understood they is a public serious public health risk and people don't want to get sick so there were limited protests but Hong Kong and Beijing calculated that you wouldn't have the millions of people on the streets and they introduced a bill that Beijing actually did this through their own legislature enacting a constitutional provision under the new net under article 23 of the Hong Kong Basic Law which is Hong Kong's constitution basically bringing in new crimes against the state and I've put in the slide those crimes a secession subversion terrorism collusion with foreign forces that legislation is a constitutional provision it's ambiguous it's poorly written and it can be interpreted in a way because of its ambiguity interpreted and used in an arbitrary way and used to violate the rights of any any civilian in Hong Kong I'm going to go into the scope of this law if one is arrested for persons arrested under the new national security law there's no presumption of bail anymore the defendant actually has the burden of proof on them to seek bail definite detention if bail is not granted that person can be sitting in a remand in a jail for months or years before trial the trials are going to be held behind closed doors the judges are actually selected by the executive branch of government they select judges from the judiciary but it's not the chief justice of the Hong Kong court court final appeal that choose selects the judges to be on that list that the government chooses in this law anyone is accused of a national security fence under Hong Kong's basic law can be renditioned to mainland China to face justice there so what the Hong Kong authorities were not able to achieve in 2019 they've now Beijing has now achieved so that anybody in Hong Kong who is accused of committing a national security crime can be brought into mainland China and face justice there 
Extraterritorial criminality: anybody who writes something, says something, does something that is critical of the Hong Kong government, if the Hong Kong authorities feel that this is an act of secession or subversion, they can seek the extradition of that person, let's say in Canada or in Germany or another country. So this new national security law has a global reach, okay? A consequence of this is that countries in Europe including Germany, the United States, Canada, New Zealand, the UK, Australia have all either suspended or terminated the extradition agreements, treaties, that they have with Hong Kong. The reason being is the new national security law is a de facto backdoor for Beijing to extradite people from around the world. Very few countries have extradition treaties with mainland China because of the shortcomings in its criminal justice system, and I've put up a slide just listing a few countries that have suspended treaties. And now within the Hong Kong government itself, the three branches of government, the executive branch, legislative and judiciary, it's quite clear that Beijing now has firm control over the executive branch of government, Carrie Lam, you know, loyal to Beijing, you know, following through on directions from Beijing. Also in 2020 we saw officers from mainland China now working in Hong Kong side by side with Hong Kong civil servants and basically advising and directing. The legislature: a number of pro-democracy legislators were removed, actually by Beijing, and new legislation was imposed by Beijing that anyone who is viewed as a risk to national security, okay, without trial, can be removed from the legislature. There were supposed to be elections for the legislature earlier this year; they were cancelled because of COVID, and with the new law imposed by Beijing there was a mass resignation by the pro-democracy legislators in Hong Kong. So effectively Beijing has taken control of Hong Kong's legislature. The last or third branch of government, the judiciary: there have been a number of cases where judges have brought in their political opinions. Most significantly, a non-permanent judge of the Court of Final Appeal, Australian Justice Spigelman, resigned in September this year from the Court of Final Appeal, citing the new national security law. Freedom of expression in Hong Kong is frozen; freedom of association, assembly and mobility are severely diminished. Hong Kong was ranked 18th in the world in terms of freedom of the press and journalism, but in 2020 it fell down to 80th place. In the news right now has been the arrest of Jimmy Lai, the founder of Apple Daily, under the national security law. Bail was denied, but he secured bail last week from the High Court. The Department of Justice's Director of Public Prosecutions has filed an appeal to the Court of Final Appeal to seek that the bail be revoked for Mr.
Lai there's also other examples of journalists like Troy Eucling of RTHK what the Hong Kong authorities have been doing the last year is if they if they cannot find a basis to arrest an individual journalist or activist or politician on under the new national security law they use some draconian laws or try to find some technicality to you know arrest somebody first something that's not even related to their work simply trying to shut shut up or stop the media from speaking and writing their stories in 2017 there's been an exodus and on mass from Hong Kong it's been quiet it's been steady but that exodus accelerated in 2019 and and accelerated even more in 2020 with the new national security law and talking to clients and colleagues the the shipping and freight companies in Hong Kong are overbooked there that data indicates that there's large numbers of people leaving Hong Kong they do not see a future in Hong Kong now Canada started accepting refugees earlier this year and in September Canada started granting refugee status of Hong Kong people who have been politically persecuted because of their participation in protests or they voiced their opinion. Tsong Peiwu the Chinese ambassador to Canada and Ottawa spoke out and I'll quote what he said we strongly urge the Canadian side not to grant so-called political asylum to those violent criminals as refugees because it is interference in China's domestic affairs and certainly it will embolden those violent criminals. It was on 15th of October 2020 and what is interesting here is that China is actually a signatory to the UN convention relating to the status of refugees and the refugee convention forms part of China's constitution and part of that is to respect and recognize that other countries will screen asylum seekers and grant them refugee status if they show well-founded fair persecution whether it's religion, ethnicity, race, nationality, political opinion or other social group and what's also interesting is that in under the refugee convention under article 1f and 33.2 but 1f in particular that if anybody had committed a serious violent offense let's say in Hong Kong even if they were granted, recognized as a refugee they would not be granted refugee status because of that violence. So the Chinese ambassador to Hong Kong apparently doesn't understand the law and doesn't understand or respect that Canada will be looking at whether any asylum seekers have committed offenses that would exclude them from that protection. Over the last year we've seen legislators, former legislators, members of political opposing political parties flee the jurisdiction. What Hong Kong authorities in Beijing have been trying to do is to find bases or whether they're well-founded or not on evidence but to arrest them put them into the Hong Kong criminal justice system and in applying for bail typically a condition is they hand over their travel documents. So there are a lot of activists and politicians in Hong Kong who can't leave because they don't have travel documents. Beijing and Hong Kong clearly want to close the borders on anyone who expresses dissent against the Hong Kong or Beijing governments but there have been legislators and political activists who have fled and I've put on a slide Ray Wong and Alan Lee fled in 2017 about that time and were granted refugee status in Germany. Baguio Lung recently fled to the US. 
Simon Chang, Hongkong Lao, Sunny Chow, Ted Hoi, Nathan Law, Wayne Chan, Samuel Chu have all left Hong Kong and they're all seeking asylum, political asylum in Western Europe or North America. Now coming back to the Snowden refugees and what I'd like to do before going into the situation with one Snowden refugee in Hong Kong, I'm just going to give you a quick update on Vanessa and her daughter who are now resettled in Montreal. They've had a hard time of it during the pandemic in Montreal and I've put up a photo from September 2020 which really shows, you know, it really projects the feeling, you know, after almost a year of having to practice social distancing and other safe practices so they don't get infected or infect others. A non-profit, just to update everybody, a non-profit was set up in June 2020. The previous private sponsors in Montreal had stopped providing support to Vanessa and Kiana in April 2020 which put this single mom and her daughter in a terrible situation without any food or rent and the last of the money that was provided by the private sponsor was provided in early May. So as of June, this family had nothing to survive on. So I contacted people I know in the Montreal community and they stepped forward and they set up a non-profit organization called HelpVanessa.com. Oliver Stone, Academy Award winning director, Shalene Woodley, who starred in the Snowden film, also an Oliver Stone film, stepped forward and advocated in support and to ask for donations for Vanessa and her daughter and to date we've raised more than 50,000 Canadians which has now allows Vanessa and her daughter to remain safe and secure during the pandemic and also to continue their French language studies. And this is a photo of them in November 2020, just last month. And this was Christmas Eve. Kiana's on the left, Vanessa in the middle. And the third person is Mintum Tran who's the founder of the non-profit HelpVanessa and Kiana. I'd like to quickly mention him. He's the son of a refugee family originally from Vietnam after the war in 1975. The family resettled in Montreal and he was born in Montreal. He's a pharmacist and executive director of the association professionnel de pharmacien, salary de Québec. He founded the non-profit HelpVanessa and Kiana. And he's also founded a new non-profit called HelpAjith.com. I'm going to go into Ajith's situation. This was 2017. I've put up a photo of Ajith at the removal assessment section of the immigration department in Caldoun Bay. And this was a week before immigration rejected Ajith's asylum claims. And just briefly, Ajith was injured in the Civil War protecting his fellow soldiers. He was denied medical assistance under the Geneva Convention by the Sri Lankan army. And he was put in an untenable situation where he was looking at losing his life. So he fled. He was a military deserter. He was caught a few years later and tortured. There was an attempt to execute him, but he managed to flee the military camp and he fled to Hong Kong in 2003, leaving behind his wife and a newborn baby girl. This is a, I put up a photo of Ajith in 1993. So you can see the young man that he was. And in Hong Kong, similar to Ibrahim, the Somali journalist, from 2003 onward, Ajith has been subjected to systemic racism and discrimination by the Hong Kong government and its institutions. He's been denied sufficient humanitarian assistance. And I took on his case in 2012. 
And he's constantly been subjected to racial profiling, even not showing up to conferences, law conferences that I was holding, because the police had stopped him on the street as he was trying to get to my office. There's been discrimination by, you know, the police and immigration against him. And also there have been attacks by the Hong Kong government against myself with the view of removing me as his lawyer. This is a photo of Ajith a couple of years ago. And he's had a very difficult time in Hong Kong. For 17 years, you know, he's been subjected to discrimination as I've described. And all of this has had an enormously adverse impact on Ajith in that his mental condition has collapsed a number of times and he's just wanted to give up. And he's been what I would describe as a victim of constructive refoulement. But fortunately, we were able to convince him. And we have recently tried to get him some help. He's been suffering from post-traumatic stress disorder since before he left Sri Lanka, untreated. And only recently we've been able to get him a little bit of help. Regarding his role with Mr. Snowden: in 2013 when Mr. Snowden arrived, Ajith, despite all the terrible things that have happened to him, the persecution and discrimination he suffered in Sri Lanka and Hong Kong, stepped forward and, you know, was more than willing to help shelter Mr. Snowden in 2013. And in 2016, because of Ajith's story coming into the public domain, the Hong Kong government targeted him because of his assistance to Mr. Snowden and targeted myself. And in 2018 Ajith was left without the support of the Duty Lawyer Service, but myself and another lawyer found a solicitor willing to instruct us privately to continue his appeal. Now in parallel to all of this, I advised Ajith to apply to Canada for refugee status. A private sponsor was found in Quebec and Ajith's refugee claims were filed in January 2017. And while all of this was happening, the Sri Lankan police, aware that Ajith is in Hong Kong, sent police officers to Hong Kong in December 2016 looking for him. The Hong Kong police, instead of investigating the Sri Lankan police, made a decision to investigate myself and my clients. Now Ajith's case was rejected by immigration in May 2017 and I filed his appeal in the Torture Claims Appeal Board. And this is a major update on Ajith's situation in Hong Kong, in that his appeal was heard by an Australian adjudicator and barrister, Adam Moore, who became the adjudicator in his case in 2017, heard his full appeal in June 2018, and no decision has been handed down in three and a half years. From 2018 to November this year Adam Moore had not handed down a decision and there is no explanation for that. And then suddenly in November 2020 the Torture Claims Appeal Board announced that Mr. Moore was no longer the adjudicator, without giving any reason. And now a panel of three adjudicators would hear Ajith's case and start that process all over again. This is a process that's been delayed and in my view abused by the Security Bureau and the Torture Claims Appeal Board. There is no rational basis why the Torture Claims Appeal Board did not hand down a decision on Ajith's case years ago. And this is an example of how, in this part of the judiciary, there is a lack of transparency and accountability. And the second significant event is that one of the three adjudicators is an Australian adjudicator named Fraser Syme, and he's one of the three on the new panel of three for Ajith's appeal. Mr.
Syme was also the same adjudicator in the appeal of the other Snowden refugee family, of Supun and Nadeeka and the two children. And Mr. Syme rejected their appeals, and now the Torture Claims Appeal Board has found it proper to appoint Mr. Syme, who has already predetermined, decided, refugee grounds for Supun and Nadeeka's family that are the same grounds as in Ajith's case. So there's an adjudicator on the new TCAB panel that has already predetermined the appeal against Ajith, at least on certain refugee grounds. So there's an appearance of bias. There's clearly a conflict of interest. Making matters worse is that a judicial review leave application was filed in the High Court in January 2019 challenging Fraser Syme's rejection of Supun and Nadeeka's family's refugee claims. So the Torture Claims Appeal Board has put in Mr. Syme knowing full well that his decision on the exact refugee grounds for Supun, and also Ajith, those common grounds, may be overturned by the High Court. So it's quite clear, with years of inordinate delay, the removal or disappearance of Adam Moore as adjudicator, and the constitution of a new tribunal after so many years with Fraser Syme on there, that he's not receiving a fair process here, a fair hearing. Now in terms of Ajith's mental health situation, the 2019 pro-democracy protests and the police crackdowns: Ajith saw firsthand how the police were acting arbitrarily and attacking innocent bystanders and protesters, and this re-traumatized Ajith. These are the same scenes and the same conduct of police that he witnessed when he was in Sri Lanka. The new national security law, similar to the Prevention of Terrorism Act in Sri Lanka, is another factor which has re-traumatized Ajith, and he's in fear for his life. Making matters worse, there's an immigration amendment bill that's just been brought into the legislature, and mind you, the legislature has no opposition, it's basically pro-Beijing controlled, and under this new legislation immigration officers will now be able to carry guns and steel batons when dealing with refugees. This is simply going to re-traumatize my client and other refugees. There are now powers to detain asylum seekers effectively indefinitely when they're in Hong Kong. There are new provisions where the immigration officer will decide effectively if interpreters are needed, and it'll be the immigration officer's view whether a person's screening interviews or appeal should be conducted without an interpreter. The other issue that is shocking in my view is that after the first stage of immigration screening, if the cases are rejected, there are now powers for immigration officers to go to foreign consulates to start the process of obtaining travel documents. That should never happen until after all appeals are exhausted. So what's happening is that all these asylum seekers, contrary to UN guidelines, are having their identities exposed at the first stage to the foreign governments that they fled persecution from. Usually a hearing could not be held before 28 days in the Torture Claims Appeal Board; now the limit is seven days. So how do I view all of these changes to the immigration legislation? It's just a legislative and policy framework that is going to put more pressure on asylum seekers, and it violates, in my view, the doctrine of constructive refoulement. We've set up, Minton Tran in Montreal has set up, the non-profit HelpAjith.com, and we're asking for donations.
Ajith needs support during this time. He is waiting, as are the other Snowden refugees, for the outcomes of their asylum cases in Canada, but pending that, Ajith needs help, and we'd ask that, if you can, go to the website and donate. No matter how big or small the donations are, Ajith needs help. Thank you. Hi everyone, my name is Ibrahim Mohamed Hussein, I'm a journalist from Somalia. Both the Somali government and Al-Shabaab targeted me. I had been kidnapped and tortured; the reason I was not killed is because my family and friends paid ransom money to spare my life. I fled to Hong Kong only to be treated like a criminal and subjected to racism, degraded and treated inhumanely. My life in Hong Kong was like a slow death. I was sent back to Somalia and once again had to run for my life and could not even see my family. I then found my way to Europe, landing in Greece. I found myself in two refugee camps fighting again to survive. The camps were inhuman and degrading; many refugees were violently injured and killed inside the camp. What surprised me was the refugee screening, which proceeded rapidly. Human rights in Hong Kong do not exist for refugees, but I was lucky to have a human rights lawyer, Mr. Robert Tibbo; without his help I would not be here today. Thank you very much, guys. I know that Ajith's torture claim appeal in Hong Kong has been pending for three and a half years, and I know that the adjudicator for his appeal has disappeared and there is no decision in his appeal after three and a half years. I know that the appeal board is now starting his appeal all over again. Now, with three judges hearing the appeal, I think this is so unfair to Ajith. He has been waiting for 17 years for his case to be decided; after almost four years the appeal board forces him to start all over again. It is the Hong Kong government causing all this delay. From my own experience in the Hong Kong appeal board, the judge was unfair and I was clearly traumatized. For Ajith to again have to go through another appeal will be real trauma for him. He will be forced to tell his torture experience again and it will give him nightmares. This is WTF International, you have just seen a recording by Robert Tibbo. He is the lawyer of Edward Snowden and he is now connected. Mr. Tibbo, welcome. Thank you for having me here. One thing I failed to mention in the pre-recorded video discussion is that two of my clients have done short videos to introduce themselves to the public. One thing I would like to mention here, as I mentioned at the end of the video, is that Ajith is still in Hong Kong, one of the Snowden refugees who protected Mr. Snowden when he was in Hong Kong in 2013, and he does need help. One way you can help is by donations. All right, thanks for that. We are now taking questions for Mr. Tibbo. We have several Q&As during our live program. Some of you have asked us questions, and if you go into the streaming window, below that you have several tabs, one chat window, and if you click on that you can see the hashtag, which we monitor on Mastodon and Twitter, and you can also join the backend on IRC. So far there haven't been any questions in the channel. But Mr. Tibbo, are there any other ways that watchers, who have just seen what you have presented and the personal messages by your clients, can make their voices heard in order to foster the cases of those people you represent? Yes, as I mentioned, a primary way to support my clients, in particular Ajith at this time, is to make donations.
There is a website at helpajit.com where you can make donations various ways from credit card to bitcoin. The other issue is awareness and discussion. There is a lot of talk about the role of whistleblowers particularly in today's world. But there has been less talk about the protection of whistleblowers. The Snowden refugees did the extraordinary by stepping forward making their decisions of conscience to provide shelter and food and compassion to Mr Snowden when he was in Hong Kong in 2013. In all the Snowden refugees cases one of the grounds for refugee protection is the clients have a well-founded fear of persecution based on political opinion in that they made decisions to help Mr Snowden. So that forms a social group those who help or protect whistleblowers. I think that there needs to be more discussion about the importance of people in society. The courage it takes to step forward and help somebody particularly for high profile cases. It's easy to help somebody or a group of people when it's a popular person or a popular cause or if it's a low profile cause. But it's extremely difficult for individuals to step forward to help another. Even though the cause is the most just cause but it's unpopular. So there are legal and moral and ethical issues. I think that should be part of the discussion that everyone should be having. Thank you for that. There has been a question on the chat which I need to rephrase because the question is how to build a global consciousness against and to join forces both from a lawyerly and scholarly groups that believe more in direct action. You'll have to repeat that again. The signal came through a bit choppy. The question on the chat whether there's any efforts to build a global consciousness state of oppression more or less and coordinate between teams that take a more step, leave more in direct action. I think what's happening is you're seeing this kind of action with nonprofits, lawyers through protests and you're seeing it within communities, within cities, within whole jurisdictions. But I think what's happened with the COVID situation is that's basically compelled everybody because of the public health issues, self-isolating, social distancing, masks. We've had to take a step back to think, okay, how do we communicate now? How do we interact and exercise our fundamental rights and freedoms? I think we're in a dangerous period where we're still struggling how to connect globally to cooperate and bring this kind of awareness about. The second issue is to do that, you need to be able to get the message out through advocacy and activism. Right now, the COVID pandemic consumes the media reports. I've been told about 75 or 85% of news coverage in a given media organization, 75 to 85%. At the same time, governments are using the cover of COVID, the global pandemic, to suppress freedom of expression and to strip away fundamental rights and freedoms. So I think the question is a great question and I think it's a matter of, you know, when doing this through encrypted means, doing it where you have your privacy for global groups to consider how do we connect up together, what messages we want to get out, but then the real challenge will be getting the message out through to the public because of the current global pandemic. Are there already, to go ahead and prepare all the messaging to come out of the pandemic if and when it is together? As far, like, I'm not aware of any considered efforts globally. 
I mean, there are some nonprofits around the world who are trying to get, you know, messages out, trying to get stakeholders with those affected in different jurisdictions. But right now, I'm not aware of, you know, any organized, considered effort to try to have this sort of global connection and being able to speak globally, but also locally, you know, informing the global community what's going on. I think where it's just a difficult time. One of the best examples is Hong Kong with COVID, the COVID pandemic there, there's been four waves. And in the midst of the second wave of the pandemic in Hong Kong, Beijing imposed the new national security law, which basically has stripped away, you know, fundamental rights of freedom of expression. So I think we're just in a difficult time and it's going to be for different groups around the world to figure out how to communicate, you know, hopefully the pandemic will come to an end in the end of this year or next year. You know, see where we go from there. There's one more question from the chat and I think it's a softball pitch, more or less. Free autonomous press or free autonomous media as in decentralized probably on one of getting the message out. The signal is a bit choppy. I missed the middle of your question if you could repeat it. That would you say that a free and autonomous press, autonomous media as in decentralized probably uncensored would be a cornerstone of getting the message out? Absolutely. One thing that I've talked about in past talks is that two things have happened, are happening at this time and have been happening over the last decade. And that is journalism has, you know, mainstream journalism is being eroded. Investigative journalists are fewer and numbered today and journalism has become more centralized in major urban centers and in smaller cities, towns, rural areas, there is no more journalism there at all in a lot of regions around the world. And when those things happen, you have poor behavior of local government in terms of policies, public expenditure and also abuses of human rights. We are really in desperate need of having independent autonomous journalists and journalism at this time more than ever. But at the same time, you know, journalists who have the capacity and capability to do investigative journalism, you know, the problem with what's been happening the last five years, ten years is that the media that's centered in the major urban centers, they're not picking up stories and speaking for the more vulnerable or those who are geographically outside of main areas. And that's a very dangerous thing. So yes, I agree. There should be autonomous media and there should not be censoring on that media. So in fact, if you look at that, I mean, encrypted communication is well and everybody should use it especially to exchange information with journalists. But in the end, for the general public, independent media that is not centralized in some few conglomerates might even be more efficient to get the message out to the broader masses, right? Absolutely. And I think what needs to be done is you need more autonomous journalism and journalists in smaller cities, operating autonomously in the bigger cities to be able to pick up stories. What you're seeing right now with the mainstream media focusing on COVID stories, for example, the US elections and Donald Trump, are that they're not picking up smaller stories. They're not picking up low profile stories anymore. And governments are taking advantage of that. 
They know that they can act almost with impunity because they know that the smaller stories where somebody in your communities, you know, fundamental rights are being violated by the government or local authorities, it's not going to get reported at this time. When the pandemic is over, the situation will be the same. There's a lack of independent autonomous journalists. One of the big problems is money. You know, a lot of the money that used to go into advertising for mainstream media, even local newspapers, is now going online. People are spending their time looking at, you know, online media that has nothing to do with their local communities or even their countries. People are spending their time on YouTube and Facebook, TikToks and other example, where all the advertising is going. So we have a situation where enormous amounts of money are going to only certain media, some of the mainstream, a lot of it to, you know, social entertainment online and infotainment. And the money's disappearing from that money's disappearing and it's having an impact on two things. One is the funding of investigative journalism. Number two, being able to find and support autonomous local media in smaller cities, towns and rural areas. And I've seen that here in North America. And I know the same thing's been happening in Europe and also in Australia and New Zealand. Alrighty. So this means subscribing to your local small town newspaper might be even as well as a step in joining the revolution as using encrypted messaging. Absolutely. It's got to be a grassroots effort from the ground up. Everybody can take their part. We do not have a stage, so you have to imagine the applause that you're getting via IRC right now. Thank you again so much for being with us, Robert Tyvol. Thank you. Thank you.
|
A brief lecture on continued global erosion of civil rights, dismantling of international law, and escalation of human rights abuses under the Covid-19 pandemic, with particular case examples from the People’s Republic of China, Hong Kong under the New National Security Law and the EU. There will also be an update on the circumstances of the Snowden Refugees, namely Vanessa Rodel and her daughter resettling into life in Montreal, Canada and a dramatic turn in Ajith’s (former Sri Lankan soldier) circumstances in Hong Kong as he is caught before a judiciary lacking independence as well as pending passage of new draconian Hong Kong immigration legislation putting him at risk while he awaits Canada’s decision on his refugee claims.
|
10.5446/52290 (DOI)
|
Hello, welcome. I hope it's not strange that the introduction was actually in German, although the talk will be held in English. But I think this was an awesome schedule. Okay, so welcome to the Bits & Bäume movement for digitalization and sustainability, the current needs of bits and trees. Just to make the pun complete, I changed the translation of Bits & Bäume to bits and trees. So what are we talking about today? First, I want to introduce myself. Then I want to talk about some of the topics we're dealing with in Bits & Bäume. Then I will describe the initial conference in 2018, then the demands that came out of this conference. Then I will describe the movement that grew out of the conference. And then I will outline some ways to act, which then hopefully guides perfectly into the discussion. So first to myself, I'm Rainer Rehak. I have a background in computer science and philosophy. I work at the Weizenbaum Institute for the Networked Society as a researcher, and I'm active in the Forum of Computer Professionals for Peace and Social Responsibility. And I was co-initiating the Bits & Bäume conference. Just one word in advance regarding the framing of environmental sustainability. I'm not so much in favor of the framing that we have to protect nature, because the earth does not really care about the human beings. So of course, once the humans are gone, it just needs a certain hundred thousand years, and then everything is okay again. So I think it's really important to say that what we're talking about and what we're protecting is also our livelihoods. So we all live in symbiosis, and you could in a technical way say nature provides services we need to live. So you can see nature as having its own value, of course, but we're actually just fighting for survival. So this is just to make this somehow clear. So the topics: what is the whole thing with digitalization and sustainability about? Well, first, I would consider digitalization somehow the computerization, algorithmization and datafication that takes place all across the board. Computerization means really hardware put everywhere, IoT and such things. Algorithmization and datafication I think are pretty clear terms here. In terms of sustainability, I want to talk about ecological, economical, social, and maybe informational sustainability here. So you could say sustainability means a stable condition somehow that provides a good life for everyone. Well, first I start with ecological sustainability. Maybe some data on the material footprint of the digital systems we're using: 1% of global emissions are online videos. That's 80% of all data traffic. If you add hardware and everything, you're maybe around a few percent in energy use for those systems. Maybe one gigabyte in transfer traffic needs around 0.06 kilowatt hours. So that's kind of one hour of Netflix, or roughly a 30-watt light bulb running for two hours, plus or minus. However, if we take the example of Netflix, they try to be CO2 neutral by themselves, but of course there are intermediaries which cannot be controlled. So we see it's not that easy just to say, you know, I try to be climate neutral. Some people say Google used the same amount of energy as the city of San Francisco at one point. Google says they have 40% energy savings applied right now. However, the rebound effect kicks in if you say that maybe 100 new data centers are being built. So, you know, it's really, really not so easy to count those numbers.
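(Editorial aside, not part of the talk: the back-of-envelope arithmetic behind figures like these can be made explicit. A minimal sketch follows; the per-gigabyte figure is the rough one quoted above, while the streaming data volume and the grid emission factor are illustrative assumptions, not numbers from the talk.)

```python
# Minimal back-of-envelope sketch of streaming energy and emissions.
# KWH_PER_GB is the rough figure quoted in the talk; the other two
# constants are assumed values used only to make the arithmetic concrete.
KWH_PER_GB = 0.06            # network/transfer energy per gigabyte (talk's rough figure)
GB_PER_HOUR_STREAMING = 3.0  # assumed data volume per hour of HD streaming (illustrative)
GRID_G_CO2_PER_KWH = 400.0   # assumed grid emission factor, g CO2 per kWh (illustrative)

energy_kwh = KWH_PER_GB * GB_PER_HOUR_STREAMING  # energy for one hour of streaming
co2_grams = energy_kwh * GRID_G_CO2_PER_KWH      # resulting emissions
bulb_hours = energy_kwh / 0.030                  # same energy as a 30 W light bulb

print(f"~{energy_kwh:.2f} kWh per streaming hour, ~{co2_grams:.0f} g CO2, "
      f"like running a 30 W bulb for ~{bulb_hours:.0f} h")
```

Whichever exact numbers one plugs in, the talk's point stands: the result depends heavily on assumptions about bitrates, hardware shares and how the electricity is generated.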
In Germany, there are data centers that last year took the energy of four medium-sized coal-fired power plants, according to Bitkom, and that's maybe 10% of the electricity generation in Germany for internet-related things. So what I'm trying to say here is that with all those numbers, it's not so easy to put a clear number on consumption if we take energy production into account, whether it's all renewably created. So where's the problem? But then we have the hardware. Where does it come from? And so all those questions are quite complex. On the other hand, you could also look at increasing online usage. Of course, online banking is increasing, but on the other hand, you may need fewer branch offices. But maybe the back office is the same. The same question applies with physical meetings or video conferences, which is the topic we have right now. Of course, people are maybe then more in home office, less traveling, less office use. But on the other hand, and it's not a very small point, you have heating costs and electricity generation then in another place, another spot, maybe with different kinds of hardware. Because of digitalization new behaviors emerge. So you can't really say, it's not so easy to say, if this gets less, this gets more. So those are complicated aspects. So what I'm trying to say here is it's not so easy, if we look at certain small aspects, to see if it's good or not. But we have to put a target, we have to put a goal in terms of ecological sustainability. Right now we have emissions, and there's a 66% chance of staying below 1.5 degrees with a certain budget. Right now, that means with this budget, if we take business as usual, we have around eight years' time globally, and then we have to cut to zero to stay within these limits. Or you can also, if you like, not argue factually but politically, to stay within the Paris Agreement, which limits the emissions. And so this is the goal. The goal is not, you know, how can we save a little bit here or a little bit there, but we have to look at those indicators. But of course, there are other aspects of sustainability. And this is where it gets really interesting. It gets really interesting for our movement or for the idea. Because we have the informational world connected to economical and social and informational sustainability. So as I said before, we're shifting our lives into technical dependency. Somehow we need digital infrastructures that are independent from individual use. We have data, information and knowledge that's being reflected within all those digital infrastructures. So how do we deal with this? What does sustainability mean in this aspect, concerning also the software we use, and also concerning political processes that are maybe enabled by technology? And also technology has to be made more part of democratic and negotiation processes. You could also look at, for example, the Internet and advertising, where right now the ad industry is just used for increased consumption. So you see a very clear connection here between sustainability and digitalization. And this is also part of us always constantly using new devices, if the old ones break, or if they're not usable anymore. So it's resource consumption. So we have to look at the economic parts: monopolies, privacy and surveillance. What does it mean when there's a lot of power over societies and individuals? How does it influence democratic processes? How does it influence the economic processes?
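(Another editorial aside: the "around eight years" statement is a simple division of a remaining carbon budget by current annual emissions. The sketch below uses assumed round numbers, since the talk does not state which budget estimate it relies on; the point is the shape of the calculation, not the exact values.)

```python
# Minimal sketch of the carbon-budget arithmetic behind "around eight years".
# Both constants are assumed round numbers, not figures given in the talk.
REMAINING_BUDGET_GT_CO2 = 340.0  # assumed remaining budget for 1.5 degrees C at ~66% chance, in Gt CO2
ANNUAL_EMISSIONS_GT_CO2 = 42.0   # assumed current global CO2 emissions, in Gt per year

years_left = REMAINING_BUDGET_GT_CO2 / ANNUAL_EMISSIONS_GT_CO2
print(f"At business-as-usual emissions the budget lasts ~{years_left:.0f} more years")
```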
This is also directly in the middle of those two topics. So you could also say there's a representation crisis in democracy, since many people support a shift to sustainability, but somehow it doesn't reflect in policies. So that's a big problem. And we also come to problematic questions like: what if free software was everywhere? But we should have a look at how this free software has been created. If this is a hobby project of one person, then there's little reliability. But this is of course not a problem of free software. But how could we create an environment where free software is the norm and where the people who work there are not close to burnout all the time? So how to create stable communities? This is also something to learn from sustainability. Yeah, other aspects are maybe electricity and transport grids that need to be updated and changed according to sustainability goals. Lots of IT is needed there. And if we bring, let's say, the IT people into those discussions, which are happening of course, then it's not so easy for the sustainability people to fall for the usual blockchain and AI scam. A last interesting topic is maybe trade agreements, where more and more there is IT policy included. And those are questions of sovereignty and control, especially for the countries in the global south. So we see there's a lot connected here if we open this box. So interestingly, we somehow know what to do. We need to limit global warming by limiting emissions. Maybe some people suggest CO2 budgets or caps. We need to abolish subsidies, roll out renewable energies. We need more sustainable mobility concepts, maybe vegetarian food, regional, seasonal, down to changing the whole economic system. And in all those aspects, we see digitalization plays a crucial role. How do we internalize externalities? How do we break up monopolies? I mean, we see that right now with Facebook, with Google, with all those big companies. Is it a problem with tech or is it a problem with monopolies or is it a combination of both? And we also should ask, with the application of technology, does the use case really help with sustainability? We all know the paperless office, which now has more paper than before. So obviously, computers did not help in this aspect. But those are the points where we need to take a closer look at what technological solutions actually provide. On the social level, we have to stop exploitation and think about fair distribution of the benefits of productivity. And finally, informationally, we have to take data protection seriously. Maybe think about commons-based peer production in hardware and software, but then also other digital goods. And think about free knowledge, open knowledge and free cultural products. Saying that, free doesn't always mean it doesn't have to cost anything, but that it's not restricting. So as you might see now, this is very, very complex. This was a very, very complex bunch of questions. And so at one point, a group of people decided to make a conference in 2018, maybe a small view backwards. So a group of organizations found each other, I could say. I don't want to read all of them right now.
The idea was to bring together the environmental folks, the hackers and techies and the development-policy folks to talk exactly about those topics, so that everyone could bring in their abilities and their knowledge, get in contact with each other and connect the communities, with the goal of a common livable future for all and a world for all, which of course includes a clean atmosphere and also needs a clean data atmosphere. The idea was a reflection on the relation of digitalization and sustainability, but also sustainability strategies for projects, and bringing in ideas like convivial technology. Especially interesting, I found, was the discussion about the means-and-purpose relationship: you could say digitalization is a means and sustainability is a purpose, like growth, which is not an end in itself but is supposed to help, and if it does not help, we should stop it. You can ask the same question about digitalization in certain aspects, because the way we digitalize right now is just pouring oil onto the fire. Of course the question is not yes or no, but what do we do and how do we do it: do we use centralized or decentralized systems, and all those questions. As a result of this conference, some concrete demands came out. I don't want to go into detail on all of them; you can check them out on the website. The first point was social-ecological objectives in the design of digitalization: social, environmental and development-policy as well as peace objectives should be part of the direction we are going in. We are talking about technology, so we can shape it as we need it, and it should foster human rights and climate protection goals as well as the end of hunger and poverty, because that is the ultimate goal. The other demands you can check out yourself later; as you see, they range from data protection to monopolies, democracy and education, so all the questions I have been addressing. We tried to put it in a shape that is easy to understand, a small leaflet actually, and it is supported, at least in Germany, by major organizations from the hacker and tech area as well as from the sustainability and ecology area. I'm not going into those details right now. The question then was: we cannot control, and we don't want to control, this whole thing. That's why we said everyone can use the Bits & Bäume label as they please if they adhere to a few rules. You have to work in this direction of a sustainable digitalization. You have to concentrate on active science and civil society, because companies and especially politicians have their platforms already; we want to give a voice to the less heard and, in our view, usually more competent actors. If you support the demands and you live the motto, meaning you organize your events according to those principles, anyone can use the Bits & Bäume label as they want. We have the logo material under free licenses, and you can ask for help at bewegung@bits-und-baeume.org if you like. The result was overwhelming: we have branches, pun intended, in Dresden, Berlin, Hanover, Dortmund, Osnabrück and Cologne, and they come from different areas. Some are closer to the Chaos family, some are closer to the Open Knowledge Foundation family, some come from university backgrounds and from all kinds of backgrounds.
We have mailing lists, a forum, a Matrix chat, and there is even an assembly here at rC3; you can check it out if you find it, that's always part of the game. And today at nine there will also be a Matrix chat. You can find all of this on the website, and you can check out the videos of the conferences that have taken place. So that is roughly the whole movement; that is how it became decentralized, and it turned out to be a really good idea. Finally, we get to the last point, the way to act. Of course individual action is good; if you say you want to stream at a lower resolution, that is totally fine. But it is always important to state that there is a structural problem here: we have a total asymmetry, with a lot of subsidies making the cheapest and easiest option for everything from food to electronics also the most dangerous one for the climate and for avoiding global warming. That is something that really needs to stop and needs to be changed in policy. But that should not keep us from also starting small experimental projects, lab projects, software projects, shaping local groups, going to regulars' tables; we should organize somehow. You can of course come to Bits & Bäume in those different cities if you want, or connect to the online events. And sometimes it is also okay to just switch off the computer and go outside. I want to finish with a quote from Joseph Weizenbaum: the question is not how digitalization changes society, but how society uses digitalization. We try to suggest one way of making it usable, globally, for a good life for all. I hope that was not too much and too fast, but now I'm happy to get feedback and questions if there are any. Thanks a lot. Hi, I hope you can hear me, Rainer. Thank you very much for your talk. We are right now asking again in the chats and on social media for questions about your talk. Maybe we can begin: did you expect this to become some kind of distributed movement, something that really started from one event? We had actually not planned this. But later on we found out that it is, first of all, impossible to contain it, which we also don't want, and it is also not possible to coordinate it, because some of us are volunteers, especially in the tech area, so that is just not possible. And decentralization is always a good thing; that's why we put up those principles. From the beginning that was not the idea, but somehow it came to life, and it turned out to be a good one, because at least in the German-speaking area this label has become something like an indicator for a certain discourse. Think, for example, of Silke Helfrich: she organized a project ten years ago, Genes, Bytes and Emissions, that already tried this, but then came different names and different discourses, so it was hard to trace that back. Maybe it works out that this kind of open label also helps people who work on the same issues to find each other better. So we got a question in the chat: Jan is asking whether this means that there are no big Bits & Bäume conferences in the near future. No, it does not mean that. Let's say there might certainly be a big Bits & Bäume conference in the future, but this should not keep anyone from organizing small ones or other big ones. Let's say some seeds might already be planted, and let's see what happens. That's good. We have another question coming up right now, and I seem to have lost it.
No, here it is: are there any distributed online events or meetups one could join? I think you went into this a bit at the end, but maybe you could repeat where people who are now interested can actually meet others. Yes, definitely. Not only because of the pandemic situation right now, there are meetups planned for 2021 as well; of course, there is not much left of 2020. You can check the website; there is a connection to the forum and to the Matrix chat. I am not that deeply involved in that part myself, but I know it will take place, and there you can find the connection to those local tables. The plan for 2021 is to have a bigger exchange that goes across the cities. I think the website is the place to check; this is definitely planned and certainly a good idea. So you gave this talk in English, despite this being something that originated in Germany: what is the internationalization idea you have in mind? Exactly. The idea is that a lot of the work we have been doing and coordinating should be spread further, to say: hey, people have been thinking about this already. For example, all of the talks from the 2018 conference have been translated into English as well, so if you check media.ccc.de, you can always choose the English language track. But we noticed that while this was nice for the people who were there, it did not gain broad attention. So this is simply an attempt to find others who have been working in this direction, to see that there are other initiatives out there, and then to join forces and try to steer the ship into a sunnier direction again. I can't hear you talking right now. Sorry, I didn't want to interrupt; I just wanted to say that there is another question coming in. Coolish is asking: where can I see some of the projects that took place in the past two years since the conference? I guess the answer is, again, your website, maybe? Yes, partly, but it is a bit distributed. First, the website is a good start. There have been conferences in Dresden, for example, which you can reach via the website at dresden.bits-und-baeume.org, and you can find the documentation there. But I think the forum would be a good place to ask if you can't find the other things. There have also been smaller events, for example at the Internet Governance Forum 2019, in Jena at the Great Transformation conference, or the Forum in Berlin, which takes place roughly every three months as a discussion format on specific topics. We try to announce all of that on the website to bring it together. As I said, if people would like to join, we are happy if you are visionary and bring in your ideas and your content, that is really great; but as with all projects, it is also nice if you say: I find what is happening there interesting, I don't have the big vision, but I'm happy to trace what has been happening and to put it into our history log and the calendar, which we already have in a very basic structure. That has also greatly helped, so that other people don't have to do this work twice. So you will find some of it on the website, but not all of it, and we would be happy if this could be archived in a more structured way.
Yeah, that is always very important with community work: to put in the hours and actually do the archiving work, so that it is preserved for whatever comes up later. Yes. I mean, this is a classic example of sustainability as well: how do you create a sustainable project or a sustainable community? If new people come in, where do they start? You need some kind of memory for this in an organizational sense. So this is a very interesting instance of what sustainability can also mean; it does not always have to be some crazy new idea. If we think about digital archiving and all those questions, this is all part of getting a livable digital environment. Thank you so much, Rainer. I think that's all the questions we have from the audience tonight. Sorry again for doing the introduction in German, I was just coming from that in my mind. Anybody in the audience: if you can't find Bits & Bäume because you don't know how to spell it in German, you can try to find WikiPaka, that's our name on Twitter, and we have a new website we just built today, wikipaka.wtf. Basically, just click on anything and you will be linked to our Fahrplan, our digital schedule, where you will find information about this talk and all the links that Rainer provided. That will give you the information you need. And go to the assembly in the rC3 world, we are there as well. Oh yes, please come find the Bits & Bäume assembly in the rC3 world if you have a ticket. Rainer, thank you so much.
|
The interaction of tech and eco activists gets more important with every year we head deeper into the climate disaster. The talk explains the basic topics, shows past and current activities of our open movement, and explains our concrete demands.
|
10.5446/52305 (DOI)
|
Our agenda for today: we will look at key points of data journalism, quickly explain what Wikidata is, what tools you can use inside Wikidata for data visualization and what third-party tools exist for your research; then we look at critical research done with Wikidata, and finally we take a critical look at the data of Wikidata itself. Key points of data journalism are that you want to interview a data set: you want to find the connections, correlations and causalities behind the data. You also want to visualize the data in a compelling way, and you want to write your own story, find a new spin and a new look at the facts. All of these things you can do with Wikidata. At Wikimedia Deutschland we want to support evidence-based reporting, which is why we want to support you in using Wikidata. Data journalism also helps you tailor your story to your users or readers, it helps you create visual storytelling instead of walls of text, and that again helps you convey facts faster and more easily, which makes your story far more inclusive. So how do you get to a story with Wikidata? You want to find and recognize patterns in a data set. You can search for geographical data, you can search for similarities and differences in the data, and you can also search for missing data, because that exists in Wikidata too. You can visualize your findings with the tools you find in the Wikidata Query Service. And what is most important: you can connect with the Wikidata community and find people who are working on a similar subject or have a research question similar to yours. I included this visualization to show you that data is only the beginning of your story and of the path you will take: you use the data in Wikidata to create a compelling story, and you contribute value and your own idea of what is in the data, because data is a lot, but it is not everything. As we have seen in the last months, many people are not convinced by facts alone. There is also a lack of time and a lack of data literacy in our society; it is not always easy to understand the complexity of historical events and developments, of medical data or of demographic change. So it is important to bring a storytelling aspect to your data, good visualizations and an easy-to-understand approach in order to convey the significance of your data and your story. And finally, it is important to remain transparent and clear about the use and analysis of the data. So what is Wikidata? Wikidata is a free linked database that can be read and edited by both humans and machines, a database of linked open data. That means the data does not just sit there in tables; it can be connected and combined with other data found on Wikidata. As such, it is a realization of the semantic web as dreamed up by Tim Berners-Lee, and Wikidata won a prize for its realization of the semantic web. We just celebrated Wikidata's 8th birthday. It currently holds 90 million items and has 44,000 active users and contributors, which makes it the most edited Wikimedia project. It was initially thought of as a way to support the other projects of the Wikimedia ecosystem, a central storage for the structured data of the sister projects like Wikivoyage, Wikisource and the most famous Wikimedia project, Wikipedia.
But it also has another function, which is to provide free and open data to the internet, and that became really huge. As already said, we now have more than 90 million data items on Wikidata. A colleague of mine created this map, and you can see here the geolocation data that is in Wikidata. We are very proud that it is distributed all over the world, but we also take it with a grain of salt, because as you can see it is very bright in Europe and on the east and west coast of the US, while there are very dark spots where we cannot record knowledge in the same way as we do in our Western societies. That brings us to the question of what knowledge equity is and how we can actually best serve everybody in our global society. So how does it work? Wikidata holds items, which are real things or concepts in the world, like Berlin, Barack Obama or helium, and these items are identified with an ID, the Q-ID, for example Q76 for Barack Obama. These items have labels, descriptions, aliases and sitelinks. Labels means an item is described in all of the languages Wikidata currently holds, which are around 300. Descriptions are short phrases describing what the item represents, and aliases cover the fact that one item sometimes has several names, and so on. An item also has properties; those are used to label the data, for instance that a person was born somewhere, their date of birth or death, or the location of a specific building. Statements hold information using properties, for example P47, "shares border with", or the population. Statements also have qualifiers to expand the information, and they have references, which is very important, because for scientific research you want those references. Here we see again our item Berlin, Q64, with the property for population and the value 3.7 million. What is new about research with Wikidata is that you can ask your own questions. Before, you would go to a library, and some librarians are awesome, but they would give you books with specific facts, and you would consume them and try to use them for your research. With Wikidata you can ask very specific questions that nobody else came up with before. So for your research, you want to write your own Wikidata queries; that is what we have the Wikidata Query Service for. The good news is that you don't have to learn Python or R or become a data scientist, but you do want to learn a bit of SPARQL. We included a few resources in this presentation, there is also going to be a talk by my colleague Lucas on the 29th on how to query Wikidata with SPARQL, and we have a guided tour on Wikidata on our website, which I can recommend. As said, once you have queried your data, you can visualize your results for more compelling storytelling, and there are several ways of doing this; I'm going to show you some just to give you an idea. You could, for instance, ask the query service to show you airports that are named after a person and color-code them according to their gender, the gender of the person, not the airport, obviously. You can ask the query service to show you everything connected to the item Berlin. You can ask it to show you the population of the countries bordering Germany and how it developed over time. You can also ask it to show you the most common causes of death among noble people, a historical overview of space probes, or all of the children and grandchildren of Genghis Khan.
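I will not walk you through the exact query from the slide here, but to give you a concrete idea of what such a question looks like, here is a small sketch of the bordering-countries example. The SPARQL in the string below can be pasted directly into the query service at query.wikidata.org; the thin Dart wrapper around it is only my illustration of calling the same endpoint from a script (it assumes the `http` package from pub.dev, and the user-agent string is made up). The identifiers used are Q183 (Germany), P47 ("shares border with"), P1082 ("population") and P585 ("point in time"), which are easy to double-check in the Wikidata interface.

```dart
import 'dart:convert';
import 'package:http/http.dart' as http;

// Population figures (P1082) with their "point in time" qualifier (P585)
// for every country that shares a border (P47) with Germany (Q183).
const sparql = '''
SELECT ?countryLabel ?population ?date WHERE {
  wd:Q183 wdt:P47 ?country .
  ?country p:P1082 ?statement .
  ?statement ps:P1082 ?population ;
             pq:P585 ?date .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
ORDER BY ?countryLabel ?date
''';

Future<void> main() async {
  final uri = Uri.https(
      'query.wikidata.org', '/sparql', {'query': sparql, 'format': 'json'});
  final response =
      await http.get(uri, headers: {'User-Agent': 'wikidata-story-demo/0.1'});
  // Standard SPARQL JSON results: one binding per row of the result table.
  final rows = jsonDecode(response.body)['results']['bindings'] as List;
  for (final row in rows) {
    print('${row['countryLabel']['value']}: '
        '${row['population']['value']} (${row['date']['value']})');
  }
}
```

The result is exactly the kind of time series you can chart for a story, and the same pattern works for any of the other questions I just mentioned.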
So we had a look at the visualizations inside Wikidata's Query Service, but there are also tools that use Wikidata's data for their own visualizations, and I'm going to show you some of them now. Here is Histropedia, which builds timelines of historical events using data from Wikidata. This is Inventaire: it basically lets you create your own private library and then uses the data from Wikidata to describe the publications. Here is Ask Me Anything, built by different researchers in Europe, which lets you pose questions to Wikidata in natural language, so you don't have to use the query service; that is a way of using Wikidata that is also employed by a lot of voice assistants like Siri and Alexa. Then you have Scholia, basically a platform that collects scientific publications published under open access and can answer questions like who published which paper, with whom and when, or who wrote the first paper on COVID and when it was published. And here we have Sum of All Paintings, a database that aims to cover all of the paintings in the world and lists their metadata so you can combine it in your own specific way. So I showed you a couple of examples of what you could do, and I also want to point to other researchers who did great work with Wikidata and used it for very good storytelling, if my slides work. Okay, here we go. "Women's representation and voice in media coverage of the coronavirus crisis" is a study by a researcher called Laura Jones on the representation of female experts within the coverage of the coronavirus. It uses evaluations of Wikipedia and Wikidata to show how much representation there was of female experts, and as we see, it is not a lot. Finally, there is another great example I want to tell you about: the project Enslaved.org. It is a linked open data platform based on Wikibase, which is the software behind Wikidata, and it collects and connects data related to the transatlantic slave trade: the people who suffered under the slave trade and the records that were kept by the people active in that trade. This data had been collected in several separate databases, and Enslaved built one large database to connect them and reconstruct the stories, which I think is a really great way to use data to humanize people who have been dehumanized. As you can see here, they collect data from newspapers and from the slaveholders' records to recount the stories of individuals. Finally, I also want to talk to you about one thing at Wikidata that is always on our minds, which is that Wikidata is not perfect. I highly recommend the talk "Questioning Wikidata" by Os Keyes, which explains that all classification systems are inherently dangerous. Wikidata is a large encyclopedic classification system which makes choices, ethical and political choices, about what is notable and about how to categorize information. These choices reduce complexity, and they also reduce specific forms of history, like oral history. This reduction has consequences. We know Wikidata is used by many programs, apps and voice assistants, so what we store in Wikidata, and how, really matters. We ask ourselves what encyclopedic knowledge is and how we can organize it in a more inclusive way. Encyclopedic knowledge is a Western concept, and we can and must do better than just using our own Western view to organize the world.
But then the wiki principle also applies: we have a huge community behind Wikidata that helps us make these decisions, and you can become a part of it by researching Wikidata, using it for your work and also contributing your research back. So once again: you can use Wikidata as a tool for your storytelling. Wikidata can help you find connections between data, it can help you build visualizations in its query service, and you can ask questions about historical data and correlations more critically than you could before. But there are also downsides to Wikidata, because it is an encyclopedic, Western way of organizing knowledge. So this was only a start. I'm looking forward to our Q&A session now, and if you have further questions, concerns or ideas, you can contact me and my colleagues, and you can also contact me individually. Thank you. Hello and welcome, Elisabeth. Thank you very much for your interesting talk, that was a great introduction. Hi, thanks for having me. I'm happy that I was able to talk a bit about Wikidata and how you could do storytelling with it. I wanted to add that you can obviously ask me questions now, but I also want to point to the great introduction to Wikidata that two of my colleagues gave yesterday, which is already online, and tomorrow there will be a query service workshop where you can learn in a bit more depth how to query Wikidata. Yes, that is a very good hint. There are actually two questions in the chat right now. The first one is: are your slides going to be published? People are interested in your links to the tutorials, obviously. Yes; I asked before, and I think the talk and the slides will be published. Is there a WikiPaka board where I can put them otherwise? I can also put a link on our Twitter account, Wikimedia Deutschland; it's @WMDE, I think. I think Twitter would probably be the best for now, but also check the WikiPaka board; we will let you know where you can find everything. There is another question: what resources would you recommend for self-studying the writing of queries for query.wikidata.org? I put some links in the slides, and there are a few tutorials on Wikidata itself. A couple of months ago Wikimedia Israel published a very nice and very easy tutorial; we didn't make it, but I can recommend it, it is a very low-key introduction to your first queries. We will publish that link somehow as well. I have a question for you too. You mentioned that Wikidata is a great way to meet other people working on similar topics; is there some kind of larger community of journalists using Wikidata? So far the community is mostly research-based; that is also why we wanted to reach out here. I would recommend getting in touch with the community on Wikidata about the research topics you have, and you can also get in touch with us and we will connect you. I have a noise in my ear, but I hope it is only on my end. I don't have it; it might just be you, but I think there is also an echo on the stream, at least that is what people in the chat are saying. I don't have any other questions in the chat, and since there seems to be an echo on the stream, I don't want to annoy people any further.
So I would suggest that everyone who has further questions for you meets in our BigBlueButton meetup room; I will post the link in the chat right now. We will continue our program here at 2:20 with another talk, about Flutter, by "the one with the braid". So I'm saying bye for now. Thanks. Bye. Bye.
|
Data journalists work similarly to scientists: they formulate (research) hypotheses, analyze data and sometimes even collect data themselves. For this, open data resources are crucial. They can reveal unfamiliar correlations and lead to new questions and stories. Wikidata is the largest linked open data resource on the internet. But what exactly is stored in it, and how can journalists use it for their work? We will have a look at the way Wikidata works and how it can be used, a.k.a. queried, for stories and research.
|
10.5446/52310 (DOI)
|
Hi everyone, I'm Lea, here is Mohammed, and we're going to introduce you to Wikidata today. Yes, hi everyone; in the course of the talk, if you have a question, just feel free to ask it in the chat and we will try to answer it at the end. So let's dive straight in: what is Wikidata? Wikidata is a free knowledge base, based on facts and references, that anyone can edit and reuse. It is part of the Wikimedia projects and, like all of our open projects, Wikidata is multilingual and has no language barriers. Data in Wikidata is released under the CC0 license, which means Wikidata's data is in the public domain and no exclusive intellectual property rights apply to it. Wikidata is not a primary source of information: it only aggregates or collects structured data that is already available, some of which is linked to other databases, so it is not meant to be a place for original research. Wikidata is made for humans and machines and is available for everyone's use, whether on other Wikimedia projects or outside of them. So what is in Wikidata? Wikidata was launched some eight years ago and was originally created to solve the problem of unstructured information in the plain-text format that Wikipedia articles are written in, and to provide a central storage location where all of the different language Wikipedias can connect and talk to each other. Today Wikidata has outgrown its intended purpose and has become so big and successful that it is not only the most edited Wikimedia project, but its data is now used more outside of the Wikimedia projects than within them. There are more than 25,000 active editors, meaning people who make at least one edit every month. Wikidata is used across 800-plus Wikimedia projects in more than 300 languages. And it is interesting to note that the largest proportion of Wikidata's items falls in the category of scholarly items, comprising about 30% of the whole. So far, people and bots have made more than 1.3 billion edits to Wikidata and created more than 91 million items. The map you see here is a visual impression of the items currently existing on Wikidata: the bright areas are items that have a coordinate location property added as a statement. So Wikidata has a vision, and what is this vision? Wikidata's vision is to give more people more access to more knowledge. Wikidata gives access to information regardless of the language people speak: because Wikidata is multilingual, the so-called Q-items carry labels, in effect translations, in many different languages. In doing so, Wikidata helps support the smaller Wikimedia projects by letting them benefit from all the work the bigger projects are doing, and applications and projects outside of Wikimedia are also able to benefit from the rich data set in Wikidata. In a nutshell, Wikidata can be thought of as an online repository of structured data that anyone can edit and reuse. Okay, now how is Wikidata connected to Wikipedia and to the other Wikimedia projects? Among other things, Wikidata can assist the projects with more easily maintainable infoboxes. The table in the corner of this Wikipedia article is called an infobox, which I'm sure you have seen before, and content from Wikidata can be retrieved into those infoboxes.
And smaller language Wikipedias, like the Dagbani Wikipedia or the Welsh Wikipedia, readily leverage Wikidata to fill their content. This is helpful because it reduces the editing workload for volunteers. So what should you expect to see on a typical Wikidata item? Wikidata expresses relationships in the form of triples that use items starting with Q and properties starting with P, and an item will typically carry at least one statement. In this example on the screen we have two statements about an entity called Douglas Adams. The first statement, Douglas Adams (Q42) was educated at (P69) St John's College, is qualified by further properties: the academic major, the academic degree, the start time and the end time. Qualifiers add more meaning to statements. Wikidata records not just statements but also their sources, which helps us reflect the notion of verifiability on the project: the statement that Douglas Adams was educated at St John's College has two references that point to the source of that information. The second statement, Q42 was educated at (P69) Brentwood School, only has the qualifiers start time and end time and has no references. So a single statement consists of a property with a value, with or without references, and with or without qualifiers. A typical Wikidata item page looks like this, and you can edit it by clicking on the edit button next to each part. As you can see, each item has a unique ID, a Q followed by a number; in this case the item for Douglas Adams has the Q-ID Q42. At the top there is what we call the termbox, which contains the label in different languages, a description of the item, that is, a short phrase telling us what the item represents, here in English that Douglas Adams is an English writer and humorist, and the aliases next to the description, which tell us what else the item could be known as. Creating a new item is as simple as going to any page on Wikidata and clicking on "Create a new item"; you then fill in a form asking for a label, a description and aliases, and the Q-ID is assigned automatically. There are also tools that allow us to edit Wikidata more efficiently and make bulk edits, such as QuickStatements and OpenRefine, which let us make automated edits and changes to Wikidata. Other tools are available that allow us to visualize Wikidata's data, and some enhance the user interface of Wikidata; these can be user scripts that editors install, or gadgets that can be enabled in your preference settings.
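Before we move on, to make the "for humans and machines" point concrete: the Douglas Adams item we just looked at can also be read by a program. The following is only an illustrative sketch, assuming Dart with the `http` package (the same JSON is visible in any browser under the URL used below); it prints the English label from the termbox and the "educated at" (P69) statements together with how many references back each of them.

```dart
import 'dart:convert';
import 'package:http/http.dart' as http;

Future<void> main() async {
  // Every item is available as JSON; Q42 is the Douglas Adams item.
  final url =
      Uri.parse('https://www.wikidata.org/wiki/Special:EntityData/Q42.json');
  final response = await http.get(url);
  final entity = jsonDecode(response.body)['entities']['Q42'];

  // The label from the termbox, here in English.
  print(entity['labels']['en']['value']);

  // P69 = "educated at": each statement has a main value ("mainsnak")
  // plus optional qualifiers and references.
  final educatedAt = entity['claims']['P69'] as List;
  for (final statement in educatedAt) {
    final school = statement['mainsnak']['datavalue']['value']['id'];
    final references = (statement['references'] as List? ?? []).length;
    print('educated at $school, backed by $references reference(s)');
  }
}
```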
So far, Mohammed told you about how we describe concepts in Wikidata, and that is what we did for the first years of the project. But in 2018 we also started storing a new type of information in Wikidata: lexicographical data, which is basically information about words and phrases in all kinds of languages. On the left you see the data model, which is a bit complex, and that is why I'm not going to go into the details now, but we can talk about this later. On the right you can see an example where we basically describe the word "Luftballon" in German and indicate the language, the lexical category and all kinds of information that is no longer about an object but about the word itself, for example how it is composed of two words, as we like to do in German, and things like this. Again, if you want to know more about this, you can have a look at lexicographical data on Wikidata, or we can talk about it together later in the questions, for example. Wikidata doesn't come alone: it comes with a bunch of tools, some of them developed by the Wikidata development team, some developed by the community itself, to do things more efficiently. That can be adding data, including some of the tools Mohammed already mentioned, matching data with other databases, querying the data or reusing it. There is also a bunch of tools for watching the data and its quality, watching which edits have been made recently and so on. You can find the page called Wikidata Tools on Wikidata to discover plenty of these tools, and you can of course create your own. We mentioned that the goal of Wikidata is to be reused by everyone, but who is actually reusing the data? The first re-users of Wikidata's data are the Wikidata editors themselves, because all of these items are connected: one item can be linked from another, the content of one item can be reused in another, and so on. The Wikimedia projects, such as Wikipedia but also Wikimedia Commons, Wikisource and by now almost all of the Wikimedia projects, reuse parts of the data coming from Wikidata. Then we have companies, from the biggest to the smallest, because the data is CC0 and everyone can just reuse the content they need. We have public institutions such as museums and libraries, we have journalists, for example data journalists, we have scientists and researchers, and probably many more. The thing is that we don't necessarily know who is reusing the data, because it is out here in the open, so there are probably many usages we don't even imagine. If you are reusing Wikidata, or if you would like to use Wikidata's data, let us know, because we are always interested to discover more. Now the question is: how can one reuse Wikidata? I'm going to present very quickly one of the most popular ways to query the data. I'm not going to go into details right now, because there will actually be a workshop at the conference in two days, on day three, about the query service, so you can go there and discover more about how to use it. The query service is basically a SPARQL endpoint, SPARQL being a query language that lets you ask questions to Wikidata and get lists or visualizations as results. For example, here is a map of the airports of the world that are named after a person, where the color of each dot represents the gender of that person. Or you can make a list of country flags that include a sun, because if the data is properly modeled in Wikidata, you are able to describe the different elements that compose a country flag.
Or you can have this bubble chart with the occupations of accused witches, because why not? That is the kind of data we have in Wikidata. Now there are other ways, of course, to query the data. I'm not going to go into details right now, but if you want to talk more about this you can, for example, join the Wikidata meetups that are going to happen tomorrow. We have dumps of the data, where you can download part or all of the data in a file. We have a bunch of APIs to access the data directly from your program. And on the Wikimedia projects specifically, the community has developed a bunch of templates that use Wikidata's data via Lua. And now for something a bit different: Wikibase. You may have heard of it, and you may even have wondered what the difference between Wikibase and Wikidata is. Wikibase is basically the software powering Wikidata, more precisely the MediaWiki extension that turns MediaWiki into a structured database. Wikibase was started to power Wikidata, but it has also started developing on its own. Wikidata is still, for now, the biggest existing Wikibase instance, but people can also install Wikibase directly on their own server and basically create their own little personal or public Wikidata. The development is still ongoing, and there are all kinds of exciting features coming up soon, for example the ability to better connect Wikidata and your own instance of Wikibase, so you can reuse data that is already in Wikidata and connect it to the data you have in your own Wikibase. If you are interested in Wikidata and want to know more, there are a bunch of pages you can find: there is a Help portal, and the Project chat is the main discussion page on the wiki where you can interact with the other editors, the community; it is super important to get in touch with them if you want to get started with Wikidata. We also have a mailing list, and we have a newsletter, the Wikidata Weekly Summary, which you can find on the wiki and also receive if you subscribe to the mailing list. And then we have accounts on social media: on Twitter, there is a Facebook group, there is a Telegram group that is linked from the Project chat, and there is also an IRC channel. So you can basically find people from the Wikidata community everywhere. We are approaching the end of this session, but it is not over: we have more Wikidata-related sessions at rC3 in the WikiPaka assembly. For example, tomorrow you will get an introduction to Wikidata specifically for journalists, and especially data journalists. Then in the afternoon we will have two Wikidata meetups, the first in German and the second in English, so depending on your preferred language you can attend one or the other, or both. And on day three, as I mentioned before, we will have a workshop to learn how to query Wikidata's data with SPARQL. So feel free to have a look and check them in the main WikiPaka schedule as well. Thank you very much for attending this session. These are our contact details if you want to contact us, and of course you can now ask questions in the chat or with the hashtag, and we will be very happy to answer all of them right now. Thank you for your input and the overview of Wikidata. A few questions have already been answered by Jules in the IRC channel.
One was about the big dump of scholarly data, what scholarly data is and how it came to be in Wikidata. But there is one more question from the chat right now. Till asks: can I add new types of data that are not yet tracked in Wikidata? So I'm wondering what exactly you mean by type of data; maybe you can give a bit more detail, because that can mean a lot of things. The data model of Wikidata is very flexible and absolutely not set in stone. Every week the community comes up with new ways to describe things. Sometimes we realize that there is an area of the world that we completely forgot to cover, and then we create new properties to describe, for example, a certain type of building or a philosophical concept that we have not described yet. So this is always in motion. When it comes to what we actually call data types, which are, for example, a string of text, a date or a picture, we have all kinds of data types already. That is a bit more complicated: overall it is quite rare that we add a new data type, and it needs a strong use case, because it has to be added to the software. I hope that answers your question, and if it does not, feel free to ask again. We've got feedback: the example Till meant is an organization, a project called Parliament Watch in Germany. There was a talk earlier today where they tried to track, scrape and analyze the parliamentary protocols, and one big issue they had was with structured data about all the members of parliament and how they are organized and things like that. If I remember correctly, there actually was a project that tried to include the structured data of members of parliament in Wikidata, if I'm not mistaken. Absolutely, it is a WikiProject called "every politician". Indeed, some people are already working on members of parliament and political people in general, so it is very likely that there is already a way to structure the data. The best way is to contact the people directly involved in this WikiProject. WikiProjects, by the way, are pages where people with a specific topic of interest gather and can discuss the specific questions about this topic. So have a look at this project about politics and try to see if anything is missing. Generally, Wikidata definitely welcomes information about politicians and members of parliament, this kind of thing. What we do not do, however, is store the full documents, in that case the reports or the protocols; those belong elsewhere, maybe on Wikimedia Commons, for example, if the license allows it. But on Wikidata we will be happy to store the metadata about them. All right, I will just post a link to the WikiProject; if anybody searches for "every politician" on Wikidata, they will find the project. The bottom line is that pretty much anything is possible in Wikidata, right? Yeah. Thank you, Jules. And hi. Almost everything. On Wikidata, just like on Wikipedia, we still have some criteria that define what can get into Wikidata and what cannot, because we are aware that this knowledge base needs to stay quite general and cannot contain absolutely everything. For example, the community decided a while ago that they would not create one item for every human who lives or used to live on Earth; that is just not possible.
So there are some notability criteria that you can find in the help pages, and I would say that the question of how fine-grained the data should be has to be discussed with the community. The good thing about having Wikibase available as a separate instance, apart from Wikidata, is that if some people want to work on a topic where they have information that is very, very specific and would maybe not fit the scope of Wikidata, they can create their own Wikibase and then connect that content with what is already in Wikidata. So altogether, in this Wikibase ecosystem, yes, pretty much everything is possible. Well, the future is certainly here, at least with Wikidata. Thank you again, Lea and Mohammed, for your insightful introduction to Wikidata, and we are looking forward to more people joining you in your efforts. Thanks for your presentation. Thank you. See you soon. Thank you. Bye. Bye.
|
You certainly know Wikipedia, but did you know that the free knowledge base Wikidata gathers a lot of information as open data, collected and organized by a community of contributors? Let's discover more about Wikidata: how it works, how the data is structured, and how you can contribute!
|
10.5446/52311 (DOI)
|
Good morning and welcome to my second talk on Flutter. My name is "the one with the braid", and in today's talk I will focus on animations and on the rendering done by Flutter's graphics library in the background. Okay, let's begin with motion. Motion is part of the user experience, so this is not only a technical talk but also a talk on how motion and app design affect the user. And that already points to the first question: why do we need animation? There are different reasons. Sometimes we want to emphasize hierarchy. For a moment I will slow down the animation: here you can see this list tile which expands, and the box being created from this list tile shows that the list tile actually contains whatever is shown afterwards. This is a kind of hierarchy expressed by the animation. Another common use case is the indication of status. Here you see a kind of progress indicator, these list tiles glowing and later the messages appearing; that indicates the status of the application, it tells the user that it is not ready yet and that they should wait until everything is loaded. A third part is feedback. Here you can see a card being dragged around and the other cards moving correspondingly; that shows users that their actions are successful, that whatever they perform is a valid action in the app. Another point of animation is user education: when you tap the lock screen, nothing happens, but if you swipe up it works, and this bouncing animation at the bottom indicates that the user should try to swipe upward to unlock. Now let's talk about Flutter and animation. All these animations are possible in Flutter; the question is only how. The Flutter documentation has a quite full decision guide for animations, and I will try to break it down, because even if we zoom in, that chart is not that helpful. So let's go through the kinds of animation step by step. First of all, we have implicit animations. An implicit animation is a quite tiny but useful animation. It consists of three parts: a value which is being animated, maybe an integer, a color, a size, whatever; a duration, so the amount of time the animation takes; and a curve along which the animation is performed. I found a nice graphic showing these curves: they are easy to use and do not require any advanced knowledge of Flutter or any mathematics. Here you can see a couple of curves provided by Flutter; of course you could also create your own curves. You have ease curves, accelerating and decelerating, you have exponential acceleration and deceleration, you have bouncing curves, you have ease-in, ease-out and elastic curves, so you have pretty much everything you can imagine. They are built in and accessible via the name of the curve. An example of an implicit animation is an AnimatedContainer. An AnimatedContainer is a widget which simply animates all the values it is given. In the code sample, and I'm unsure whether you can see my mouse here, but at least you should be able to see the selection, you see that it checks whether a "selected" boolean is false or true and correspondingly sets the size and the color of the container. It is given a duration of two seconds and a curve, and apart from that it simply contains a child widget. As soon as the containing widget changes the selected boolean and triggers a rebuild of this stateful widget, all these values, the alignment, the color, the height and the width, are animated using that curve and duration. It is a very simple animation, but it is useful in many cases.
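I will not read out the exact code from the slide, but a minimal sketch of such an AnimatedContainer could look roughly like this; the widget and field names are my own illustration rather than the ones on the slide.

```dart
import 'package:flutter/material.dart';

class SelectableBox extends StatefulWidget {
  const SelectableBox({Key? key}) : super(key: key);

  @override
  State<SelectableBox> createState() => _SelectableBoxState();
}

class _SelectableBoxState extends State<SelectableBox> {
  bool _selected = false;

  @override
  Widget build(BuildContext context) {
    return GestureDetector(
      // Toggling the flag triggers a rebuild; AnimatedContainer then
      // animates from its old values to the new ones on its own.
      onTap: () => setState(() => _selected = !_selected),
      child: AnimatedContainer(
        duration: const Duration(seconds: 2),
        curve: Curves.fastOutSlowIn,
        width: _selected ? 200.0 : 100.0,
        height: _selected ? 100.0 : 200.0,
        color: _selected ? Colors.red : Colors.blue,
        alignment:
            _selected ? Alignment.center : AlignmentDirectional.topCenter,
        child: const FlutterLogo(size: 75),
      ),
    );
  }
}
```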
Another kind of implicit animation is the so-called TweenAnimationBuilder. We will talk about builders and AnimatedBuilders later; here is a quick introduction. A TweenAnimationBuilder consists of three parts: a tween, that is, the range of values across which the animation takes place; a duration; and a builder which is given the current value of the tween and can change itself based on that value. In this case we have the angle driven by the tween, and as soon as the tween value changes, so every frame, the builder is rebuilt using the new angle. That is another easy way to animate in Flutter. A more complex approach is the explicit animation. Of course, all motion we perceive in reality consists of many frames; the human eye can perceive 30 or more frames per second, but modern devices feature 60 or 120 frames per second, and high-end devices even more. An explicit animation allows painting every single frame the application renders, so with an explicit animation you control each of these frames. It is used for complex animations for which no built-in widgets are available, and if it is not implemented correctly it has a high performance impact, so use it with caution. What is the recipe for an explicit animation? You take a StatefulWidget, you mix in SingleTickerProviderStateMixin, you use an AnimationController and sync it to that ticker provider via vsync, you add a listener, and on every tick of the controller you call setState. So here we have created a widget state with the SingleTickerProviderStateMixin, we have an AnimationController with its vsync set to our single ticker provider, and we listen to updates of the animation controller, which on high-end devices means 120 times per second. In this update method we call setState and simply rebuild the widget, and then we can do whatever we want with the value we animate. Please pay attention to disposal: the AnimationController exists until it is disposed. That means that as soon as you remove the widget from the screen, or if you move to another page of your application, the animation controller would actually continue to animate, so you explicitly have to dispose it. In my example it simply runs forward and repeats infinitely. That is the very low-level way to animate, and it is actually how all the animations built into Flutter work under the hood. The AnimatedBuilder and all the different animation builders make use of this. An AnimatedBuilder is a widget which does exactly what we were just talking about in explicit animations under the hood, and provides a builder with the current value of the animation controller. That is much easier to use, and why should you write an explicit animation by hand if you can use an AnimatedBuilder? Let's have a look at one. For example, here we have an AnimationController running for ten seconds and repeating infinitely. Based on the value of the animation controller, which is part of our widget, we have an AnimatedBuilder which builds according to that value; in our case we simply rotate a box, whatever it may be.
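Again as a sketch rather than the exact slide content: the ten-second rotating box driven by an AnimationController and an AnimatedBuilder could be written roughly like this (the class name is my own choice).

```dart
import 'dart:math' as math;
import 'package:flutter/material.dart';

class SpinningBox extends StatefulWidget {
  const SpinningBox({Key? key}) : super(key: key);

  @override
  State<SpinningBox> createState() => _SpinningBoxState();
}

class _SpinningBoxState extends State<SpinningBox>
    with SingleTickerProviderStateMixin {
  late final AnimationController _controller = AnimationController(
    duration: const Duration(seconds: 10),
    vsync: this, // ties the ticks to the device's frame rate
  )..repeat(); // run from 0.0 to 1.0 over and over again

  @override
  void dispose() {
    _controller.dispose(); // otherwise the controller keeps ticking
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return AnimatedBuilder(
      animation: _controller,
      // Only this builder runs on every tick, not the whole widget tree.
      builder: (context, child) => Transform.rotate(
        angle: _controller.value * 2 * math.pi,
        child: child,
      ),
      // The child is built once and reused in every frame.
      child: Container(width: 100, height: 100, color: Colors.green),
    );
  }
}
```

Note that the builder approach avoids calling setState on every tick by hand, which is exactly why it is the preferred way for most custom animations.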
That was the hard part, and in most cases you do not need such difficult animations; usually simple animations like the Hero animation are sufficient. What is a Hero animation? A Hero is a widget you can place around any other widget. For example, if we have an image, we place a Hero around it and give it a certain tag; a tag can be any object, a string, an integer, a widget, whatever. If we place a Hero around a widget, perform a page route, that is, we move to another page of our application, and use a Hero with the same tag there, with a similar-looking child, the hero simply moves from its first location to its second location and scales to the corresponding size. That is very useful for simple transitions. You know it from floating action buttons, for example, or from any image gallery: if you tap an image it zooms in, and that is not as special an animation as you might think. You would never fade an image in from the bottom as a normal page transition; you simply zoom in the image, by surrounding the image with a Hero and moving to another page containing the same image in a Hero with the same tag. It is useful for simple animations and very easy to implement. Here I found a graphic explaining how a Hero works: we have our source hero, a big image in the middle of the screen, and during the page route the hero appears again at its destination, at the top on the left side, so the hero will scale and move to the destination position. That is simply how you would implement a Hero: we have, for example, a container containing something, we put a Hero inside, and on the other page we move to, we implement a card or whatever containing a Hero with the same tag; then the contents of the hero fly over to the destination hero. Please pay attention not to use the same hero tag twice on one page, otherwise you will run into trouble, because Flutter will not be able to decide which hero to animate towards. If you put several FloatingActionButtons on one screen, you will run into exactly that, because Flutter tries to animate to both floating action buttons and raises errors. That was the easy-peasy part; let's talk about more complex transitions. For these complex transitions, Flutter provides the animations package. It is not part of the Flutter framework, because it is not an everyday use case and was therefore moved to an external package, but it is an official package from flutter.dev. The animations package contains pre-built animations for common cases. It is used for UI transitions, so rather complex transitions, and it is comparatively easy to implement. Here we have the so-called container transform: you can see several containers which expand in some way, for example the floating action button which is transformed into a card or a page, or here a card expanding and showing more detail. That is a common use case for container transforms. Here I have a code example; it is actually quite easy to implement a container transition. It is called an OpenContainer in the animations package. You have a builder for the closed state and one for the open state. As soon as it is tapped, it renders the transition from the original, expands to a full-page container and builds the destination content, and as soon as you tap back, it builds the closed state again.
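A rough sketch of such an OpenContainer, assuming the `animations` package from pub.dev has been added to the pubspec; the card content is of course just an example of mine, not the one from the slide.

```dart
import 'package:animations/animations.dart';
import 'package:flutter/material.dart';

class ContactCard extends StatelessWidget {
  const ContactCard({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return OpenContainer(
      transitionDuration: const Duration(milliseconds: 400),
      // What the user sees while the container is closed: a simple list tile.
      closedBuilder: (context, openContainer) => ListTile(
        title: const Text('Ada Lovelace'),
        subtitle: const Text('Tap for details'),
        onTap: openContainer, // starts the container transform
      ),
      // What the tile expands into: a full detail page.
      openBuilder: (context, closeContainer) => Scaffold(
        appBar: AppBar(title: const Text('Ada Lovelace')),
        body: const Center(child: Text('Details go here')),
      ),
    );
  }
}
```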
Another common use case is the axis transition you know from onboarding screens or coding guides. I do not know whether you know Google's codelabs: they have numbered steps and pretty animations, and with Flutter you can build these pretty animations along an axis yourself. There are two flavors of these axis transitions. The first is the SharedAxisPageTransitionsBuilder, which is a page transition: you provide two pages of your Flutter application, and the animations package takes care of the transition between those two pages; it hooks into the material page route, so it is another kind of page transition. The other one is a simple widget containing several other widgets, between which one of these axis transitions is performed. An axis transition can take place along the x, the y or the z axis; in the examples you can see all the different axes, horizontal, vertical and through the z axis. Another very commonly used animation is the AnimatedIcon. AnimatedIcons in Flutter are pre-built animations for icons, for example a play button in a music player which transforms into a pause button. It looks nice, is very easy to implement, and shows the user what their action has caused. Here are some examples of different icons and how they can behave when you tap them, and there are more complex examples as well. How to do it? Well, it is easy peasy: just use an AnimatedIcon, pick one of the provided AnimatedIcons constants, have a look at the API docs for what is available, and drive it with an animation. It can be used, for example, in a drawer to show a back arrow as soon as the drawer is open.
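As a small sketch of the drawer example, an AnimatedIcon driven by an AnimationController, morphing the hamburger menu into a back arrow and back again; the class and field names are my own illustration.

```dart
import 'package:flutter/material.dart';

class MenuButton extends StatefulWidget {
  const MenuButton({Key? key}) : super(key: key);

  @override
  State<MenuButton> createState() => _MenuButtonState();
}

class _MenuButtonState extends State<MenuButton>
    with SingleTickerProviderStateMixin {
  late final AnimationController _controller = AnimationController(
    duration: const Duration(milliseconds: 300),
    vsync: this,
  );
  bool _open = false;

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return IconButton(
      // AnimatedIcons.menu_arrow is one of the built-in icon pairs; the
      // progress value between 0.0 and 1.0 comes straight from the controller.
      icon: AnimatedIcon(icon: AnimatedIcons.menu_arrow, progress: _controller),
      onPressed: () {
        _open = !_open;
        if (_open) {
          _controller.forward();
        } else {
          _controller.reverse();
        }
      },
    );
  }
}
```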
As soon as everything is properly laid out, the frame is recorded by Skia and is painted to our canvas. And there is one more important point to mention — it is not really about animations, but it belongs here: that is the major difference between stateless and stateful widgets. A stateless widget has a constructor and is simply built, no matter in which build context. A stateful widget, in contrast, creates its state for each new build context, but it does not rebuild the whole widget. So even if our parent widget changed, if we provide the same stateful widget in our changed parent widget, it will still have its state and will not be rebuilt from scratch. Okay, what does that mean? If we have our state, we first initialize it — that is where you would, for example, declare the animation controller. Afterwards, the first build is triggered. If nothing else happens, as soon as our parent widget decides to no longer show our widget, our widget is disposed. But a stateful widget — or rather its state — is able to set its own state to trigger a rebuild. Or if the parent widget changes, it is rebuilt as well, but it is not re-initialized; it is only rebuilt. And that is the major difference. And through this entanglement of so many widgets, stateful and stateless, on one rendering pass of our graphics library — on one of these up to 120 frames per second — only the tiny parts of the application which really changed are rebuilt. That is why Flutter has such an incredible performance in animations and layout, and works even on low-end devices with up to 120 frames per second. At that point, I would like to stop this talk. I hope I could give you a good impression of animations in Flutter and how the rendering of the engine works in Flutter. I would now like you to enjoy the Q&A and the rest of the remote chaos experience. Thank you. Thank you very much for the session about Flutter. The thing is, one question I had is that Flutter was developed by Google, and not everybody likes Google, so what would you say to people who are wary because of that? Me neither. I actually avoid anything from Google, but Flutter is open source. And so I see no issues in using Flutter, especially because it's powered by a great community. There's no requirement for any proprietary servers, and so I see no issues there. You can include dependencies from any Dart package server you want. So yeah, it's actually quite useful. It's BSD 3-clause licensed, so it is as free as you can expect it to be. Thank you very much. Once again, whoever's watching us and has a question now, now would be the best moment to put your question into the IRC or on Twitter or Mastodon, so we can read it and answer it now. So until these questions, maybe another one rolls in, can you give us a few examples? We have a new question, but I have another question first, sorry. So give us a few examples of what you have built in the last months with Flutter, because I know you've been developing with Flutter for some time now. Yeah, currently I am working on an application called Xournal++, which is a mobile port of the known note-taking software Xournal++, which I'm currently porting to Android and iOS together with the original maintainers of the Linux and Windows application. Oh, that sounds interesting. Thank you very much. So we now head over to one question. Lin Mop on IRC has a question, I'll read it off.
What's the progress of the port of Flutter to ARM Linux for devices like, for example, the PinePhone? Do you know about that? I recently read on GitHub about this. There is work in progress. I think the Flutter tool is now capable of running on Linux on ARM, but it's not possible yet to build release-mode applications, neither on ARM nor for ARM — but they are working on it. Okay, so we don't know at the time being when this is going to be done. No. Okay. Thank you very much. How can people learn more about that, or can they contact you if they have questions afterwards, maybe? I'm available here in the WikiPaka assembly. I guess there's a Q&A session in BigBlueButton later, in which you can contact me. Otherwise, just search for "the one with the braid" on the Internet; you will find possibilities to contact me. I'll post the link to the further Q&A in the IRC if anybody wants to use that. Thank you for the question, Lin Mop. Thank you for your talk, the one with the braid. I think we don't have another question at the moment right now, but we recorded this for posterity. If anybody sees this later, they will contact you, I guess. Thank you so much for your contribution. See you again in the program tomorrow. Yeah, goodbye. See you tomorrow.
|
After the introduction into Flutter, we will have a closer look at some advanced features of the cross-platform software development kit Flutter. In this talk, we will focus on animations, state management as well as localization, native platform interfaces and the underlying Flutter engine. Note: You should have basic knowledge of Flutter. Please refer to media.ccc.de for an introduction into Flutter.
|
10.5446/52312 (DOI)
|
Hello and welcome to this first talk. Today I'm going to get into cross-platform development using Flutter. What is Flutter? Flutter is a cross-platform development kit — here you can see a fancy logo over there. We will talk about how to install Flutter. We will talk about the special features of Flutter, namely widgets, and we'll have a look at plugins. We will have a look at two different kinds of these widgets, stateful and stateless widgets. At the end, we will talk about the main feature of Flutter: layout as part of the code — you do not have separate layout files. How would you install Flutter? Well, if you are used to Git, it's actually quite easy. You simply clone their Git repository, update the path and you have the Flutter tool installed. That installs two things: the library Flutter and the programming language Dart. Dart is the programming language you use with Flutter. Of course, you could use Dart without Flutter, but Dart is usually used with Flutter and Flutter only works with Dart. If you are not interested in cloning Git repositories, if you are not that used to command prompts, you could easily install Flutter using the plug-in of your development environment, for example Visual Studio Code or Android Studio and IntelliJ; they offer very user-friendly plug-ins with a quick installation guide for Flutter automating all these steps. What is Flutter like? If we have a look at Flutter, we talk about different things. We have the framework, written in Dart, we have the engine and we have platform-specific code. Flutter consists of something called the Flutter tool; that's not listed in the graphic you can see there. That's what you use to create an application. For example, if you type "flutter create my_new_application" in the command prompt, that's the Flutter tool you use. But as soon as you run an application, it works the way the graphic presents it. You have this framework consisting of everything you can see and everything you can do. So you have buttons, for example Material buttons — Material and Cupertino are the two main theme styles: Material is the Android or Chrome OS style and Cupertino is the iOS-style user interface. The framework also takes care of rendering, animations and interactions with users. So if you tap a button or if you move something around on the UI, that's something the framework takes care of. And under the framework, there's the engine. The engine operates everything which is not specific to your application — the general stuff of Flutter. It takes care of the interaction with the Dart virtual machine, it takes care of platform channels, for example if you want to access native code, it takes care of accessibility, it interacts with the operating system and so on. And beside those two, there's still the embedder. The embedder is what is specific to one kind of device or platform, for example Android. The embedder takes care of threads and process management, takes care of the event loop of the operating system, and it takes care of interaction with native plug-ins. And most important, it's responsible for packing the application. For example, if you have raw Dart code, no device would be able to execute it. So the embedder is responsible for packing this code into an executable on Windows, into a JavaScript file on the web, or into an APK file on Android. Okay. Well, now I already introduced these widgets. I talked about Material and Cupertino widgets. But what is a widget?
Yeah, a widget is pretty much everything you can see in a Flutter app. A widget is any user interface element, sometimes allowing interaction, sometimes not. But everything you can see in an application is called a widget. You can imagine widgets like, for example, HTML elements: you simply put them into each other and create a document tree. But unlike HTML, where you have HTML for the layout, CSS for the style, and JavaScript for the interaction, in Flutter these widgets provide all three parts. So the widget performs the layout, the widget offers style, and it offers interaction with the user. Hence, you do not have any separation between style and content of the application. That's a very good feature for development and makes many things easier, such as refactoring code. But there are different types of widgets. There are stateless widgets without any kind of feedback they can provide. They are rendered once, and afterwards they are just present. Or if the parent widget decides, well, I no longer want to show this text, for example, then it's just removed without any interaction of this widget. The other kind are stateful widgets. They allow interaction. So for example, if you have a text as a stateful widget, it is able to tell the application after a couple of seconds: now I want to change my own color, or I want to change my font size. So it has an event loop and can decide based on things happening inside this widget. These are usually not the low-level widgets like text, but more the high-level widgets like list views consisting of several children, menus — they consist of a menu button and a drop-down menu and whatever — or even the whole page of an application. All these are widgets, stateful widgets. OK, time to provide some code samples. That was a bit of an introduction into the architecture, so let's have a look at code. Well, congratulations — that's a simple Flutter program. If you write it and you provide a declaration of the homepage, you should be able to run an application on your mobile phone. What does it do? It executes a main function calling a function called runApp, which runs a MaterialApp, following the Material Design from Android or Chrome OS. OK, but of course, we need to implement a homepage. Well, let's have a look at a slightly more difficult widget. Its build method tells Flutter everything it needs to know for building. In our case, we simply return a list tile consisting of an icon and an outline button. And the outline button can do anything — it can share a text, so you would see a share prompt on your mobile phone, or on the web, it would download the text. OK. But why is it stateless and not stateful? Simply because it cannot interact with itself. The widget is unable to change one of its variables. The widget cannot set a timer. It simply could not. If you would tell the widget, well, wait five seconds and then do whatever, it would not change the appearance of the widget, because it is built once and afterwards it no longer has the ability to change its appearance or behavior. Only the parent widget — so for example the list we put this score detail widget inside — could trigger a rebuild of this widget, but not the widget itself.
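The slide with this example is not in the transcript, so the following is only a rough reconstruction of what such a stateless widget could look like. The class and field names are guesses, and the share action is just a placeholder callback — in a real app it would call a share plugin:

```dart
import 'package:flutter/material.dart';

// Stateless: built once from its inputs; it can never change itself later.
class ScoreDetail extends StatelessWidget {
  const ScoreDetail({Key? key, required this.title}) : super(key: key);

  final String title;

  @override
  Widget build(BuildContext context) {
    return ListTile(
      leading: const Icon(Icons.music_note),
      title: Text(title),
      trailing: OutlinedButton(
        // Placeholder for the share action described in the talk.
        onPressed: () => debugPrint('share $title'),
        child: const Text('Share'),
      ),
    );
  }
}
```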
To clarify this point, we will have a look at a stateful widget. The code is a bit short here, because a stateful widget consists of two classes: the state class — that's what you can see over there — and, well, the actual declaration that it is a widget. But the state is much more interesting if we look at it. You first see that we initialize some variables. Afterwards, we have a method called initState; that's something which is triggered the first time the widget is built. Afterwards, we declare another method, and at the end, we have our build method. So what's the difference between this build method and the build method we had in our stateless widget? I hope you can see my pointer. We have an if statement here, a short if statement. So the build method checks whether a variable declared at the top here — whether the data is loaded — is false or true, and it reacts correspondingly. So if it's true, a list view is displayed, and otherwise a progress indicator is shown. OK. But, well, that's something we could still implement in a stateless widget. But there's another big difference here. We have something which changes something as soon as something happens — well, many somethings. It's an expansion tile, so a list tile which can be expanded; it's a built-in widget of Flutter. And as soon as it is opened, a local method is triggered — here we have this loadScore method, and that is triggered. We do not know what it does, but I can tell you: it will load some data from wherever and it will change this variable. So after this method has been triggered, the data here will be something different. The widget will no longer show the progress indicator, but it will show the data — inside a single widget, without any communication, without any external stuff, without any JavaScript getElementById or something like that. The widget simply decides on its own behavior. That's very comfortable, believe me.
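Again, the actual slide code is not in the transcript, so here is only a stripped-down reconstruction of the pattern described — a loading flag, a loadScore-style method that calls setState, and a build method that switches between a progress indicator and the loaded data. The names and the fake delay are invented:

```dart
import 'package:flutter/material.dart';

class ScoreList extends StatefulWidget {
  const ScoreList({Key? key}) : super(key: key);

  @override
  State<ScoreList> createState() => _ScoreListState();
}

class _ScoreListState extends State<ScoreList> {
  bool _loaded = false;
  List<String> _scores = const [];

  @override
  void initState() {
    super.initState();
    // Runs once before the first build; controllers etc. would go here.
  }

  Future<void> _loadScores() async {
    // Stand-in for whatever network or database call the real app makes.
    await Future<void>.delayed(const Duration(seconds: 1));
    setState(() {
      _scores = ['Score 1', 'Score 2'];
      _loaded = true; // triggers a rebuild of this widget only
    });
  }

  @override
  Widget build(BuildContext context) {
    return ExpansionTile(
      title: const Text('Scores'),
      onExpansionChanged: (open) {
        if (open && !_loaded) _loadScores();
      },
      children: _loaded
          ? _scores.map<Widget>((s) => ListTile(title: Text(s))).toList()
          : const <Widget>[Center(child: CircularProgressIndicator())],
    );
  }
}
```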
OK. Now I already talked a bit about JavaScript — it's somehow different. Flutter is often compared to JavaScript frameworks like React Native or Electron. So what's the difference? Well, let's first look at JavaScript. If you write an application in JavaScript, you actually have JavaScript, and JavaScript is a web language. Hence you need a WebView or something similar to render anything of your app. That means it consumes an immense amount of memory and CPU power, because, well, if you ever used Chromium or Firefox on a low-end device, you know that JavaScript can be quite painful. There are high-end mobile devices, but if you develop an app, you should always keep in mind that there are mobile devices with much less power and less than 2 gigabytes of RAM. And with Flutter, in contrast, you create a native app. You have native code which is executed beside the Dart virtual machine, with almost the same look and feel you know from your platform. If you use JavaScript, in contrast, you usually have a fancy design you made yourself, which is actually good for web development, but it's usually not exactly the design expected on a mobile device. There are very strict guidelines — if you ever published an app to the App Store, you know there are very strict guidelines at Apple, and at Google there are guidelines as well, but they're not that strict. But if you use Flutter, you automatically obey these guidelines and produce apps with a native look and feel. And another advantage of Flutter — it's more an advantage in comparison to native apps — is that you have the same layout and the same code on all your platforms. Because if you write native applications, well, you have two code bases and the applications behave differently on the different platforms. And if you have Flutter, you have one code base for all your platforms, and obviously it behaves the same way on all platforms. That's much easier for your users if they should ever change their device. And the major point, which I already mentioned first: there's almost no loss of performance. So Flutter is actually a very good framework for creating apps for Android, iOS, desktops such as Windows, macOS or Linux — FreeBSD is unfortunately not supported — or even web pages. Okay. At that point I want to thank you for the attention to this talk. Feel free to attend my next talk on Flutter: tomorrow I will give an advanced view on cross-platform development using Flutter. We will focus on animations and the way Flutter works under the hood. Now there should be an online Q&A. Thank you for your attention. Hello, this was the lecture by the one with the braid about Flutter and we are now switching to a small Q&A session here. There's been exactly one question in the IRC. You can ask questions via the hashtag rc3wikipaka and in the rc3-wikipaka IRC channel on hackint. There's been one question, which is: what is the main feature of Flutter which lets me decide for it instead of, for example, React Native? Could you answer that question? The one with the braid, we've got problems with your sound. We can't receive you via Ninja, only via our back channel. And now they're gone. Here we are again. The question was, what should convince someone to use Flutter? I would say the main advantage of Flutter is the performance and the native-like applications you get. If you use Flutter, you get the native design of the operating system you run on and you have no lack of performance. That's the main difference to JavaScript, for example, so React Native. Would you consider yourself to be a Flutter fan or aficionado? Yeah, I'm a huge fan of Flutter. Okay, we can tell that. You do have other talks about Flutter the coming days, don't you? Yes, tomorrow at, I think, 12 o'clock, there's a second talk on Flutter, advanced cross-platform development using Flutter. We will focus on animations and on the way the engine, so the underlying engine of Flutter, works. Alrighty, there's been another question in the meantime here, again by Hans-Wosthenn: React Native also gives you native components and design, et cetera. Isn't that true? Well, I would call the Flutter components more native. They are built 100% according to the style guidelines of the operating systems. If you use Material buttons, they are 100% Material, so as you know them from your Android phone, for example. And I noticed in React Native, you sometimes have issues with — or not issues, but some components do not look exactly the way they should look, and they often do not look the way the users expect them to look. Alrighty, thanks for the answers to the questions. There have been some more detailed questions as a follow-up on the IRC, but I've posted in the IRC a link where you can all join in for a little BigBlueButton session where you can go into more detailed exchange. The one with the braid, thank you so much for your input. This has been the first broadcast of the day and of rC3, and we will continue after a little break with our program at 1600 Central European Time. Thank you. Okay, see you. Music
|
Did you ever want to develop a cross-platform application from one single code base? Are you afraid of getting worse performance than with native code? Flutter may help you! Flutter is an open-source cross-platform software development kit which allows you to create native applications. As it does not use JavaScript, it won't require a whole WebView in the background. Due to this and the incredibly easy syntax, it is one of the most advanced frameworks for applications running on mobile devices, on the web and on desktops. In this talk, I will introduce the basics of Flutter and its programming language Dart, the differences to other frameworks and some details about Flutter's structure.
|
10.5446/52314 (DOI)
|
So we are live on air for a very special episode of the lernOS on Air podcast. It's a podcast on organizational learning and knowledge work, and it's a very special episode because it's live streamed from the Sendezentrum, which is the podcasting assembly of the remote chaos experience of the Chaos Computer Club. And it's also special because I have two very special guests. One is Claudia aka Jinx. Hi, Claudia. Hello. And the other is David from TCM, I think in France. Hi, David. Welcome. Exactly. Hi, everybody. There are a few specialties in this podcast. First is that it's a virtual one. Normally at the Sendezentrum we have a physical stage where we podcast; we would sit there on chairs and have headsets on. But now we do it remotely. This means that you can see the live stream that is brought to you by the very, very fabulous c3voc streaming team. You also have a chat on the IRC, on hackint, a channel called #rc3-sendezentrum. You can ask questions there; we do a Q&A at the end. And as we are a podcasting assembly, you will also be able to dial in with a phone number. We will put the telephone number in the chat later on, and then you can come up on the show and ask questions to us, to Claudia or David. Another thing that makes this podcast special is this very special pair of guests: we want to talk today about a tool that enables people to meet in the virtual space. I think due to the COVID-19 pandemic, everybody got used to hanging around in video conferences like Jitsi, BigBlueButton, Zoom and so on — I see Claudia nodding and David as well. But perhaps you experienced that when you're in such a video conference, there's something missing if you compare it to a live conference or live event: people having a coffee together, you just stumble upon each other, you have a chat just by chance. So serendipity — you meet each other, you meet new people. And that's not happening in a video conference. But it's happening in tools that we call spatial chat tools, tools where you can chat and be in a room somehow. And we have these two people here because they have a lot of experience — a lot, at least in terms of the few months or weeks since we are in this pandemic situation. And perhaps first I switch to David. I read a little bit on the internet about you. You come from a company called TCM, The Coding Machine. You do stuff with open source web development, a lot of open repositories on GitHub, all this web stuff, and you're based in Paris, I think, in France? Yes, exactly. I'm actually the CTO of The Coding Machine. We're an IT company; I co-founded the company like 15 years ago, so it's been quite a while. And we've been doing web development, actually a lot of PHP. I'm a really big fan of PHP. I've been doing a lot of open source libraries and frameworks around it. I've even participated in a few standards regarding interoperability of PHP frameworks. So this is where I come from, and I'm actually really kind of a nerd. I started coding when I was like seven years old. And I've always been interested in developing video games when I was young — so web, video games, and you see where this leads us. This is what we will talk about later on. Very good. Yes. Thank you. And then we have Jinx from the Chaos Computer Club. I read also a little bit about you.
As you said, you were part of the main organization team of this whole chaos event, which normally is a conference that last year took place in Leipzig at the Messe, at the fair there, with, I think, around 17,000 people. You wrote about yourself that you're an author, a podcaster, of course, and also a specialist in data protection. So what else do we need to know about you for this podcast today? Yeah, this year I'm for the first time in the main organization of this rC3, the remote chaos experience. We said in the beginning, when we started to develop the ideas for the whole conference, that we can't — we just can't take the Congress as we know it and put it one-to-one into the digital world. This just can't happen. And the thing that we always had in mind when we started writing that concept — it was a colleague, it was Knud, and me who wrote the concept for the whole chaos experience — we always had in mind something like a four-day LAN party, a local area network party as we did in our student times, where you get out after four days with a whole pizza overflow and square eyes and total sleep deprivation and come home and say, okay, that was really great. So when we started this concept, we tried to see it from another perspective: what can we actually make happen, and what can't we do? So this was a really interesting thing, because what I did in the last years was mostly helping with assemblies — with the assembly team at the Congress or at the camp with the village team — or I was with the PrivacyWeek, which is also a lot smaller conference but also a physical one that runs over seven days. So I know a bit about all those things, all those conferences and how they're working in the physical space. And yeah, the interesting part was how to make something happen in the digital space that's also great. Yeah, and I think we have to talk later a little bit about what this rC3 world is and what the components are. But perhaps first of all, let's talk a little bit about the tool, like you called it, David: WorkAdventure. WorkAdventure — the description of the GitHub repo, one of more than 300 that you have, is: a collaborative web application, in brackets virtual office, presented as a 16-bit RPG video game. As I saw, it was started in late March, has more than 1200 commits at the moment, and you released version one like 10 or 11 days ago, with about 12 contributors. So tell us a little bit about the emergence of the tool — how did the idea come up to build that tool? How did you do it? What are the technical components? What is WorkAdventure? Well, as you said, it started in March, and well, I'm in France, and in France in March we had the lockdown, and during two months we could not go out, we could not see our colleagues. It was quite stressful actually, and so at The Coding Machine we started to do a brainstorming about what we could do to improve the situation with our skill set, so as web developers. So we started thinking about: could we do a phone application to detect nearby people that have COVID? Well, this has been done by other people, so we would certainly not do that, so we were really looking for an idea. And at the same time I started doing some conferences — as I told you, I'm mostly into PHP and I've been developing a library which allows to do some GraphQL in PHP, whatever.
And I was doing a live conference about that online, and during this conference I was presenting — of course I was not seeing the audience in front of me — and at the end of the conference, well, I could not see anybody, I could not talk to anybody, and it was really like: wait, how many people were there, were they happy? I did not have any feedback, and I needed to find a way to solve that. So you just looked at the dead eye of the webcam, so to say, and after the talk it was just over. Exactly. And basically I wanted to have a way to have a conversation with a subset of people — of course it's impossible to have a conversation with 100 people at the same time, but I wanted to have a conversation with a few people. And maybe a few days later I started to say, okay, this would be really useful also for my office, because I want to see my colleagues. I'm spending all my days scheduling new meetings, and I'm spending like maybe half my days sending links to speak with people by creating whatever Zoom conferences, Google Meet, Jitsi Meet. And basically the idea was born this way. I started thinking, okay, what if I could basically walk through my office as if it was a virtual — well, as if it was a video game? I wanted to come next to somebody and be able to start speaking to them just like that, without having to schedule anything, without having to plan a meeting — just I'm moving towards someone and I can speak to them in an informal way. I wanted to get rid of the formal "I'm sending you an invite, we are going to put something in the calendar" or whatever. And this is how WorkAdventure was born, and I started speaking about this idea to colleagues, and they were immediately like: of course, it's a good idea. And so we started working on it, well, mostly at night. It was like... So beside your day work — you didn't do it as your day work? Yes, at the very beginning it was really beside my day work. It was with a few colleagues of mine; we were three or four. And we started hacking a bunch of technologies together, and well, basically I had been playing with all the technologies needed in the past. It was just a matter of putting them together in the right way. As a child I had been playing a lot with game engines — developing a video game was one of my favorite occupations when I was young — so I was really more than happy to start a game. And then we needed the video chat part, and we had been working a few weeks before on a project which was using WebRTC, which is basically the API that is used by browsers to speak together and to allow video communication between browsers. And so, yes, it started like that. Yeah, cool. Cool. And Claudia, did you see this effect in the rC3 world — like stumbling across people, meeting people, just walking through this world and so on, in contrast to just having video conferences like Zoom or Jitsi or something like that? Really yes, and this is the reason why we chose to implement it. We had a look at a whole bunch of different solutions, also like Mozilla Hubs and so on. And most of them had the problem that they were really, really resource hungry, and we said, okay, we need something that people can also use with a bad bandwidth, with an old computer or without the latest gaming hardware. And so the first thing we chose was: okay, no 3D, because when people are sitting at home or probably with their families somewhere in the wild with 4 Mbit or something, they just can't participate. We can't do that.
We need something that's smaller, that's easier to handle also with small bandwidth or old hardware and so on. So if you're at your parents' house currently with the 4 Mbit bandwidth connection, then you can still participate. Okay. That was the first thing. And then we said, okay, 2D it is. And then we had a look into several solutions. And our technical guys came down to: okay, we'll look deeper into that WorkAdventure solution. They were debating implementing it themselves again, or just building from what's already there. And then, yeah, it is 2020 and chaos is chaos. And yeah, building from scratch was not a solution at all at some point. And then we came down to: okay, we just take that WorkAdventure stuff and build from there. Because it was also — and this is the second part — pretty easy to build the tiles for the 2D world, the graphical pixel tiles. And we had the idea: okay, if we hand out a how-to for those graphical pixel tiles, then all the assemblies, all the people who want to participate, will have something to do over the next weeks to get enthusiastic about the event. And people can build something and can participate already in the build-up that took weeks. And it was so cool to see how that actually worked out. Yes. So that's why we chose WorkAdventure then. Yeah, that's also the interesting thing. As you said, there are a lot of components that came together. I think you used the game engine called Phaser.io, as I've already read, right? Yes, exactly. And you used an existing editor, Tiled, where you can build these maps and create layers and add smart features like having websites pop up if you go somewhere, or starting Jitsi conferences if you need to have a chat with more than four people and so on. So how did all these things come together? Is it technology that you worked with in the past and just brought together, or was there also research necessary on what components to use for the whole system? Actually I've been testing two technologies for the front-end. I was used to using game engines, but not in the browser, so this was quite new for me. And I started with an open source library called Pixi, which can be used to render graphics on screen in a canvas, fast. And it was quite cool, but it lacked a few features, especially the ability to load a map from Tiled, which is the tile editor that you've been using to build the maps. And since there was this super tool called Tiled and I wanted to be able to load the maps easily, I realized that there was this other game engine named Phaser that can load the tiles, the map, quite easily from Tiled. And so I used that. I followed a few tutorials and it was, I won't say easy, but relatively easy to set up. And so we stuck with Phaser, which is really quite a nice tool actually. And yeah, sorry — one fun thing about Tiled is that John, who is the guy who wrote Tiled, just connected to WorkAdventure like a week ago and came to say: hi, I made Tiled and you're using it. And I was like, oh, super. This was also the bridge for how I came to WorkAdventure. I used, in the last six months, a lot of tools like Sococo or Remo or Gather or Wonder. And I think Gather is also a supporter of Tiled because it also uses Tiled as an editor. And then I read somewhere there in the forum: there's also this WorkAdventure thing. And what I found interesting is that you just put the whole source code on GitHub, and you have a description like running a docker-compose up, having a Docker container and having the whole thing running.
And that's, I think, very interesting, because a lot of people can build stuff around it. As you said, Claudia, they're doing graphics, doing tilesets — there are a lot of repos, links to free repos with tiles and so on — but also building them from scratch. There was, I think organized by Honky, every week on Tuesdays or something like that, a workshop on how to use Tiled and how to use Krita, which is a painting program, to create your own tiles. And so there are a lot of custom tiles in this rC3 world at the moment. It was a very wise decision, I think. I was so surprised when I learned that you were giving workshops about how to build a map, and I had absolutely no idea. What? Yeah, because there are a few things that are very typical for the physical Congress. Like there is the Seidenstraße, which is a postal system, and there are specific signs and unicorns and so on. They were not available in the standard game tile libraries. And I think also that people were really, really creative with the stuff they did. The virtual Congress looks a lot like the physical one. You have a lot of jokes and Easter eggs and so on. It's very cool. Yeah, so many creative people. And I heard about assemblies who built an underwater world somewhere — and I didn't find it yet — or a Tetris room. I think it's at the Haecksen. Okay, I'll have to watch over there. The Tetris room is, when you walk through or when you run through it, you hit the tiles and they play the Tetris melody, and other really cool stuff. People really got creative, which is the beautiful part of 2020: that people try to make the best of what we can actually have from all the things of sitting at home. So perhaps let's talk a little bit about the components of the rC3 now. You also already talked a little bit about the concept — like when you wrote it, you had an idea of how many people might join such a thing. I think, David, you haven't been at a Congress so far. Last year we were at the Leipzig fair and had, I think, almost all of the space there. There were rooms for giving talks, like lecture halls, but also assemblies where the hackerspaces can have tables, and everything was with LEDs and stuff — a lot of projects, self-organized sessions and so on. So Claudia, perhaps you can talk a little bit about how this evolved from your first idea of a concept — how you imagined this might look — to what it is now, what we have since yesterday. Yes, you just said we had the physical Congress with about 17,000 people. We came to the conclusion in about May, I think — we had monthly Mumble meetings with a bigger orga group, with some people from assemblies and from teams and the orga and so on, lots of people coming together to figure out what to do. In about May, it was like: okay, no, we can't. We just can't do a physical event. This is never going to happen. Okay, so what are we going to do? In August, we said: okay, let's just do it completely online. And then we thought: okay, how many people could actually come? And there were so many people, actually, also in the community, who said: no, that can never happen. How can you bring the Congress to the Internet? As I said before, we never tried to bring the Congress to the Internet, but we tried to do something with a Congress feeling, but different. Okay, so many people said: no, that can't work. Okay, how many people will we probably have? Well, probably we will hand out like 1500 tickets and 500 people will use it. And we will be totally happy.
That escalated quickly. And our first batch of tickets ran out — it took a while, because there was one of our mistakes in the build-up, let's say, or in the development of the whole event or world, the whole experience: we built while the ideas were still spreading. So it was always a parallel thing. It was not like: okay, this is the plan, then we build, then we do the... No, everything was... We might call it agile. Yes. For some values of agile. But yes. And so we didn't communicate so well what people would need that ticket for. And we said: okay, but we would need a login area, so we have some private messages and some way for people to enter that 2D world, and we still have a bit of an idea of what's actually happening. And yeah, so our tickets ran out and people were: oh my goodness, how can that happen? Yeah, well, as we learned... There's always enough room in the internet — like, why are there tickets at all? Why are they limited, and so on? Yeah, but this idea that the internet scales limitlessly is the same presumption as: yeah, my electricity comes out of the plug. It doesn't work like that, and we had to rethink how we can scale the whole event. And people ran out into their teams and into the areas: okay, what can we do? What can we fix? Where can we... yeah, how can we scale it? And we had further rounds of tickets given out, and yeah, now we're here with a whole bunch more than 15,000 users. With a lot of people. Yes, with really, really, really many people. I think scaling is an interesting aspect, David. I think I read somewhere, when I first used WorkAdventure, that on the servers you use you suggest having not more than 100 people, because then the frequency of updating where everybody is on the map gets changed and people run really fast and things like that. And this was the main issue for this Congress: to scale that. So perhaps talk a little bit about your experiences — what kind of events did you do so far? Is it mainly around 100 people or 500 people? What are the dimensions that you worked with so far? Yes, so far we've been — as I say, I'm a PHP developer, so I know the French PHP community pretty well. So when I started communicating around WorkAdventure and saying, hey, look at what we've been doing, I was noticed by people doing PHP in France. And so basically a member of the French PHP user group told me that they were having a problem organizing their big annual meetup, which is about 600 people. They wanted to put it online and they did not know how to do it. So I proposed to organize a part of it on WorkAdventure. And basically, well, I worked during one month just on this — we were a team of three people, and WorkAdventure was like a prototype. We had to make it something solid enough to host an event with about 600 people. Basically we had about 100 simultaneous users at the same time during the conference. But so I was pretty confident that we could go a bit above this limit, and like 300 or 400 people would be okay. Now the main problem is that this is developed using Node.js, which is basically JavaScript. Everybody will tell you: hey, it's super fast and so on — but it's actually single-threaded. So basically you can use only one core of your CPU, and there is only one core that is doing all the computation. And when you run out of power, well, everything starts to get sluggish, people start to lose connections and everything goes bad. So I started working on stress tests to see how we can scale things up.
And basically when we started talking together, when you came to us and asked us how many people we can have — yes, the limit was about 200, 300 people at the same time. This last week it's much, much higher. And at the beginning of next year we're going to start a new version that will really scale much more. But for rC3, you guys had to deal with a version that could not scale really well. And you did an excellent job at making this scale amazingly. So wow. I think it was a lot of pizza and nights of coding and a lot of gray hairs and so on. But what is the main thing that drives the resource hungriness? Is it that the more people you have on the map, the more updates you have to send about where everybody is? Yes, actually the interesting part is that it is only about the position of people moving. It's not about the video, because the video is going via WebRTC or via Jitsi. So you can scale Jitsi; that's not part of WorkAdventure. Or if you are speaking to someone, this is WebRTC, which is actually doing a peer-to-peer connection. So the video is not going through the WorkAdventure server at any point in time. So we don't have any problems scaling the video. However, sharing the position of players moving on the map is really challenging. And so we made a bunch of optimizations regarding this problem. We started by trying to send the position only of people that are close to you. So if you've got a huge map, we're not sending the position of everybody on the map. The problem is that if you've got 600 people on the map — or 600 people close together — when one person is moving, you have to send the signal 600 times, to everybody. And if everybody is moving at the same time, that's a lot of messages. And they must be shared on a single server, because there is a single server that must have the position of everybody to be able to know who is speaking to whom and who can speak to whom. Yeah, true, I understand. So basically the challenge is there. I think this will be interesting — I think the changes that the CCC made to the source code will be published on a public repository as well, to sort of learn from each other what the tips and tricks are. I'm dying to see what you guys have been doing. One of the things we were talking about in the preparations was also: how can we actually manage it if we have 1000 people or more joining the world? They drop into that lobby area — how can we manage to have 1000 avatars in one room? We can't. That's not possible. And we said, okay, let's divide them and say we can have a lobby room and this can host like, I don't know, 50 avatars. And then everyone who's joining afterwards will come to another lobby room, parallel to the first one, and so on, until the next lobby room is full with 50, and so on. I think what our guys brought up is a bit more complex, but that is the basic principle, because we could not, either technically or from a user perspective, put like a thousand avatars in one room. Even if we would simulate the glass wall or something, it doesn't work. It just doesn't work out. You don't see anything anymore; it's just taking too much space and so on. That was also a pretty interesting thing, how to work all these things out. Also how to get so many people onto so many maps, because we have nearly 300 assemblies and lots of them were building maps. So actually you will find — I don't know the exact number, but if we say 300 assemblies, lots of them building maps — like 200 maps in the whole world. How do people get there?
How can they interact? How can they perhaps link together, so you come from one assembly to the next and so on? This was a very interesting experience yesterday, because I think in the first two hours or so a lot of people tried to get to the Sendezentrum podcasting assembly. And on the chat and on Twitter you had all this discussion like: from the lobby you have to go up, up, up, then left, then there is a little thingy, then you have to go down. It ended up with someone doing a video on how to get from point A to point B. Wow. We put it on Twitter and pinned it to our profile so everybody was able to get there. Somebody found out that in the assembly list you find the WorkAdventure map as a room, and you can directly click on it and you come to the map of the assembly, sort of. But these are things that were — I won't say totally easy — in the physical space: it was still tricky in this really large space in Leipzig to find a certain spot, but there were things like the c3nav navigation system and a map of the whole area. It was easier, I think, and people have to learn in this new normal to adapt to that — like, how do I navigate in this virtual space — in the same way. You know, the first time I saw the map layout, when the WorkAdventure guys from the rC3 team showed me the whole layout, it was just big. I was like: okay, you really, really managed to simulate those times you need to get from A to B in that big area of Leipzig. This is pretty cool, and what I heard today is that there's already a guy who's walking on his treadmill — like a conveyor belt, cross trainer or whatever — and he's running or walking on that, and he built or wanted to build a Raspberry Pi setup so that he can navigate his avatar by walking on his belt. I was like: okay, achievement unlocked, good. So there are times you need to get from A to B already in the digital space. What I also found interesting: you mentioned this thing of technical scaling of the system, but there is also scaling of social processes that happen inside. So when I came to the lobby for the first time, there were a lot of people around me, and normally, if you stand close to somebody, you see if they want to have a conversation or not. Perhaps they're looking at their smartphone or writing a message or something. In this system, as soon as you get close to somebody, you're in a video conference, and if there are a lot of people, then you don't have a chance to get out. So my pattern was to run, and then you have this ping, ping, ping, ping when crossing all these video conferences, until I got to a space where not so many people were around, where I could organize myself and see where I have to go. But you don't have that if you don't allow your camera at the start. Then you do not automatically join video chats with other people, so you can learn to navigate through without getting into conversations in second one or second two.
I mentioned also the RC3 world just as an announcement for the people who watch the live stream if you have any questions to us, to the two of them put them in the IRC or dial in the phone number and then we do a little Q&A if there are questions left. So as a last question to both of you perhaps David first, what are your future plans with Workadventure? I had a look at the GitHub issues, what are ideas in what direction the project might go. So give us a little insight what you think what are the top one, two, three things that will happen to Workadventure in 2021. Okay well we have a few ideas, well first of all we would like a compatibility with mobile phones. Very cool. That would be cool because today you need a keyboard to play Workadventure. I read on Twitter there were people connecting a Bluetooth keyboard to Android phones or so and played it on the phone. Oh okay. They were like this one. Excellent. Yeah and then we're going to focus maybe on, sorry, letting people, today you did a good job at having people subscribe to be able to connect to Workadventure. This is not part actually of the core solution. So we're going to probably build something around that maybe with a kind of virtual phone that enables you to connect to colleagues or to friends. So this is a feature we're going to add maybe more features regarding organizing events and the ability for instance to host big events regarding scaling, regarding the ability to directly from Workadventure play a video to 100, 200 people and we're probably going to build a SaaS solution out of it. We want to keep it open source. The core of Workadventure will always stay open source. This is really important for us. This is one of the core value I believe in. At some point we will certainly have to make money. But that would be probably by allowing people to create spaces easily without having to install their own Workadventure server. So we're going to work on this in 2021. So might be a similar business model to GTSI or something like that where you have the core software open source. Yes, exactly. I was thinking about GitLab but yes, GTSI would be exactly the same. Yeah. And Cloudia would... Because it's solutions. Yeah, right. What do you think? If you look to 2021 and I think of Gulasch programmier Nacht and other events like that, perhaps in the first, at least at the first half year, next year there won't be the possibility to meet with thousands of people in person. So will the RCS3 world be sort of a prototype for something that will be used next year as well or are there no plans or thoughts about that at the moment? Well, my glass bowl is broken but... Yeah, there are many people talking about what could happen next year already. There's speculation. I know I can tell for 2020 that on day four there will be a takedown party or a takedown as always after the closing. So we said, okay, we will tear down the RCS3 2D world to keep the event character or to keep it special. And it's a bit sad but also the thing is the Congress, as it's in a physical space, is also teared down every time. And we come back and build it anew the next year. And if, big if, not when, but if there would be a second RCS3 necessary or if we do the next impossible thing, a hybrid event, which is more impossible than doing an online only event for several reasons. It's a possibility, a good possibility to build up from what we created this year. But as I said, glass bowl is broken. So we'll see. 
What I found interesting today in our assembly was a discussion during lunch: at a normal Congress you take a hackerspace or community that meets in the physical space, and you have a representation in the form of an assembly at the Congress. And we were talking about using the map that we have now, after the Congress as well, as a map for the assembly, as a virtual meeting space of this community afterwards. So this is, I think, also interesting: that these two dimensions of physical and virtual are sort of swapped — the Congress took place in a virtual way, and perhaps the assemblies take their map and use that afterwards as well. So the Congress as a whole is torn down, but the maps might survive. Yes. I heard from several assemblies that they would use their map afterwards for their own purposes, which is pretty cool. And probably, if we use WorkAdventure and develop it further for some future events or whatever we think of next, I see a good probability that there will be some pretty creative further developments, also with the maps and whatever happens to the source code and so on. It's pretty amazing to see what the community is making from it to make the best of 2020. I have two questions in the IRC. If somebody wants to call 801, then you're connected to us — or the long numbers in the chat. Someone's asking: if I'm sitting in a Jitsi room and see that someone else on the map is passing by, I want to be able to call the person and invite them into the Jitsi room. What do you think of a feature like that? Like if you're sitting at a table in a physical space and say: Claudia, come here, I'm sitting here with two other people, come join us. That would be really interesting. It's true that today you have to go out of the Jitsi meeting room and run towards the people, towards the person, and say: hey, come in. It's like, clink, clink. Yes, exactly. It might be a good idea. Actually, I've not been talking about the limitations we have. Basically, when you're going to speak to someone, you can be up to four people. This is four people because you're in a WebRTC connection, and when you speak to three other people, you must send your video stream three times. Usually you're quite limited regarding upload bandwidth. At some point it does not work and you need a Jitsi server to do something. What I would like, ideally, is to have something that is basically between Jitsi and the bubbles we have today, where basically, when we are five people, we start using the Jitsi server, but keep the UI of WorkAdventure. Basically, the bubble would get bigger and bigger as more people are coming into the bubble. You could maybe walk more easily towards someone and say, hey. Well, okay. And similar to the real world, the bubble of people talking gets bigger and then it splits into subgroups, perhaps, and so on. Yes, and if you want to go out of the bubble, you can at any time. There are actually big challenges regarding the UX, the user experience, and how you do this. We have been quite lucky, because when we started working on WorkAdventure during the first few months, I think we got it right. And it was really completely by chance. But we got a size of the circle that was close to what we... It worked the first time. We did not have to do any trial and error. It was just pretty good. But improving this is quite hard. I believe that. Okay, so we're at the end of the time of the podcast. Thank you both very much that you showed up and shared your experience and your knowledge.
I think everybody at the CCC knows how to contact you, perhaps. But if you want to, you can say a URL or your Twitter handle or nick, how people can get in touch with you. I'm not so much on Twitter anymore. But you can find me on Mastodon at vienna-writer at literatur.social. Very well. David, if somebody wants to share experience with you or drop some lines of code or just say thank you, how can people get in touch with you? Yes, well, I'm on Twitter, david underscore negrier, n-e-g-r-i-e-r. And especially you can actually connect to the GitHub account, the GitHub repository of WorkAdventure: it's thecodingmachine slash workadventure. And if you have any questions, just file an issue. If you want to say hi, file an issue or come on Twitter. If you want to star the project, do not hesitate. We'll do that. Yeah, thank you. And we'll give you a hint as soon as the repos are published, so you might have a look at them as well. Yes, definitely. So thank you very much, both of you. Thanks everybody in the live stream. Thank you. Thanks to the Sendezentrum team for running the technology behind the podcast. Have a nice evening. Goodbye. Thank you very much. It was a pleasure. Yo. Thank you.
|
During the COVID-19 pandemic a lot of people have to work and join events from home. Video conferences are good for talks or workshops with small groups. But if there is a larger group, and informal communication as well as serendipitous encounters should be possible, another kind of tool is necessary. In the last months we saw a lot of spatial chat tools emerge that give you a feeling of who stands close to you and enable you to talk to them (e.g. Sococo, Wonder, Remo). One such tool is WorkAdventure by The Coding Machine (TCM), which is openly available on GitHub. WorkAdventure is also used at the rC3 as the rC3 2D World. In this podcast we talk about the history and future of WorkAdventure and also about its use at the rC3. We will also talk about all the issues that emerged due to the use of WorkAdventure at a much larger scale than it was originally intended for (~100 users/map). Q&A will be possible via IRC (#rc3-sendezentrum on hackint) and telephone dial-in.
|
10.5446/52315 (DOI)
|
Welcome to this talk on Funkwhale and the importance of decentralized podcasting. It's just something that I'm doing as a little outreach thing. So who am I? My name is Kieran Ainsworth. I am a member of the Funkwhale Association, who are the officers of the Funkwhale platform. We have been developing it for a few years now. I joined Funkwhale a couple of years ago as primarily a documentation writer. So I installed Funkwhale after looking for some self-hosting tools and I approached the project and said, your documentation isn't particularly great. Would you mind if I helped rewrite it? And from there on I've kind of got more and more involved in different bits of the project. So I've been doing a lot of work with front-end development, documentation, community management, and my role on the board is that I'm a member of the steering committee, which means that I am responsible for helping with the development of roadmaps and with research and development into different features that we might want to add at some other time. So what is Funkwhale? First and foremost, as you can see there, very nice little interface design. Funkwhale is essentially a music and audio platform, to put it very, very basically. But more specifically, it is a free and open source project. It's a self-hosted server software with a front-end web application for playing music. And the thing that kind of sets it apart is that it is federated. So it's built on the same protocol as other federated applications such as Mastodon, Pleroma, PixelFed, PeerTube, Reel2Bits and all the others. We all use the same protocol to interact with one another, something called the ActivityPub protocol. And basically, it just allows us to be a bit more interactive with other Funkwhale servers and also other software in the Fediverse. And when Funkwhale started up, it was primarily focused around music. The name comes from the fact that the original developer of the software, Agate Berriot, wanted a free self-hosted version of Grooveshark, something that she could put music into and then create playlists and radios from. So that's kind of where the pedigree came from. We come from that music background. But nowadays, we're focused on many things. Music collections are still part of it, but we also have audio publication tooling and content sharing as part of our sort of genetic makeup. So a little while ago, we were looking at our roadmap. So around about September, October 2019, we started to look seriously at where we wanted to take the project. At the time, we had just moved away from having Agate as essentially the benevolent dictator for life and were looking at moving towards a more democratic system of governance where we would ask the users to provide us with insights and guidance on what they would like to see in the platform. And when we started approaching them with options, one of the things we found was that podcasting was a very, very widely requested feature, which was something I don't necessarily think we were expecting, but it was definitely something that people were very interested in. At the time, the Fediverse, in general, lacked a proper sort of platform for things like podcasting. We had music, we had — so I'm just going to adjust my volume, somebody's saying it's a little bit low. We had music, we had video, we had things like microblogging and we had image sharing, but we didn't have podcasting. So that was something that people seemed to be quite interested in.
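Just to make the federation idea a bit more concrete — this is only a rough sketch, not Funkwhale's actual payload, and every URL in it is an invented placeholder — the kind of ActivityStreams "Follow" activity that ActivityPub servers exchange when one account subscribes to another looks roughly like this in Python:

import json

# A minimal ActivityStreams 2.0 "Follow" activity, the kind of JSON document
# ActivityPub servers pass around when one account follows another.
# Actor and object URLs here are hypothetical, not real Funkwhale endpoints.
follow_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://music.example/activities/123",          # hypothetical activity id
    "type": "Follow",
    "actor": "https://music.example/users/alice",          # who is following
    "object": "https://pods.example/channels/my-podcast",  # what is being followed
}

# Delivery would normally POST this (as application/activity+json, usually with
# an HTTP signature) to the inbox of the actor being followed.
print(json.dumps(follow_activity, indent=2))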
So when people came to us and suggested that, that fitted in quite nicely with another thing that we were looking to do in general, which was content publication. So we looked at it as an opportunity to develop an entire new structure, not just around podcasts, but also around music publication. So that we were moving away from just hosting your CD collection and maybe some bits and pieces that you had done yourself to actually publishing the content and putting it through to the Fediverse directly. So that was kind of the background as to why we got into podcasting in general. Very quickly, we saw that there were going to be a lot of challenges with this particular bit of work. The biggest one really was we as a collective didn't really know all that much around podcasting. None of us were podcasters. We listened to podcasts sometimes, but not very often. I myself only listened to a few. So we very quickly realized that we were going to need to approach people who did this sort of thing all the time. We were going to need to ask people who knew about this stuff, had sort of experience working with lots of different bits and pieces in the current climate in order to build something that fit with their expectations and also addressed some of their frustrations, anything that frustrated them. The other problem was, as I mentioned before, we are a music publication platform or we were a music hosting platform. So this podcasting and publication stuff was not in our DNA. It required quite a lot of architecting on the back end to really get something that would work for publication. We needed to rethink a lot of things because we'd been making assumptions about audio in general based on music collections, which of course is a very different thing to podcasting. The other thing we didn't really know or understand was, what should it look like from beginning to end for a podcaster to publish something? We kind of understood it for musicians. It was a bit simpler. You'd have albums and you would have tracks that go in those albums, but we didn't really know all that much about podcasting. So in order to get that information, we decided to form a podcasting task force, as it were. And this task force basically consisted of members of the Funkwell Association and a group of people from the podcasting subreddits, from the Fediverse, people who made podcasts all the time. And we basically brought them all into a chat room and we said, okay, so if we're going to design this, what do we absolutely need to do? What do we need to hit? What do you want to see? And what would kind of encourage you to come over to using our software to publish your podcasts, if that's something you would like to do? And it was something, the other thing we needed to work out was, we didn't really have an insight as people who didn't publish into what the competition was doing. So I say the competition, what other people who made this stuff were doing. So we very much needed to get that information from a firsthand experience and sort of pull that in to make sure that we were doing it correctly. And what we found was basically podcasts are hard. They're quite complex things where, especially the complexity exists on the back end, it exists within the software, but the user should be really getting a very simple front end to do things with. So we found that basically, whereas with music, Funkoil really didn't handle a lot of the more complex stuff like tagging. We let music brains handle that. 
If we were going to be publishing, we needed to start actually taking on board that complexity and sort of facilitating it in our publication layer. And podcasts of course offered a slightly different way of doing things because there was less metadata to be included and it was less catalogued than something like music. The other thing that was very, very strongly put forward by the people who we talked to was that there exist in the podcasting world standards. We have certain ways of doing things and that has to be retained no matter which tool we use. So for example, we need to use RSS. We absolutely have to include an RSS feed. Images need to be correctly sized. The RSS feed must be consumable by tools such as iTunes and Apple Podcasts, which means we have to include certain fields that only exist for iTunes and Apple Podcast. The other thing we came to realize was that people were going to be using us as a podcast publication tool, but we also needed to act as the podcatcher because our current make up at the time was to be a music hosting tool, but also an application which played music. We needed to give that same experience for podcasts. It needed to be that people could publish content, but also take the content they already liked and put it into Funkwell. And then the last sort of big thing that came from this was the sudden realization that if you're going to have two or more servers talking to each other a lot more, you're going to need to really strengthen the moderation tools that you have in place, especially when we're talking about user generated content. The scope for abuse on that is quite significant. So we needed to give users tools to be able to report things. We needed to give people tools to be able to block certain stuff. We needed to give administrators the ability to use things like enable lists so that they could prevent federation with certain other platforms. And we needed to give them the ability to ban users, take down channels, that sort of thing. So this was a whole lot of architectural design for podcasts, which it was really the podcasts that drove us to it. And what we came out with was basically a hybrid of a traditional sort of podcast overview and a Fediverse channel. So in our world, we have podcasting channels and music channels. And from what you can see in that sort of screenshot, it gives some sort of basic information. You get your artwork, you get your episodes, we can split things up into series, which was a big request that people had was the ability to create different series within the same channel. We have the ability to subscribe, which I'll go on to in a second. And obviously, if you're the channel owner, upload new content to make sure everything is working as expected. The important bit here that we have is the information about what's in that channel. So in this channel, this is mine, ignore it, it's terrible. But there's one episode and it's been listened to 13 times. And this was important information that we sort of worked out was needed in order for people to get a grip on like how are people interacting with my content. But taking that on board, we went ahead with the subscription capabilities. And as you can see, in the screenshot, we have kind of three options in every case. The first is, if you already have a Funkwell account, you can subscribe using your Funkwell account to that channel. 
And it will be one of those things that appears in your feed when a new episode is uploaded, you'll get notified that there's a new episode in the front end. The other thing you can do is subscribe via RSS. So going back to what we were saying earlier, we took a lot of, we put a lot of effort into making sure that our RSS feed was compatible as much as possible. And that anybody could go onto a sort of an open Funkwell channel and subscribe without having to sign up to Funkwell. Because one of the things we very quickly realized was we don't want people to feel like they have to sign up. We want people to be able to enjoy the content no matter what. And that really should be up to them where they listen to us, whether they listen to us on Funkwell or some other podcatcher. And the last one is subscription via the Fediverse. So that enables users to follow a channel in much the same way that they would follow a mastodon account or a plaramo account or something similar. So we're trying to hit all sort of boxes there of how you can keep up with somebody's content. The other thing that I've been doing some work on recently is more front end stuff, but it's just making sure that we sort of point people towards adding new content where possible, either by themselves, creating new channels or subscribing to things via RSS or via the Fediverse. So really pushing people towards that more, really pushing people towards that more sort of you know, creation element. We want people to create. So that's with the basics in place. This was the development work we did over the past sort of year or so. It's been a wild ride. There's been a lot of content that's gone in, a lot of changes made. There's still some changes to come. The most current release doesn't have some of the newer tools that are around podcasting, such as dedicated podcast searching and sort of wider accessibility of subscription tools. But we're not finished. There are still items on the roadmap that we would like to complete and still items that are not currently on the roadmap, which may need to be added in future to really help us to get involved with podcasting more. Because what we found is this is a market that we very much have enjoyed working in. And it's one that actually has proven quite popular with people that you know, people see Funkwell as a podcasting platform now. Even if you know, it was originally supposed to be music. This is how it's kind of evolved. So what do we have to kind of consider next to take a look at the future? To kind of consider next to take Funkwell to the next sort of level of you know, being a proper sort of alternative to what's currently out there. The first thing that strikes me as necessary is Funkwell currently allows you to import RSS feeds from external podcasts. It currently allows you to follow podcasts on the Fediverse, on Funkwell, and it currently allows you to publish your own. But what we don't have at the moment is any way of finding external podcasts. You still have to leave Funkwell to go and find the RSS feed that you're looking for. You still have to you know, go and see where things are, go and find them on something like iTunes or feed or Spotify and grab the RSS feed and bring it back to Funkwell. Which of course from a user experience point of view, it is not great. It's basically meaning that Funkwell is not yet the one-stop-shop for podcasts that we might want it to be. 
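To make that RSS compatibility point a bit more concrete — this is a rough sketch of a minimal podcast feed, not Funkwhale's actual output — a feed carrying some of the iTunes-specific tags that podcatchers like Apple Podcasts expect can be assembled with Python's standard library like this (all names and URLs are invented):

import xml.etree.ElementTree as ET

ITUNES = "http://www.itunes.com/dtds/podcast-1.0.dtd"
ET.register_namespace("itunes", ITUNES)

rss = ET.Element("rss", {"version": "2.0"})
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example Channel"
ET.SubElement(channel, "link").text = "https://pods.example/channels/example"
ET.SubElement(channel, "description").text = "A demo podcast feed."
ET.SubElement(channel, "language").text = "en"

# Apple/iTunes-specific tags that many podcatchers expect
ET.SubElement(channel, f"{{{ITUNES}}}author").text = "Example Author"
ET.SubElement(channel, f"{{{ITUNES}}}image", {"href": "https://pods.example/cover.jpg"})
ET.SubElement(channel, f"{{{ITUNES}}}category", {"text": "Technology"})
ET.SubElement(channel, f"{{{ITUNES}}}explicit").text = "false"

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Episode 1"
ET.SubElement(item, "enclosure", {
    "url": "https://pods.example/media/episode1.mp3",
    "length": "12345678",   # file size in bytes
    "type": "audio/mpeg",
})
ET.SubElement(item, "guid").text = "https://pods.example/episodes/1"
ET.SubElement(item, "pubDate").text = "Mon, 28 Dec 2020 12:00:00 +0000"

print(ET.tostring(rss, encoding="unicode"))

The enclosure element is what actually points a podcatcher at the audio file, while the itunes tags carry the artwork, category and author information mentioned above.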
So one of the things that I would quite like to see you know, come in in future is a podcast discovery for an external sore front. I have built myself a kind of proof of concept of how we might do this using the iTunes API. But there are different things out there such as feed and others that we might want to consider looking at. The other thing is an improved sort of publication workflow. At the moment, the publication workflow, it works, things go in, you get a podcast out of it, it generates an RSS feed for you. But we have had people raise issues with it specifically around how do I edit metadata during that upload process. The problem I think is because the way we designed the front end, it was more of a, it was more in line with how we'd worked with music previously, which is to say upload many files which have been previously tagged and just kind of let them be. Whereas of course if you're doing an upload of podcasts, you want to basically upload an episode, title it, tag it, put some artwork with it, give it a license, do all of that stuff, and then move on to the next one. Or if you know you're going to be uploading multiple episodes of a series, you might want to have a tool say that you can put them all in a series and say number them automatically. At the moment, we don't have that. If you upload multiple things, a pencil icon appears next to each one and you can click through them and edit them all, but it's not very obvious how you do that. So that's been raised as something that needs to be addressed. We've had some designs submitted for how we might go about doing that, which looks to be a lot better. The other one is something I'm going to come onto in the second part of this, and that is the introduction of links to donation services. At the moment, hosting your podcast on Funkwale is great, but it's the same as hosting it anywhere else. What we want to be pushing people towards or encouraging is this idea of supporting people who create. The best way to do that in our eyes is to promote the idea of donation services and promote the idea of helping to support the podcast that you like. We don't want to be a payment handler, obviously, but we do want to help make it a lot more visible when there is a service that you can actually put money towards. The last one, it's been on the road map since channels were introduced. It's very, very complex. As somebody who does not work on the back end, I don't really have the technological knowledge to go into it, but there is this idea of channel claiming where if somebody uploads some music to a channel and it's not their music, the person whose music it is should be able to claim that channel and take control of it. As you can imagine, that's a very, very complex thing to do, particularly over federation because you have all of the different implications of the wider fediverse to take into account there. It's our biggest boon. It's also our biggest challenge day-to-day is working with that federation. But that moves me on to my next point, which is all about sort of decentralized podcasting. This may seem like a strange concept to people who do podcasting because podcasts are decentralized by design, really. I didn't know a lot about podcasts going into this. As I say, it was very much a learning experience, but the more reading I did into podcasts as part of the research that we did for this, the more fascinated I became by how they work and how they're set up. 
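Coming back to that discovery proof of concept for a moment: a minimal lookup against Apple's public iTunes Search API — a sketch of the general idea, not the code actually used — could look roughly like this, returning the RSS feed URLs a podcatcher would then subscribe to:

import json
import urllib.parse
import urllib.request

def search_podcasts(term, limit=5):
    """Query the public iTunes Search API for podcasts matching `term`
    and return (name, RSS feed URL) pairs."""
    query = urllib.parse.urlencode({"term": term, "media": "podcast", "limit": limit})
    url = f"https://itunes.apple.com/search?{query}"
    with urllib.request.urlopen(url, timeout=10) as response:
        data = json.load(response)
    return [
        (entry.get("collectionName"), entry.get("feedUrl"))
        for entry in data.get("results", [])
        if entry.get("feedUrl")  # some entries may lack a public feed
    ]

if __name__ == "__main__":
    for name, feed in search_podcasts("chaos computer club"):
        print(f"{name}: {feed}")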
The thing that struck me was podcasts occupy this unique space of being very, very disruptive low-tech, certainly audio podcasts, video podcasts as well, sort of disruptive low-tech standards-compliant ways of communicating a lot of information. Podcasts can be hosted anywhere. As long as they generate a valid feed, anybody can capture them into a podcatcher and play the files linked using a relevant piece of software. That means that the potential listener base is enormous, much more so than anything based on a single platform, a centralized platform. This was one of the reasons that when we were designing the podcast publications tools, we were so emphatic about being a part of that existing infrastructure, making sure that we didn't try to lock people into our way of thinking, but instead follow what podcasting was already doing because it already seemed pretty great. We had things like RSS feeds, we had good encodings being used like MP3, which could be so widely used. It's kind of ubiquitous at this point. That's a really important part of it. The reason that this came to my attention was during some of the conversations we were having with podcasters, and specifically when we were looking at Funkwell as a podcatcher, so something that consumes RSS feeds and plays them back, somebody had said something about a specific podcast. I think it was called the last podcast on the left. They said basically, it's a shame, I won't be able to play this through Funkwell because they are going Spotify exclusive, and so they're not producing an RSS feed anymore. This worries me slightly. It's a concerning trend away from what podcasts stand for, from my understanding of what podcasts stand for. Because when you go exclusive to something like Spotify, you have the introduction of DRM and you're creating a walled garden around content. And certainly for content that used to be free and open, so it used to follow the same rules as everything else, for it to suddenly go into a platform specific publication is a big break. There are a couple of reasons for this, but the primary one is let's say that with podcasting, the only limitation for a user is that they have a machine that has software that is capable of listening to that podcast. It's capable of reading the feed and playing back the audio. That's your limitation. If you put it onto something like Spotify, you actually divide this into four, four different experiences. The first two are users who live in a country that have access to Spotify. And those people will have two experiences. One, they will either listen to an ad supported version of the show. And the second one is that they pay for a subscription to the actual podcast, sorry, to the actual platform. Then you have people who go into other, who live in other countries, which don't have Spotify served up to them. And those people have more experiences. One is that they have to pay for a VPN and basically access Spotify externally using the ads. And then again, access externally using a subscription. And then there's that lost fifth one, which is they don't have the money for any of this. So they can't listen. So we fractured the user base by centralizing the content into a certain place. And the problem with something like Spotify is at that point, when you've done that and you've taken that sort of, you've taken that decentralized nature away, what you have left is not a podcast, it's essentially corporate radio. 
And like I say, for something that started off as a podcast, there's something that started off freely available. Having it move in that way is somewhat concerning. But at the same time, we have to look at why does that happen? And generally the answer is podcasting is expensive. Everything that takes up people's time is expensive. And podcasting from the little I have done of it is very expensive. You've got to take the time to script and record and edit and work with all of that audio and video, you've got to find a place to publish it, you've got to do all of the promotion around it. And if you are looking to make money off of it, you have to search around for sponsorships and add deals and things like that. So when a company like Spotify comes along and says we'll take all of that complexity off of your hands, we'll give you a good portion of money to pay your staff and to make sure you can make a living, it's very, very tempting. And you can kind of understand why it happens. And one of the things that we kind of found was that the free software community in general is not always the best equipped to deal with that kind of thing. We can't make a counter offer to that. Our weapon here and what we can do about this is, as I've said before, kind of try as much as possible to make it easy for people to make the decision to continue listening outside of those platforms, make it easy for them to continue to support their favorite podcast directly, which means lowering the barrier to entry for payments, lowering the barrier of entry for sharing, for supporting, for getting things out there. But it's an inherently sort of difficult thing to come up against and something that, you know, we haven't found the answer for yet. It's something we've done discussions about, how we might help podcasters support themselves, how we might help people support podcasters and musicians as well as this stretches to all areas. But the answer is a difficult one. It's not one that sort of, you know, comes very easily. Now, I've purposefully sort of left this, I think I've got it exactly half an hour, that's good. I purposely didn't want this to go on for too long. That's kind of the journey that we've had. The first thing is, podcasting is fun. From a sort of user perspective, podcasts are wonderful to listen to. Having a good place to put podcasts is great for, you know, people who make them. From a software perspective, they're a bit of a nightmare, especially when they aren't what your software was originally sort of set up to do. There's a lot of work goes into it. It's, I think it's underestimated in general. But, you know, it's worth putting the effort in to get something like that. Free software world, the open source software world, we still face some significant challenges with assisting people with things like anything to do with finances is something where we struggle. And it's because we don't have that much of a monolithic approach. It's because we don't have that central financing. So it tends to be that, you know, we need to focus more on improving the experience of working within a sort of direct donation world and a direct sort of way of working. And yeah, this whole sort of trend of existing podcasts being picked up by companies and, you know, things that used to be so free and easily accessible, becoming walled off inside. 
I only know of Spotify doing it, but I can imagine the same thing happening with Apple Music and Deezer and a lot of others, is kind of a concerning move, which is diluting what was really quite a fantastic sort of idea. And it's a shame that it happens to some of the ones that people find, you know, people connect with the most strongly. I think two of the most popular podcasts that have been picked up are things like Joe Rogan and The Last Podcast on the Left, which is, it's a shame because high profile things being taken over has meaning and, you know, it will normalise it in my eyes at least. But with the use of free software tools, with the use of, you know, these open standards, real podcasting will never go away. It will always, you know, bubble up underneath. We will always see people continue to, you know, to put things out. So yeah, it's not all hopeless. This wasn't what that talk was about. It was more just about this is something I think is very important and something that, you know, as a project, we're really striving to support. So I think that takes me to quite now 35 minutes, which is exactly what I was aiming for. If anybody has any questions, I think that the, I think that the number has been put into the chat. It's plus 495361 890286 8001. And if you're using Event Phone, it's just 8001. I'll just have a look and see if anyone asked any questions in here. Let's have a look. Yeah, how do I find, how can I find a funquel instance for a podcast I'm planning that suits me, my needs and my content the best? Yeah, so the link there is a good idea. The get started guide, we actually have a sort of a pod picker, we call it, which is just something that sort of takes you through the summary of different pods, which is what we have we referred to servers. People can write a summary of what sort of content they want them there. The two biggest servers are open audio. And I think Tanuki tunes, which is my server is quite sort of big and open. There are lots of servers out there. So, if you find one where you think it would fit in here, then great. You know, usually just find one that has open registrations and sign up. Or if you're feeling brave, install it for yourself. It's a fairly easy install. There are some hosts that will host it for you. They're listed on the funquel.audio website. So, if you just wanted somebody to set it up for you so that you could host a podcast, then yes, you could sort of put it in there. Do you know the podcast index.org projects? I don't personally. I will look it up after this. That looks interesting. If there's a solution that is to be found that could work for podcasters, could it also be applicable to indie musicians or are the two fields way too different in order to accommodate both? I'll just finish this one. I think I've got a telephone person coming in. So, if there's but I mean, yes and no. If we're talking about supporting financially, then yes, in theory, we already have some of those. I mean, there are already donation platforms which kind of work for a multitude of things. So, really, I think we should be trying to sort of lean into things like Liberapay, Kofi, maybe Patreon, rather than sort of trying to solve that problem within the publication software. Because those features already exist and because that's already quite well established, having better interoperability between those tools is probably the best way forward. You just want to take the complexity away from the person listening. 
It'd be nice if they had something like, for example, you're listening to a song, you really like it. So, maybe you preload a certain amount of credits to your account every time you sort of play a song you really like. You can throw some credits that way. I don't know the complexity of the actual implementation is beyond me a little bit, as I say, I'm just a front-end guy. But I don't think there's that big a difference between them from that sort of perspective. Yeah, the servers were open.audio is the main sort of flagship server. My server is called tanookietunes.com. I'll put that link in. But there are lots of servers, as I say, if you go to the actual funkwell.audio website, they're there. Why should I, as a podcaster, decide against a centralized platform with lots of users for a decentralized one with only a few users? How can we dramatically increase the visibility of my project, now my product on Funkwell? It's a good question. The thing is with a centralized platform is you may be on a platform with a lot of users, but that doesn't mean that you're actually going to be seen by a lot of users. There is a lot of stuff on Spotify which never gets played. That's just the fact of it. There are so many, there's so much content on there that you are just, you know, you're just a grain of sand. Obviously, if you've got a sort of an established fan base and you've got a lot of people already listening to you, then that doesn't affect you. But in that case, it also wouldn't affect you if you were decentralized. Those same people would still be listening. And in fact, you would be able to reach more people. Podcasts kind of allow for word of mouth in a way that something centralized doesn't. It can be passed around a lot more, sort of virally. As for Funkwell, I mean Funkwell's greatest strength is the Fediverse with this. So the fact that the audio can be shared between people's servers and sort of streamed directly from server to server, the fact that it can be followed on a multitude of different platforms, is where the visibility would come from. It's that sort of viral sharing. But the fact that it also works outside of Funkwell, it also works just using a traditional sort of podcatcher, also plays into its favor. And that's where Spotify kind of falls apart. Yes, Spotify has a lot of users, but you do kind of cut off an entire core audience, which is the concern. Yeah, it's not a, there's no simple answer to this. It's kind of the way it goes. But it's, I feel like the point made earlier in the chat, which was that if you centralize it and you lock it behind a wall garden, it's no longer really a podcast. It kind of stands. It's not a podcast technically anymore. It's something different. And that's not necessarily a bad thing, but it is true. It's no longer what it was originally supposed to be. So, you know, it is best, I think, to try and make use of, you know, tools that fit into the existing podcast infrastructure. Okay, that looks like all of the questions. I don't think anybody's calling in. Which is fine. So, with that being the case, if there's no more questions, thank you very much for listening to me ramble about podcasts for 40 minutes. Obviously, if you'd like to check the project out, it's just at funkwell.audio. But also go out and support your favorite podcasters, whatever platform they're on. You know, God knows they'd appreciate it, especially in these times. Thank you very much. I think that's where I'm going to call it quits. I think we have a phone call. Okay. 
Okay. Someone on the phone? Yeah. Hello. Hi. Hello. Oh, wait, I'm live. Excuse me for that one. I just want to know whether you're familiar with the website called Forgotify.com. You brought up earlier that there's like tons of audio that has never been heard of — that's basically what the site is about. It plays you a song or a piece of material on Spotify that has never been heard of before. What was the name of the site again? Sorry. Forgotify.com. Oh, no, I've not heard of that. That's quite interesting. So it just plays stuff that doesn't get played much on Spotify. Yeah. It literally shows you, like, a random song or a piece of audio that has been distributed on Spotify but never heard before. I even heard some tracks from 2008 and nine. That's great. I really like that idea. Yeah, that is a genuine concern. I used to use Google Plus a lot because I'm that kind of person. And I was part of a sort of publishing musicians club. And I had people on there who published on Spotify and they never got listened to. You know, it does take quite a lot for you to actually get picked up by Spotify's algorithms and to be sort of prioritized. So it's not the best solution for podcasts. There's, I think there's a reason that only already popular podcasts are getting picked up for Spotify circulation. But you know, that sort of project sounds really interesting because it'd be fascinating to see what gets forgotten down the sort of cracks of the seat, so to speak. It's also a big job to play the game of the algorithm and stuff. I think that's one of the main reasons why I'm making music myself and keeping it all mine personally. So that's why I'm a big fan of music. Yeah. Yeah, it is. It is. Anyway, thanks a lot. I'm not affiliated with the site, I just stumbled over it and decided to share that. Thanks. Yeah. No, thank you very much. That's really interesting to know. Thank you. Bye. Okay. I think we don't have any more calls — going once, going twice. Okay. Okay. No more calls. Okay. Thank you again for coming to watch. And I hope you have a great rest of your conference. Looks like it's going to be a lot of fun.
|
Podcasts are inherently decentralized. As a medium, they rely on open standards and simple, straightforward tools to allow them to reach as wide an audience as possible. In recent years, however, the desire to monetize the format has started to change the shape of the podcasting landscape. With the acquisition of popular podcasts, platform holders such as Spotify have started a precipitous downward trend into centralization which flies in the face of what podcasts stand for. Luckily, there are many emerging platforms within the world of free software which aim to make the publication and dissemination of podcast content easier, more accessible, and more decentralized to combat this trend towards neo-corporate radio. As the developers of one of these platforms, the Funkwhale project has faced and still faces unique challenges designing the necessary tooling and anti-abuse features to enable users to host their own podcasting platform. This talk explores some of these issues, some of the solutions we have found, and why we believe it is important that users have free and open alternatives to centralized podcast hosts.
|
10.5446/52318 (DOI)
|
Hello everyone, good morning from Brazil, Rio de Janeiro, and good afternoon to you. My name is Ana Carolina, I'm a computer scientist here in Rio de Janeiro. My research area is algorithms and privacy, and I'm currently working on a project to find bias in algorithms and social networks. I am 25 years old, and since a young age I have recognized myself as a scientist. In fact, I always liked this area, but I always struggled with the fact that I didn't see many black scientists in my classes at school and in my textbooks. So I started this podcast because, for me, it's a way of recognizing how these scientists have helped us in science. The illustration that you see is from Tainara Cabral, a friend and illustrator from Rio de Janeiro too. The research behind the podcast is seven years old. In February this year, I decided to put parts of the research into a podcast format, and now I will show a little of the creation process behind it. First, I divide my process for the podcast into three types: the first is science; the second is music, YouTubers, futurists — black culture; and the third is school — black history in the schools, with teachers. And my narrative process is like a book. I changed it several times, but for the last version I started using references from these productions. In fact, I am very fond of these three productions, so I started to study in depth the influence and the research behind them. They are three invitations to learn about Afrofuturism. This Is America is like a punch in the stomach about the racism that attacks us. And Black Panther is my favorite film. It's a possible vision of what it would be like if black people had more access and more opportunity in technology. Well, starting from that, we see black culture. I have taught programming since I was 18 years old, so I started using these classes and my students to test it and to create mini-productions and mini-audios about Afrofuturism and the black scientists. It's a good process, a fun process, I love this process. After this, I divided my research into three big areas: black culture, protagonists and technology. I will explain more about this. So Afrofuturism is like the other possible — the possible future for black people and black scientists. As for the protagonists, this is a role that we are rescuing, and technology is present, helping to guide our possibilities. And technology is not just digital for me; technology, for me, is the artifact. And with this process, after this process, I created the narrative of the podcast. How do the episodes work? My episodes are divided into three parts: an introduction with the culture of the country where the scientist was born — the economy, the language of this country; the conflict, which is the history of the scientist; and the solution that he or she shares with us — how that research impacts our society. The name Ogunhê is a curious name. It comes from the orisha of technology in the African religions, and the name is a greeting to this orisha, who helped me in the process. About the visual identity and the concept of the podcast: today, this is my process. I don't draw well, but I try. It's a good process for me. And today, the podcast has already reached 22 countries. For me, it's very fun. It's very good. It's very okay.
I think the hour has come to start making episodes in English, because all the episodes of the podcast are in Portuguese, since it's my native language. But with these 22 countries reached by my podcast, it's okay — next year I will start making episodes in English too. For me, it's a pleasure to make this podcast. About the other data: the age of the listeners is mostly between 18 and 35 years old, so it's young people, and other people listen too. And curiously, male and female listeners are about the same — it's almost the same public for my podcast. I'm using free platforms for creating and for research, like Anchor, which is the platform I use for creating and editing my episodes. And it's an opportunity for change, for changing perspectives. For me, particularly, it's a great pleasure to help change the perspective and to help rescue the history of the black scientists who contributed and contribute to a more human and accessible science that actually helps society. So I'm a little nervous. It's my first time at this event, and my second lecture in English this year. Okay, for me, it's a good test. Thank you so much. I'm very happy to be here. See you next time. You can find me on social media — I use Twitter a lot — and by email. So thank you. Bye.
|
Hint: the podcast will be in Portuguese. Ogunhê, a podcast about the contributions of black scientists to the world: the goal is to rescue the history of these scientists and give visibility to the science they make, so that black people feel more represented, and also to show how their achievements have helped the society in which we live. The podcast on Spotify was heard in 22 different countries. The idea is to talk about the process behind the episodes and the research on black scientists.
|
10.5446/51974 (DOI)
|
Good morning everybody. Sorry for this little problem. So, as a classical archaeologist, but also responsible for the center of the department of art history and archaeology at the University Paris 1 Panthéon-Sorbonne, my work aims at placing images and digital technologies at the heart of the teaching of art history and archaeology. Another task is that of curating the department's collections, that is to inventory, study, valorize and disseminate the university heritage. It includes among other collections a very interesting series of clay stamp seals from ancient Iran, a gilded bronze model of the ancient city of Rome, a collection of pottery sherds from the Near East, a large film collection and thousands of slides and photographs, as well as the archives of various professors who have made the history of the department of art history and archaeology in Paris 1 Panthéon-Sorbonne. In order to meet the new needs, linked both to digital pedagogy and to the digital dissemination of our heritage, not only to our students but also to a wider community, I looked for a user-friendly digital architecture. The idea was to focus on content and methodology and not on technology, but often the choice of a complex digital solution not only makes us social scientists dependent on the help of IT specialists, it also prevents us from understanding the logic and the rules of knowledge production. Conversely, I wanted to place my project in a win-win system: while the university acquires data and enriches its digital archive, the students involved in the project should also develop their skills for the job market. Accordingly, I chose a very simple, modular, open-source CMS, Omeka, and in November 2016 I launched Virgilius, the portal dedicated to the heritage of the department of art history and archaeology of Paris 1 Panthéon-Sorbonne. Right from the start of the project, two issues arose. How to transform the university heritage into a heuristic and teaching instrument through new technologies? And second, who should be the protagonists of this operation? In order to answer these questions, I will present here various case studies which show that the digital dissemination of heritage in all its forms is a complex process requiring the definition and the follow-up of a precise work procedure. Moving from the materiality of the objects to their digitization and dissemination, the following steps were implemented. You can see two stages: the materiality of the collection, and the digitization and dissemination of the data — conservation, understanding and contextualizing, analyzing and describing for the first stage; and, for the second stage, data structuring and processing in 2D and 3D, then data entry, uploading and storage, with special attention given to the metadata and the online collections in Omeka, and finally data visualization with virtual exhibitions. The three first steps were necessary because our collections had long been abandoned, forgotten, buried under layers of dust and in some cases damaged. The first task was to sort and to clean the objects, of course, reconstructing the context in which they were made and evaluating the potential of each one as well as their pedagogical value. This was achieved in collaboration with several department members, but most of all by students, who were placed at the core of the project.
Many BA and MA students from our department, but also Erasmus+ students, have gradually become the true protagonists of this project, dedicating many hours of their internship throughout the various steps of the chaîne opératoire. Students immediately realized that university collections were not only antiquarian objects to admire, but also an agent of their own training. It is important to emphasize that these students are specializing in archaeology or art history and have no specific IT skills. If GIS and database courses are well established in an archaeology curriculum, the digital turn in the humanities requires the implementation of new skills within their academic training. The active involvement of our students in this project was meant, on one side, to improve their skills in dealing with the archaeological artifact, the archive, the collection, in terms of description, analysis, interpretation and conservation; on the other side, to strengthen the digital training of our students, who were led to find solutions to the several technical or documentary issues encountered in the process. I move now from theory to practice. This is not the place to present in detail all the results of our digital management and dissemination project. I will just present an overview of the project, focusing on the main issues and solutions. One of the most important collections of our department consists of thousands of slides and photographs for teaching and research. Some are pictures from books, others are original photographs taken by professors over the whole 20th century. In the first case, it seems useless to scan these slides or photos, for which we have no copyright ownership. Nevertheless, we established a global inventory in order to identify the subjects and understand the teaching methods in the department before the advent of new technologies. By the way, these images can be accessed directly in the original publication they were taken from, or through the large iconographic online databases made available by, for example, Perseus or Arachne, two well-known repositories of archaeological iconography. The slides of the second type are of course of greater interest. They mostly offer unpublished images of archaeological sites that have completely changed today, either because of the progress of excavations or, in the case of the Near East, due to their destruction by ISIS. We also own a large series of slides belonging to one of our past scholars, Ernest Will. He had traveled extensively between the 30s and the 70s of the 20th century and he left an impressive number of photographs and slides. Thanks to these pictures, it is now possible to travel along the Nile River in the 50s or to discover a forgotten and inaccessible Syria. In order to upload these pictures to our online portal, we proceeded to the inventory, the digitization and the referencing of large parts of our collection. During these operations, special attention was given to internal metadata, a real challenge for our students. We chose two open-source software tools, XnView and GeoSetter, which allow with a few clicks to modify the textual and geographic metadata embedded in the image file. These indexing and referencing operations are necessary so that our resources don't remain lost in the World Wide Web. In the long run, a precise indexation will allow to find them in national and international research platforms such as Isidore in France, which are based on fairly sophisticated metadata queries.
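As a small aside, the same kind of embedded-metadata edit that those GUI tools perform can also be scripted; the sketch below is only an illustration and uses ExifTool (a separate command-line program, not one of the tools used in the course) driven from Python, with an invented file name and approximate coordinates:

import subprocess

def tag_image(path, title, description, lat, lon):
    """Embed descriptive and GPS metadata into an image file.
    Requires the external ExifTool program to be installed; this mirrors
    what tools like GeoSetter write, but in a scriptable form."""
    cmd = [
        "exiftool",
        f"-XMP-dc:Title={title}",
        f"-XMP-dc:Description={description}",
        f"-GPSLatitude={abs(lat)}",
        f"-GPSLatitudeRef={'N' if lat >= 0 else 'S'}",
        f"-GPSLongitude={abs(lon)}",
        f"-GPSLongitudeRef={'E' if lon >= 0 else 'W'}",
        "-overwrite_original",
        path,
    ]
    subprocess.run(cmd, check=True)

# Hypothetical example: a slide of the theatre at Palmyra (coordinates approximate)
# tag_image("will_palmyra_001.jpg", "Palmyra, theatre",
#           "Slide from the Ernest Will collection, 1950s", 34.55, 38.27)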
In the long run, a precise index section will allow to find them in national and international research platforms such as EZDo in France, which are based on fairly sophisticated queries of metadata. I move on to the collection of pottery shirts from the Near East made out of about 2000 fragments whose creation went back in the 70s. This collection includes pottery collected on the surface by professors during their side tours, with the exception of a small set coming from a systematic excavation on an Iranian site to rank Tepe. The provenance sites are located for the most part in Mesopotamia, but also in Arabia. From a chronological point of view, most of the shirts date to the Neolithic and to the Bronze Age. The main reason of this collection is to establish through the technological study of a fairly wide spectrum of pottery productions, a reference atlas for students and for pottery specialists. We proceeded first to the study of the collection, of course, through a systematic inventory of the shirts and the photographic coverage. For the technological analysis of each shirt, we elaborated a descriptive course to be filled manually, and a relational database that represents its digital translation. For the database, we choose the file maker proprietary solution, which allowed in a few weeks of work to come up with a very user-friendly and powerful tool. Even if the interface is still under construction, the data entry has already started. It allows to disseminate a collection that until last year was hidden in a metal drawer, but also to train students in the principles of relational databases. Thanks to FileMaker Server, which is provided free of charge by human national research infrastructure, the database is accessible, aligned to project managers and training students. Once completed, it will be open to the general public to make it with teaching and research tools. To further promote this collection, we developed 3D photogrammetry models of the more interesting shirts. For the data elaboration, we choose the proprietary software photo scan, now MetaShape, for the first step at least, point and dance cloud, and the open source software mesh lab for the further modeling. The 3D models are available on the SketchUp account of the university. This is another output of our project, if you want. By training students the photogrammetry, we initiated the production of 3D models, which are now used in our pottery courses, while allowing the dissemination of the data beyond our university. Once again, training, teaching and heritage dissemination are joined together into an eye-win-you-win logic. In 2016, as I said, we launched Virgilius, a common access portal gathering all the departments iconographic and heritage resources. As I said before, the tool, this tool is based on Omega Classic, and not Omega S, which has been developed by the Roy Rosensweig Center for History and New Media, that is by the same people who developed Zotero, the famous open source bibliography management software. I choose Omega as a practical solution because it is free and user-friendly. No technical skills or special server requirements are necessary. It operates at crossroads of the web content management systems, archive management systems and museum management systems. In fact, it allows users to create or collaborate on a website to display collections or to build virtual exhibitions. To extend its functionality, it offers a whole series of themes and plugins. 
From my point of view, one of the most interesting plugins is Netline, released by the University of Virginia. It is an interactive tool for telling stories using maps, timelines, and exhibition resources. This plugin, like the story maps developed by Esri, allows to make accessible to the general public what was once Zoterec, creating maps with various kinds of special data. To manage the metadata, Omega uses the Dublin Core Metadata standard as the most important French national libraries or the national scientific search engine is it or do. It perfectly fits the pedagogical and institutional needs of our department. Omega manages users according to various access privileges. A super user who has all privileges of administration and management. Administrator who has the permission to access and modify documents, collections and tags, and a sample contributor, the contributors that these students who have the right to add and edit the documents they have created. I now come to Virginia. Although the layout is still a work in progress with some technical details that need to be fixed, you can already find a significant part of our collections. In the toolbar, you will find several scroll down menus allowing to browse the available contents, which is actually and thematically organized. I will let you discover the portal by yourself. And of course, feel free to submit comments and devices, advices, sorry. What I want to focus on now is the panel on the left. This is basically the core of our portal, which includes all image libraries. That means digitized images, as I said before, archives and virtual exhibitions created from the images, images libraries. We insist on the realization of the collections often organized in subcategories, which were including one or more items. The item, as you know, represents the primary entity of the Omeka structure. Except for the archive collection, the older collections are classified by topographical order. His special attention was given to the geolocation of each item, that means of each image or archive. I'm sorry, as you can see here, we have some problems with Google Maps, and we will move in the next week to a new cartography based on Open Street Map, I think. I mean, everybody knows the problem with Google Maps now. And Google Earth. The idea was to digitally and virtually reconstruct the original context of objects, which one story in the boxes or shelves are now alienated from the unique location in time and space. Another important feature of our portal is the white chronological and cultural span of its content. It therefore allows through a simple search bar to get aggregation which cuts across space and time, joining together materials from a wide variety of locations, chronologies, and collections as well. Here you can see a simple search that put together several materials from different chronologies of different chronologies and locations. In Virgilius, the metadata can be filled in using free text, but in most cases, we implemented a list of values of codes from a controlled vocabulary. As you can imagine, referencing is a fairly detailed operation, and in each case, we decided to add specific metadata in addition to do of the doubling code. You can see the additional metadata. This concerns the archives. Here you can see the archives of Bohumil Suski, a Czech scholar who introduced European bronze, nylon, and iron age archaeology into the University of Paris in the early 70s. 
His archives consist of personal notes, drawings, press clippings, excavation files, and a large amount of students' homework and exams. These archives provide a mine of information for understanding and analyzing the introduction of a new discipline into a curriculum. We followed a precise documentation protocol, from sorting and ranking to inventory, from scanning to uploading and the selection of the most interesting pieces. Through the uploading, we have created a detailed hierarchical tree which perfectly corresponds to the physical organization of the archives on the shelves of our department. In order to further use the Soudský archives, we created, thanks to the Neatline plugin, a virtual exhibition showing on an interactive map the main stages of the scholar's education and scientific career. This is a good example of Omeka's flexibility, which not only allows us to store and manage data, but also to produce new data. In other words, from data collecting and data processing we finally reached data visualization — a completely new resource with great heuristic potential, produced by students and to be used by students and scholars. Despite various technical issues and textual or iconographical errors which will be corrected in the next weeks, we are proud that through this experiment we have been able to combine material and virtual, traditional approaches and digital humanities, 2D and 3D, description and visualization, teaching and dissemination. Thanks to the involvement of students, our department is promoting its heritage and creating a digital world, but students themselves have also been able to profit from this project in their academic training and to add a line to their resume in the context of a job market whose credo has become, more than ever, think digital. Thank you for your attention.
|
The lecture was held at the online conference "Teaching Classics in the Digital Age" on 15 June 2020.
|
10.5446/51976 (DOI)
|
Thank you again for the kind presentation and thank you also to the organizers for inviting me to speak at this conference and also for moving the conference online, which of course fits perfectly not only the current situation but also the topic, the overall topic of the conference. So I would like to share with you a practical example for using digital media in teaching in classical archeology, and I want to present a course that I developed and taught in Vienna at the Department of Classical Archeology in 2017. The course was entitled Antike in Wien, Antiquity in Vienna, and it dealt with Vienna's antique tradition, which was chosen as an example to make students familiar with and have them practice both analog and digital outreach activities and research communication skills. Not only are communication and dissemination now often required for research funding, the recent case of Hendrik Streeck's Heinsberg study about the COVID-19 pandemic in Gangelt in North Rhine-Westphalia and the disputed involvement of the PR agency Storymachine has also shown how important it is to know how to communicate your own research and thus to practice research communication skills. Another aim of the course was to enhance the visibility of classical archeology in Vienna and to highlight the relevance antiquity still has in the city today through content created by the students during the course. In the course, the students practiced three communication steps. First they developed guided tours to monuments, sites and collections in Vienna and presented them to the group on weekly field trips. Then we set up a blog and the students wrote a blog article about the place they had prepared in their guided tour. We then communicated impressions from our field trips and shared new blog articles on social media. Here we used the two hashtags you already saw in the opening slide and which I repeated here on this slide. I defined them at the beginning of the course. They were AntikeInWien, the name and the topic of the course, and IKAWien, which is an abbreviation for the Institute of Classical Archaeology in Vienna. This was a hashtag which already existed prior to the course and had been started by one of the department's then student assistants to share department news on Twitter. I will now give you an overview of our activities along the three steps. The first step in our communication journey was the guided tours. That's of course a classical analogue outreach activity which is widely practiced in museums and university collections around the globe. Each week one or two students prepared a tour which was aimed at a general non-academic audience interested in culture and the history of Vienna. Although we visited Viennese highlights and all-time favorites such as the Kunsthistorisches Museum, the Römermuseum and Theophil Hansen's Parliament, we tried to focus on lesser known collections and monuments to advertise these and make them more known to the public. Among them were the collections of Roman inscriptions walled into the staircase of Vienna's National Library, which you see on the top right. Then we visited monuments such as the Karlskirche or the Tegetthoff Monument with its columna rostrata, which was then surprisingly surrounded by scaffolding when we arrived there on our field trip. That's on the top left. Then we visited open air sites such as the gardens of Schönbrunn Castle with its fake Roman ruins on the bottom right and the Biedermeier cemetery of St.
Marx with a lot of tombstones featuring classical elements as you see on the bottom left. The students presented ancient statues or monuments as comparisons during the tour, for which they prepared handouts for the participants. After the tour we discussed as a group how the classical tradition could be made more visible and accessible to visitors at the relevant site or monument. After the field trips, the collections, sites and monuments we visited were presented in a blog, which was written by the students who had prepared the guided tours. For this I set up a blog in WordPress. I chose this site despite it being commercial, which unfortunately means that the readers will see ads which I have no influence on; if you change to a paid scheme, you do not see the ads. But in this case I chose it because it was free, it's easy to use and it's independent of university resources, which allowed me to keep managing the site also after my contract with the university ended, which was unfortunately shortly after the course. For a blog with several authors, it's necessary to define some general criteria beforehand, to give the blog a certain consistency and recognizability. The blog was supposed to be a scientific blog, so all information in the articles had to be well researched and correct. One of the first steps when planning a blog should then always be to define your audience. And we wrote our blog for a non-academic general public interested in culture, the history of Vienna, classical antiquity and its traditions and similar topics. The aim of the blog was to make readers curious to discover and visit collections and monuments across Vienna. It should communicate relevant research results and contain all necessary information for visitors to access the sites, monuments and collections discussed. I also defined a set of formal criteria for the blog posts. The text was supposed to have a maximum of 600 words and no footnotes. In general, online text shouldn't be too long in order not to lose your reader's attention. But I can tell you that this was the most challenging of the criteria for the participants, so most blog posts turned out a little bit longer. As this was a research-based blog, we had a bibliography of one to three essential references. What you see here on the left is the bottom part of the article on the Theseus Temple in the Volksgarten in Vienna. So you can see the bibliography with two titles. Then we should have two to five informative photos. Here it was important that the authors respected copyright, so that they gave the photographer's name in the caption, and for pictures taken inside of museums, part of their assignment was to contact the institutions and ask for permission to publish the photos online, which all institutions generously granted. Then we had some information for visitors, like a link to the official website with information on opening hours, ticket prices and the like. And then we embedded the address as a Google map. And at the end of the page, we gave the author's name. Apart from the length, there were very few criteria for the text itself. Of course, I recommended some general guidelines on how to write a good readable blog post, like an informative title and first paragraph, immediately telling what the text would be about. 
Then an online text should be easy to read and more like a conversation, so simple language with short sentences and no complicated syntax, technical terms should be explained if used. And I especially encouraged students to include anecdotes or interesting details to catch the reader's attention. And most important of course, to always keep in mind for whom they were writing the text. I didn't interfere with the outline, use of subheadings or how the content was presented though. This was up to the students, because first of all, I wanted them to have fun writing texts for a wider audience and exploring their own approach. This variety also made the blog authentic. After all, this was a blog written by students. For the majority of articles, this worked out really well. We had very diverse posts, for example, containing references to Monty Python's Life of Brian, or one was a dialogue between Empress Maria Theresa and her architect. We didn't implement any criteria for SEO, search engine optimization though. So for example, we didn't conduct any keyword research. As a research-based blog, the facts we wanted to communicate were more important to us than the number of clicks on our website. Nevertheless, SEO can of course be a valid choice, depending on the aim of your blog, your audience and topic, and with whom you might be competing. Because a blog is all about continuously and regularly posting new content to build an audience and make your readers return regularly, we had a time schedule of three weeks between the guided tours and the publication of the respective blog article. So one week to write the post, a second week for peer feedback and revision, and a third week for a final feedback and then the publication of the article. Because I set up the blog with my private WordPress account, I either invited students to insert their texts and images in WordPress together with me so that they could try out the process and experience how it works, or I did it for them depending on the students' interests and time. At the heart of the blog writing process was the peer review, for which I set up a course in Moodle, that's the e-learning platform that the University of Vienna uses. Here, the students could post their texts in a forum where their fellow students could then comment on them. This generally worked out very well. The students commented a lot on their classmates' texts and the authors reacted quickly so that most texts could be uploaded to the blog according to schedule. In Moodle, the students also found all other relevant resources like tips on how to write a good blog post and the handouts for the guided tours. In the news forum in Moodle, I informed students when a new blog article went online so that they could begin sharing it on social media. It was their choice if they wanted to use social media and which platforms. The students shared the links on Facebook. I couldn't read these posts, but it was evident in the blog statistics that a lot of readers redirected from Facebook opened the new blog post the day after it was published. I shared impressions from our field trips and announced new blog posts on Twitter, and the articles were then also advertised by a colleague at the department, both through his personal and through the department's official Roman archaeology Twitter account, always of course using the hashtags AntikeInWien and IKAWien. 
Please consider when seeing these posts that they were written when Twitter posts had a maximum of 140 characters, thus the texts were very short. Now they're 280 characters, so you can say much more in one post. Not surprisingly, Twitter was not the students' favorite medium. I could not motivate any of them to start a Twitter account, but I'll come back to the characteristics of the different platforms in a moment. What worked out better was Instagram. Here some impressions from our field trips were posted, and students even began to use the course's hashtags and promote the blog in combination with other events, which had nothing to do with or which were not part of the course, as you can see on the right. We even had a takeover of the university's official Instagram account. That was a campaign that the University of Vienna launched for a while, that they always had someone else from the university take over the official account for a week. It was our turn at the end of June 2017. Two students from the course shared impressions from our field trips and from studying classical archaeology in Vienna, with slightly longer explanatory texts than you usually see on Instagram. This way, the advantage was that we reached all of the university's followers and didn't have to build our own audience. The students did a tremendous job. This takeover was a great success. We had a lot of likes and many positive comments, of which I'm only sharing one with you on the top right. Let's have a quick look at the different social media platforms and their audiences. This is a study from Germany, which questioned 2,000 people aged 14 and above who use social media at least once a week. If we only look at the platforms we used in the course, you can see that Facebook is equally popular with people aged 14 to 29 and 30 to 49, while Instagram is most popular with younger people. Here you can see a really big difference between the age groups: 59% of German social media users between 14 and 29 years of age use Instagram, compared to only 17% aged 30 to 49. Twitter, on the contrary, is not very popular in Germany with any age group. It's a little bit different in the Anglo-Saxon world, but concerning our course, it was not a surprise that students didn't use Twitter. This is another study, this time from the US, commissioned by Snapchat, which is another social media platform we didn't use in the course. It looked into the question what people do when they use the different platforms and what feelings they associate with them. So both Facebook and Instagram are used mostly to communicate with friends. Facebook is especially used for private conversations, and that's the graph on the left. Instagram in particular is used to share photos. Facebook also serves to learn about events, while Twitter in contrast is rather used for information. So to read the news, follow influencers and topics of interest. This is reflected in the fact that many institutions, politicians and the like run a Twitter account. Twitter is also used to share opinions on certain topics, and especially negative opinions are very prominent on Twitter, which can create a rather toxic atmosphere. And this may be reflected in the negative feelings associated with the platform. This is now the graph on the right. Regarding Facebook, negative associations prevail too, though, while Instagram is overwhelmingly associated with positive feelings, as is YouTube, by the way. 
The major question when creating a social media account is therefore, who's your intended audience and what information do you want to share? If you want to make your work known to other institutions, funding agencies, politicians, academics and the like, then Twitter may be your choice. If you are trying to reach students and share impressions from activities and events from your institution, Instagram and perhaps YouTube may be a must-have. As an institution, Facebook is good to announce events, though maybe not exclusively. Before I finish, let's have a look at the blog statistics. These are some numbers from the analytics tool which is implemented inside of WordPress. We have a total of 13 posts on the blog. The statistics show that the blog performed best in the three months the articles were published and we also advertised them in social media. This is in the center, the graph in the center; especially May and June were strong, when the vast majority of blog posts were published and we also had the Instagram takeover. July was perhaps a bit weaker. That might be because, of course, we published fewer blog posts and also maybe because July, the whole of July, is already on semester break in Austria. Although the number of readers is not tremendous, you can see that even now, after we have stopped advertising the blog for a long time, the articles on Vienna's classical tradition are still being read. Until today, we have had over 8,000 views and over 3,000 visitors. So that means that each visitor read multiple articles or at least viewed multiple pages. One view is always one article being opened. The most popular posts ever and in the past six months were the articles on the Karlskirche and on Sigmund Freud's collection of antiquities. You can see that sharing the articles on social media had a significant impact. In 2017, that's the graph on the top right, we had 603 referrals to the blog from Facebook, 87 from Twitter and only 32 from Instagram. It seems that although our Instagram posts during the official takeover had hundreds of likes, only few readers actually clicked on the link to the blog. If we compare today's statistics, the majority of referrals come from search engines. And on the bottom left, you can see that the best day, the day with the most views ever, was May 3rd, 2017. And that was the day after the article on Sigmund Freud's collection of antiquities was published. So you can see that the course was a good opportunity not only to practice but also to track effects of research communication. Some learnings from the course are that it takes time to build an audience and you have to continue posting regularly to keep this audience. Different analog and digital outreach activities can be used to promote content to different audiences. The choice of outreach activities and social media platforms, as well as the content you produce for them, should be tailored to your expected audience, the aim of your communication and the style of the chosen medium. Social media posts may then be used to refer to more elaborate content on a website you wish to promote. Today, the IKAWien hashtag is still used regularly on Instagram and Twitter to promote events and news linked to the Department of Classical Archaeology in Vienna. The AntikeInWien hashtag unfortunately has fallen out of use as we stopped promoting the blog. We had a lot of fun exploring Vienna's classical tradition and sharing it with a wider audience. 
Students enjoyed learning about their city during our field trips and the blog provided an easy way not only to practice writing for a wider audience but also to make the students' work visible and to disseminate learner-generated content. This was also a motivational factor which contributed to the high quality of the blog texts. I hope that through our blog we can continue to promote classical antiquity's relevance and its legacy in Vienna to a broader public. Hopefully, one day there will be a sequel to the course to explore more monuments and collections from Vienna's antique tradition and to reactivate the blog and the AntikeInWien hashtag, of course. Now as traveling is still difficult, I invite you to visit Vienna via our blog, whose URL you see here, and I want to thank you very much for your attention and I'm looking forward to your comments and questions.
|
Video held at the online-conference "Teaching Classics in the Digital Age" 15 June 2020.
|
10.5446/51990 (DOI)
|
Teaching Classics in the Digital Age. And we'd like to thank the organizers for having us here. We, my colleague Torsten Bendschus and myself, would like to present to you here an ongoing teaching project located at the Institute of Classical Archaeology at the University of Erlangen-Nürnberg. With this, we hope to contribute to some of the questions raised in the call for papers of this conference. The use and development of computer-based methods, as for example the well-known geoinformation systems, form an important as well as prospective field in classical archaeology. However, when it comes to ancient images and their cultural meaning, there is not much research applying computer-based methods which could support the complex cultural analysis of images, their relations, and so forth. In an interdisciplinary research project in Erlangen, named Iconographics, which is funded by the Emerging Fields Initiative of the FAU, together with Peter Bell, Andreas Maier, and Ute Verstegen, we are able to take first steps into this direction. Let me first mention a few basic facts here, which are important for our teaching project, as it is a collaboration between traditional humanities focusing on image studies on the one hand, this is art history, classical archaeology, and Christian archaeology, and informatic sciences on the other hand, this is computer vision here. We address the question how complex image structures and narrative image systems could be analyzed in a much more comprehensive and maybe even quicker way with the help of computer-based methods. In doing so, we wish not to replace or imitate the human researcher, but aim at helping him or her in his or her work. For example, by providing additional information in situations of comparison, where similarity traditionally starts from a much more subjective basis. We are therefore developing methods especially fitting for images. We focus, for example, on the relationships between the protagonists in the vase paintings, which are expressed through specific gestures and postures, as you can see on this slide. We try to characterize this scheme in a formal way in order to find visual similarities within the vast corpus of vase paintings and furthermore to reveal meaningful image relations that help to understand the cultural meaning of the scenes. But let us come back to our topic here. Because we are in the phase of developing and testing, we are not yet able to teach the usage of a functional tool. However, this doesn't mean that we should exclude students from such ongoing research. On the contrary, we confronted ourselves with the question of how we can enable our students to take part in the fast-moving world of digital classics apart from the common use of already existing digital tools. Linking research and teaching in research-based learning is a main goal in academic curricula, and it is the most promising approach for the demands of working in the new field of digital classics. Since the future of young researchers will not only include the use of existing digital methods, but much more the development of new ones within interdisciplinary teams, we have to choose appropriate didactic models already in the students' curriculum. For being a digital classicist, the required qualification profile for a student of classical archaeology has to change a bit. 
As future experts of the discipline, he or she will, of course, have to develop relevant research questions, detect methodological problems, recognize and evaluate promising approaches, discuss them, and contribute to research networks using their specialized knowledge. But additionally, he or she has to do this in an environment of new promising computer-based methods and possibilities, which he or she normally can't develop on his or her own, but in cooperation with computer scientists. Because of such interdisciplinary projects, the students need to be trained in self-determined project management and in teamwork, where the sharing and development of individual and collective skills is indispensable. Therefore, academic teaching at university is actually confronted with quite demanding challenges, as we have heard before. In our opinion, the students should benefit from ongoing research in the field within regular academic teaching, not only in degree theses. We therefore looked for a didactic model and a possibility to involve them in the active development of digital methods strongly connected to a traditional research question. In our opinion, this is of high importance, because the students should equally increase their knowledge of the subject and of appropriate models and methods, as well as develop further academic and professional skills. For us, these conditions could only be met in an experimental course. The students should get by themselves the experience of trial and error while making their own decisions. Furthermore, we searched for a fitting didactic model which encourages the creative potential of the students that is highly needed in future work. Connected to a research project, we developed in this summer term a digital course, which should, so we hope, train some of the skills mentioned before. Torsten will now take over. In the beginning, he will focus on our chosen teaching method and will come to a detailed description of the course before I conclude with some more remarks. Please, Torsten. In order to support the aforementioned skills and competences of the students in a curricular environment, we have chosen a simulation game as a teaching method. Following its origins in China and centuries of usage predominantly in military strategy, the simulation game was first applied to a university environment in 1908 in the context of teaching economics at the Harvard Business School. While slowly also other fields of education, like political studies, applied it since the 1950s, its usage is still fairly rare in the humanities. A simulation game is a verbal model that offers the opportunity to simulate social conflicts and processes of decision making. This happens within a safe, pre-structured teaching space that demands cooperative forms of learning on the basis of predefined roles and groups. The students are realistically and effectively confronted with the complexity of situations and structures of action, reflection, and decision making, in which they have to act responsibly and according to their roles, which might demand skills and character traits contrary to one's own personality. Consequences, results, and repercussions are experienced directly and become visible quickly. At the same time, the students learn and train also the specific classical archaeological contents and skills within a highly practice- and output-oriented curricular framework. 
While its execution was a new and unconventional challenge also for us, we have chosen this didactic method because it strongly supports the aforementioned multiplicity and interdisciplinarity of methodical and subject-specific competences and skills that we wanted to target. The profile of a learning objective, according to this competence model, distinguishes between and appeals to multiple dimensions of competences on the one hand, as well as knowledge, values and attitudes, and skills on the other hand. This model we deem suitable for the complexity of our very own learning objectives, and satisfying results seem to be expectable on every level. Just to name a few, on the level of social competence, the students' ability of cooperation and compromising is trained as well as their assertiveness is challenged. In terms of subject-specific professional competence, they had to apply and increase their knowledge in the field of classical archaeology. And methodically, the students will, for example, have first experiences with methods of digital art history. We created the fictional research project Digital Object Recognition and Iconographics, in which both lecturers, Corinna and me, function as PIs. The project is thankfully supported by the computer vision experts Dr. Ronak Kosti and Prathmesh Madhu, and consists of two sub-projects. The first, entitled Understanding Iconography with Computer Vision, investigates the theme of Herakles' and Apollo's struggle over the tripod as a case study for mythological scenes in Attic vase painting of the sixth and fifth centuries BC. Research questions here were, for example: which figures are always or mostly present in these particular scenes, which objects occur, and how do they contribute to characterizing and identifying the scene and its key figures. A second team forms the sub-project Recognizing Objects and Their Significance in Attic Vase Painting Using Deep Learning Methods. This team has the task to investigate the occurrence and function of certain objects in black- and red-figure vase paintings, amongst others the club, the tripod, and the lion skin. Their research questions focus on variance and diversity in the depiction of these objects, their association with certain figures, as well as their situational or attributive occurrence. Intentionally, we've chosen objects here that also occur in team 1's tripod scenes in order to support comparability and cooperation between the teams. Each team consists of a team leader, a scientific communication manager, and three scientists with different areas of responsibility. Besides homework-like tasks of reading and summarizing relevant literature, the individual roles of the scientists, managers, and team leaders demanded collecting images, collecting data, annotating, analyzing, evaluating, as well as conceptualizing and organizing respectively. While this sounds strongly humanities-centered, both sub-projects were directed to primarily involve and evaluate given state-of-the-art digital methods and the benefits of their usage. This, on the one hand, includes using basic techniques of digital humanities, like labeling and annotating within images. On the other hand, and even more importantly, it was required to not only think about and reflect on possible benefits in classical archaeology, but to directly communicate and cooperate with the computer vision experts. 
This necessarily means to change perspective, to deal with a different view on the material and images, to comprehensively express your wishes and demands, and to cooperatively find solutions. Given the limitations of the curricular framework of a training seminar for advanced students, we structured the simulation game according to Heinz Klippert's phasing, which includes an introduction, a phase of informing and reading, formulating a concept and planning a strategy, interaction, preparation and execution of a meeting in plenary, as well as a content-specific and personal process-specific reflection and feedback phase. We adapted this game for a purely digital seminar that consists of three on-block virtual Zoom meetings and, in between, group-working phases, for which we created a communication structure on the university's online teaching platform StudOn. In the first out of three virtual meetings, we introduced the scenario of the game to the students. Tutorials and other materials were handed out, groups were formed and roles given. So-called team cards informed the students about each sub-project's questions, tasks, goals, and schedules. Additionally, we created role cards for each of the five predefined roles in each sub-project. In the first group-working phase, the teams individually worked on their tasks and developed strategies to target their respective research questions. In this phase, the teams collected images from the online databases of the Beazley Archive and Acubit. On their own, the scientists decided which metadata for these images is significant to their questions. Then annotations, markings of significant objects in polygonal or rectangular bounding boxes, were made in the collected images, which should then be sent to the computer vision experts in order to train the computer model. 50 images, including metadata and annotations, were demanded to be sent by a given deadline. These training images then serve as data for the computer vision model to recognize the annotated objects and give predictions in other images. Furthermore, it was the responsibility of the team leaders and the scientific communication managers to organize our second virtual meeting in the form of a workshop, in which first results and problems were presented. As guests, the two computer vision scientists joined this meeting. We are very thankful for their voluntary engagement, as they not only personally gave feedback on the students' work, allowing corrections and adjustments in the second group-working phase. Also, especially interesting was the fruitful discussion in which the students could ask their questions about the requirements, possibilities, and limitations of the digital methods in supporting their research. Here, a very realistic situation of an interdisciplinary research project could be simulated, in which very different perspectives and approaches collided and communicated. At the moment, we are in the second group-working phase. At its end, scientific summaries, final reports, and various fictional press releases are required to be written, as well as the complete collection of images, metadata, and annotations to be provided. Finally, a third virtual meeting will be held to show and discuss the final results, both of the computer model as well as of the conventional classical archaeological work. We will, of course, also speak about how the one can contribute to the other. 
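For readers unfamiliar with this kind of annotation work, here is a minimal sketch of how a single training record combining image metadata, a rectangular bounding box and a polygon outline could be structured, loosely following the widespread COCO convention. All file names, labels and coordinates here are hypothetical illustrations, not the actual format exchanged in the seminar.

```python
# A minimal, hypothetical sketch of one training record: image metadata plus
# a bounding box and a polygon annotation, loosely following the COCO style.
# File name, labels and coordinates are invented for illustration only.
import json

record = {
    "image": {
        "file_name": "vase_00001_side_a.jpg",   # hypothetical image file
        "source": "Beazley Archive",             # provenance metadata
        "technique": "red-figure",
        "dating": "c. 500-480 BC",
    },
    "annotations": [
        {
            "label": "tripod",
            # axis-aligned bounding box as [x, y, width, height] in pixels
            "bbox": [412, 230, 180, 260],
        },
        {
            "label": "club",
            # free-form polygon as alternating x/y pixel coordinates
            "polygon": [150, 300, 190, 310, 205, 420, 160, 430],
        },
    ],
}

# Write the record to disk so it can be handed over to the computer vision team.
with open("vase_00001_annotations.json", "w") as f:
    json.dump(record, f, indent=2)
```

A consistent, machine-readable record like this is what allows the annotations to be collected from several students and fed into a single training pipeline.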
In these phases, it will be didactically important to not only give feedback and discuss the subject-specific research content, but in conclusion also to reflect on every participant's individual experience in this simulation. Thank you, Torsten. As you have heard, many different reasons have led us to why we have chosen this type of didactic model. Although the final reflection is still to be made at the end of the term, we have the impression that most students are much more motivated to learn and work in this type of course, because they see that their own work contributes to the team's project goals. Furthermore, we asked ourselves if this type of didactic model would keep up the motivation of the students to learn constantly during the course, especially under the circumstances of the COVID-19 crisis, when students had to organize their learning situation anew. We heard from many sides that a sudden change from a normal to a completely digital learning environment often has created motivational problems, which increases the well-known conflict between the different learning types in a course. In our first evaluation in the middle of the term, however, the students claimed that they were much more motivated to learn constantly. So we conclude that the requirements of teamwork with specific goals and the personal responsibility for a specific topic, which is needed by the others, lead to constant and active learning. This preliminary experience of efficient teamwork is very important for planning a future application of such a didactic model, because the lecturers should not interfere in the process of the projects, even when a team would not proceed well with its working system. At the moment, we can say that it works well in terms of constant learning and cooperating as a team, even if it is a virtual course and we never meet personally. Within their groups and with a specific role, we have seen students who are much more creative and communicative than we thought from our experience in other courses. This is a good personal experience for us as lecturers. On the other hand, however, we had to learn that both parts, students and lecturers, had to work more than for a usual course. And in addition, we needed help from our colleagues of the pattern recognition lab in Erlangen. We had to prepare every single step in the beginning with texts and materials and to think of possible ways before starting the course, because after starting, we should not intervene as normally guiding lecturers, according to the didactic concept. Moreover, it requires much from the students: not only an unexpected workload, but also openness for an unusual learning process, which would not fit every student, because some of them had to take over roles which seemed not to exactly fit their personal competencies and personal characters. We are very grateful, therefore, to our students for having taken part in our experiment. Although we already see the benefits of the didactic model, we still see different problems ahead regarding, for example, the possible implementation of such courses into traditional curricula or the question of how to give marks, which contradicts in some way the idea of trial and error. Finding space in the curricula for such experimental approaches isn't easy, because we have to focus intensively on the training of other important competencies and deep knowledge in our discipline in the first place. 
This is a problem especially for combined or dual-subject bachelor programs, which are typical for our small disciplines, the so-called kleine Fächer. These combinations limit the number of courses. It could also be possible to include such a project in some open space in the curricula designed for practical experience, where normally excavations, museum work or the like are placed. Despite the problems I mentioned, it is worth a try, because we would like to emphasize that this is a possibility to promote young scientists in digital classics at a very early stage of their future career. In our opinion, research projects should implement not only a promotion of young scientists from their PhD studies onwards, but much more include regular students during the bachelor or master degree. Connected to an ongoing project, one can ensure that the complex infrastructure required for a simulation game already exists. Furthermore, everyone benefits from such a constant discussion of the research project at different levels. At the end of this presentation, Torsten and I hope that our model of a simulation game will stimulate an interesting discussion about how to implement actual research in the field of digital classics into our everyday academic teaching. Thank you very much for your attention. We'll be curious about your questions.
|
Video held at the online-conference "Teaching Classics in the Digital Age" 16 June 2020.
|
10.5446/56986 (DOI)
|
Good morning. My name is Dion and like most of us here today, I've been involved with a variety of free software. I trained traditionally as an architect in Australia and I work in the building industry. So when I say architect, I'm referring to the brick and mortar kind, not the software architect kind. So today I'll be talking about free software building information modeling or BIM and how it relates to architecture, engineering and construction. So you guys have probably heard of architectural scale projects that have used free software to design and build them. And if you haven't, I'd like to highlight two which are super awesome and you should totally check them out after this talk. And the first you should check out is the WikiHouse project where you can build your own almost flat pack house for roughly $2,000 per square meter. And here's one example of a WikiHouse based project done in FreeCAD called the Wikilab project. The second one is the Open Source Ecology project. And this guy made an open hardware design to create a brick press. And from that he created a whole suite of blueprints to manufacture and build all the machines for societal function like a bulldozer or cement mixers. And right now he's working on the MicroHouse project, which is a modular housing design where you can build housing for $200 per square meter. And that's a factor of ten cheaper than the WikiHouse. But unfortunately today the guys behind those awesome projects are not going to be presenting and you're stuck with me. So I'm going to talk about free software in medium to large scale architectural projects instead. So on a domestic or small scale project, usually it's easier to use some free software because the team is smaller and you can revert to traditional 2D CAD or use hand calculations for the engineering side. But when you start scaling up and we're talking, you know, hospitals and laboratories and shopping centers and mixed use urban developments, this is a little bit of a different story. And at this scale we're dealing with many thousands of drawings produced by multiple companies who are leaving and entering contracts. And this whole process usually goes across multiple years. So when we talk about large building projects, pretty much everybody relies on proprietary software. And most people haven't even heard of free software. The software vendor market is dominated by an Autodesk monopoly and the digital data that's created is all stored in proprietary data formats. The industry knows this. The users know this and they don't like it. In fact, just last year there were over 100 UK architecture firms which signed an open letter about the dismal quality of proprietary software addressed to Autodesk. But these firms can write a letter but they have nowhere to go. There's no alternative to switch to. And this represents a huge opportunity for expanding the scope of free software. But before I go into some of the super cool free software that's recently being developed, I'd like to paint a picture of just how fragmented and diverse the industry is. And by the end of this talk, I'd like to communicate that to develop free software enough to deliver large scale architecture, there are three things which need to happen. The first is that we need so much more than just CAD. We need a huge variety of tools to cover all the different tasks that need to happen when a large building comes together. 
And I'll talk a little bit about the different disciplines involved in making a large building to help illustrate this. The second is that we need to collaborate a lot more and integrate free software together. So from the first point, yeah, we need a lot of different software, but we need the software to be interoperable. Otherwise, we can't achieve the workflows needed to manage our built environment. And for this, I'll talk a bit about open data standards and what free software is available for doing that. And finally, we need a bit more community building because most of the industry doesn't know that free software is an option here. And so for the free software that's already mature, we just need to let people know about it more and share what we know. And then when we start working on the stuff that's not yet mature, we need a really big room for people to talk to each other. The guy writing code needs to sit next to the guy laying bricks, and he needs to sit next to the guy running an energy simulation. And this all needs to be fed back to a guy who's writing tutorials on how to do all this stuff. So we really need a vibrant community that's not specific to a single software or not specific to a single discipline, but across multiple software and across disciplines. So let's start with what disciplines are involved in a project. And one of those is this guy, of course, and he needs software which does this stuff, and that's all right, because there's free software which does this kind of stuff. But architects don't just do this. They actually use a lot of artistic tools that overlap with the CG, VFX or gaming industry. So software that's not at all to do with CAD, like Krita or the GIMP or even gaming engines like Godot, are actually really important to their workflow. So hopefully you can see that just the architect already needs a wide variety of software. That's much more than just CAD. And on a large project, there are even more requirements. And this is just one discipline. So we need a huge amount of tools to support just an architect's workflow. And the tools we might think that free software already has covered might actually not be practical on a large project. To give an example, CAD tools need to generate 2D drawings from 3D models. And this is a feature which exists in free software like FreeCAD, but it might not be able to scale to a large project. Because on large projects, you need to combine five or six models together. And each one of those models is produced in a different software, and each one contains tens of thousands of objects that we're managing gigabytes of geometry, and maybe it's not there yet. And then there's stuff like asset registers. For a building owner, an asset register is much more valuable than a fancy 3D CAD model. So we need spreadsheet or document management or CRUD type applications, not just CAD. Or for another example, something like PDF markups. It seems simple, but it's actually really fundamental to our workflows. And there's no free software that I'm aware of which implements the full measurement annotations in the PDF spec. And then of course there's things like 3D PDFs, which is a problem which probably won't go away, but hopefully we can try and ignore it as long as possible. But that's just architects, and architects don't work alone. They work with all of these other guys, all these other designers, and well, because they're all doing kind of design stuff, there's some overlap in the tools that they need. 
But obviously each one has their own quirks and showstoppers that determine whether or not the software is suitable for them. But there are also other groups of disciplines. And when you cross to another group, the tools overlap a lot less. Here we need a lot less CAD stuff and we need more GIS and surveying and site feasibility design stuff. And just like architects have specific requirements when you apply it to a large project, it's exactly the same when you look at these guys. So even though we might have general free software, let's say for laser scanning or point cloud reconstruction, we could still be missing key features which are a showstopper. Like the ability to segment the point cloud into building objects like walls and columns and compare them object by object to a 3D model. And of course, buildings don't happen without engineers. And most of these engineers, I'm not going to pretend that I know what they need, but I will just highlight the sustainability consultants. Because these guys need simulation engines for energy, lighting, and CFD. The good news is that we've got really amazing free software which does that like Radiance, Energy Plus, and Open Foam. But the bad news is that you need a PhD in reading manuals to use them and we could do a huge amount better on the UX side of things. So although these functions are the gold standard, it's really difficult for users to use. And then there's also stuff that's really important for the built environment on a larger scale that's got nothing to do with CAD. So things like climate change projection analysis, or open data standards on how to track material lifecycle impacts, or supply chain management software to deal with modern slavery. So here's a few more groups of people who might get involved in a large project. And this is obviously not a comprehensive list, but I just wanted to highlight just how diverse the software is that we need. So here's a few final interesting examples of free software that already exists, but you might not consider you will need it when you design a building. Like explosion simulation, which is needed by security consultants when you're doing a defense building. Or visual node programming, which lets architects generate building shapes. Or real-time point cloud capture for certifying construction. So all of the software ideally also needs to collaborate and integrate and interoperate to make a building happen. And this is really important because large projects are increasingly relying on digital workflows rather than reading from traditional 2D paper printouts. But the reality is that right now everybody is stuck on proprietary software with proprietary data formats, so the tools don't work very well outside each walled garden of each vendor. And they don't share data. They don't really follow international open data standards very well. And so the majority of the industry still works in isolation. And this brings me to my second point about the improved integration of free software in our built environment. So when we attempt to build software across these very diverse disciplines who need to collaborate, international open data standards play a huge role. And these standards revolve around a concept called building information modeling, or BIM. And the way this BIM concept works is that instead of just having geometry and CAD layers, you now have a semantic database of objects like walls and doors and windows. And these objects may have geometry associated with them, or they may not. 
But most importantly, they hold relationships like what room they're part of, or the fire rating, or when they're going to be built in the construction sequence, or which organization is liable for its performance. And these relationships are special because they extend across disciplines. So when we integrate free software for the built environment, we need to integrate not just by sharing geometry, but these relationships and properties that make the geometry meaningful to each discipline. So because most of you have seen FreeCAD before, here are some screenshots of the BIM functionality inside FreeCAD. And you can see how certain relationships are exposed to the architectural discipline. And I just want to highlight that BIM data can get really quite complex. It can include things like simulations or construction timelines or linking to building sensors. Most of these BIM features you see in FreeCAD are based on a vendor-agnostic international open data standard for BIM, known as the Industry Foundation Classes, or IFC. So all the data that FreeCAD is adding to the building model can be taken out of FreeCAD and analyzed in other software. So for example, you can use Code_Aster to perform the structural analysis or a tool called IFC COBie to create the maintainable assets register. By ensuring that free software implementations comply with these open data standards, this really helps improve interoperability and provides a way for proprietary users to start incrementally switching to free software. Although all this BIM and IFC stuff is quite a niche topic, it's increasingly a fundamental topic that our industry relies on for a built environment. So many governments are now mandating BIM technologies and projects, and the majority of large developments rely on BIM. And this is really exciting because the free software implementations of BIM standards are actually much, much further ahead compared to proprietary solutions. And FreeCAD actually has some of the best support for BIM data standards in the industry, and to do this, FreeCAD uses a library I'd like to introduce called IfcOpenShell. IfcOpenShell started roughly 10 years ago. It's a C++ library based on OpenCascade, and it lets you read, write, and analyze this IFC-based BIM data in a variety of formats. It's got about 85 contributors, there are a few core developers, and over 600 stars on GitHub. It's also used under the hood in these tech startups, and it's starting to be introduced in university courses. To give you an idea of how IfcOpenShell works, here's some previews of the Python bindings, which you can use to create data and relationships. But data and relationships are only half of what IfcOpenShell does. The other half is about geometry creation. So geometry in the AEC industry is really, really varied. So for example, you can use solid modeling, and that's really good for modeling reinforcement bar or steel framing or basic walls. But then sometimes you'll also use meshes, which are really good for heritage reconstruction or conceptual modeling or archviz. And sometimes you want to use really specific objects because you want to quickly derive data about it, like I-beams or square sections, or things like rail alignment curves, which have really specific constraints that vary from country to country. So no geometry kernel, of course, has an I-beam as a primitive data structure, but this is how our industry thinks. 
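As a hedged illustration of what those Python bindings look like in practice, the sketch below creates two entities, links them with a relationship object and queries the file again. It is deliberately minimal: a real IFC model would also need an IfcProject, owner history, units and geometry, and exact requirements may vary with the IfcOpenShell version.

```python
# Minimal sketch with the IfcOpenShell Python bindings: create entities,
# relate them, then read the file back and query it. Illustrative only.
import ifcopenshell
import ifcopenshell.guid

model = ifcopenshell.file(schema="IFC4")

storey = model.create_entity(
    "IfcBuildingStorey", GlobalId=ifcopenshell.guid.new(), Name="Level 1"
)
wall = model.create_entity(
    "IfcWall", GlobalId=ifcopenshell.guid.new(), Name="Partition wall"
)

# Relationship objects are what make the data "BIM" rather than plain CAD:
# this one records that the wall is contained in the storey.
model.create_entity(
    "IfcRelContainedInSpatialStructure",
    GlobalId=ifcopenshell.guid.new(),
    RelatingStructure=storey,
    RelatedElements=[wall],
)

model.write("minimal.ifc")

# Any other tool that speaks IFC can reopen the file and query it by type.
reopened = ifcopenshell.open("minimal.ifc")
for element in reopened.by_type("IfcWall"):
    print(element.GlobalId, element.is_a(), element.Name)
```

The same query pattern works regardless of which authoring tool produced the file, which is the interoperability point made above.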
IfcOpenShell provides a really good set of tools to tessellate and expose these geometries across different applications in a standardized manner. Another free software that uses IfcOpenShell I'd like to introduce is a more recent one called the BlenderBIM add-on. And this provides BIM functionality as an extension to Blender. The BlenderBIM add-on also recently won the buildingSMART 2020 awards in tech, and this is the highest award available in the field of BIM. And this kind of recognizes the potential that free software has to become the norm in this industry, not the exception. So I'd like to show you some cool things of just how IfcOpenShell can turn Blender, which I guess is traditionally more an artist's tool, into something that we can design and manufacture with. So the image here that you see is modeled in Blender, but because it follows BIM open data standards, we can bring it over into proprietary software. And even though it looks kind of artistic and sketchy and conceptual, it actually contains enough semantic data to start scheduling out components. So here's another example. All the buildings you see are actually generated from some evolutionary algorithm, which understands things like solar access and ideal circulation and comfortable areas and volumes. And it's not just for fun. This is actually really, really useful for doing feasibility studies, where you don't need fully detailed designs, but you do need to test out spaces and spatial relationships. In this example, you're looking at my living room, and this is not a photograph. It's actually a render. But unlike other renders, every object in this image is not just geometry. It actually is semantically classified and has BIM relationships. So all of this data can be accessed in Blender or FreeCAD or even the proprietary software that's currently being used. And to prove that this data is semantic, this rendering is actually not a traditional CG render, where you use a color picker and you pick up textures and adjust the lighting. This is a validated lighting simulation. So data from material samples were numerically input and semantically assigned to make sure that it's not just photorealistic, but it's also photometrically correct. In case you're curious, this uses Radiance, which is a free software engine for doing lighting simulation. And on the left, you can see the render, and on the right, you can see a photograph. So a more common example of light simulation is in solar analysis. So here we have a visualization of sun positions throughout the year and a heat map of sun hours across an analysis period. This uses other free software called Ladybug Tools and Sverchok, which allows users to do visual programming within the applications. And this is really exciting because it encourages users who may not know how to code to put together little pieces of software themselves, which introduces them to the flexibility that free software gives to users. Here's another example where you have a beam modeled in Blender and by using IfcOpenShell, the BIM data was then translated into a structural model for Code_Aster. But because it uses this open data standard, the model could have equally well come from FreeCAD, so you can use the best tool for the job. And of course, here we have drawings. We need drawings to be generated from BIM models. So again, what you're seeing here are not traditional drawings where you might draw lines and polygons manually. 
These are automatically cut and generated from 3D objects and semantic data, and it's generated directly from the open data standard. So you can create this model with Blender or FreeCAD or whatever you want, which is actually what happens in the industry because people need to use different tools. And all of the annotations can be generated agnostic of the authoring application. And here's another example, which is not just about geometry. Because of all of the rich data you get in your BIM model, you can convert contractual requirements into standardized unit tests. So you can start doing QA checks when you have two or more different disciplines collaborating. So one thing I'd like to highlight about what I've shown you so far is that a lot of this is based on an open data standard. And so a lot of the code can be shared between many CAD authoring applications, as long as the output is a standards compliant BIM dataset. So this unit test auditing tool actually works equally in Blender and in FreeCAD, same with the drawing generation. It works equally in Blender and FreeCAD, and the same with the structural simulation or the environmental simulation. All the code can be reused. And the authoring application is just really an interface to portions of the metadata as well as geometry. And this is really exciting because you can start mixing different tools with what they're really good at, like solid and NURBS modeling in FreeCAD and meshes in Blender. So I hope I managed to show you a few cool things, but there's really so much more which needs to be done. So at the beginning of the presentation, I showed all of these various disciplines and a lot of their use cases are still not yet covered. So here's a small arbitrary kind of incomplete list of things that we still probably need to be developing. And hopefully over time we'll start covering more and more of these use cases. So one of the things which prompted this presentation today is that in the past year, an amazing amount of work has happened in creating free software for our industry. And it's really exciting to see it happen. And one of the signs that shows that this has happened is that if you rewind a few years ago, many people in the AEC industry didn't know what free software was. But just last year, a new community called OSArch, or Open Source Architecture, started up. And although it says architecture, it's really about the whole built environment and covers all these disciplines. And before this community existed, we didn't really have a community for people to discuss free software across all of these disciplines. Instead, we had people discussing and doing really great stuff, but in these pockets in the industry. And in just under a year, we're now at over 700 members and there's a wiki and a forum and a news site and a growing collection of articles about how to start adopting these open data standards and how to start switching to more free software. There's also an IRC channel where there's usually 20 or 30 of us online. But most importantly, the people involved in OSArch are not just developers. They are people who are working in the industry and this really makes a huge amount of difference because it helps connect both users and developers across multiple projects on real life workflows. And the practical result of this is that there's a lot more code sharing and communication between developers. 
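To make the idea of turning contractual requirements into automated checks more concrete, here is a hedged sketch of a simple audit written against IfcOpenShell: it flags every wall that lacks a FireRating property. It is not the auditing tool shown in the talk, just an illustration of the pattern; the file path is a placeholder and the property set name follows the standard Pset_WallCommon.

```python
# Hedged sketch of a requirement check: every IfcWall must carry a FireRating.
# Not the actual auditing tool from the talk; the model path is a placeholder.
import ifcopenshell
import ifcopenshell.util.element

model = ifcopenshell.open("delivered_model.ifc")

missing = []
for wall in model.by_type("IfcWall"):
    # Collect all property sets attached to the wall as a nested dict.
    psets = ifcopenshell.util.element.get_psets(wall)
    fire_rating = psets.get("Pset_WallCommon", {}).get("FireRating")
    if not fire_rating:
        missing.append(wall.GlobalId)

if missing:
    print(f"{len(missing)} walls are missing a FireRating:")
    for guid in missing:
        print(" -", guid)
else:
    print("All walls declare a FireRating, so the requirement is satisfied.")
```

Because the check only reads the standardized IFC data, the same script can audit a model delivered from Blender, FreeCAD or a proprietary tool.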
So to end today's talk, here are some links and credits of the really cool recent stuff that's been happening, as well as how to check out the OSArch community. So thanks for watching and let's help change the industry. So, Dion, I think that we are going to be starting here now with the Q&A. So, first of all, thank you for an absolutely amazing talk. This kind of is really opening some of the areas of the CAD dev room. I'd like to start by asking you a couple additional questions that came up about how this interoperability can be expanded. Is it just a matter of, if a software package wants to make itself useful to the BIM world, all it has to do is be able to digest and import and export this IFC standard that you were describing? Or is there more to it? Yeah, I think part of it is being able to speak the same language. I just want to clarify that it won't necessarily be an import-export, because we're not talking about a file format here, we're talking about a schema and set of relationships. So as long as everybody standardizes the way that they record certain relationships, then at the very least we'll be able to interoperate on some of the basic workflows. Of course there'll be niche topics, but at least we can cooperate a bit better. Okay, no, that makes sense. So one question here from Julian Todd is, what do you see as the necessary elements to bringing some of the larger civil engineering companies into the open software or the free and open software area in terms of their dollar contribution? So they have a lot invested, obviously an individual project can be a strong driver there for them. So what's the impediment and how do we go about clearing that? My background is actually as an architect, so I would be the wrong guy to answer this question. But that's what OSArch wants. Please ask it again over there, but I do also feel, and I could be totally off the mark here, that you're absolutely right, that in terms of civil engineering there is less free software available and the maturity level is much lower, whereas for architects it's a lot further ahead. And then each discipline is a bit plus or minus, like cost planning, which is also very much behind the times. So I think one by one the disciplines will start to catch up. And as soon as everybody starts needing to speak the same language, that's when the incentive will grow for people to start adopting, at the very least, not just free software but something that can interoperate with free software, because the current state of the ecosystem is that we're stuck in proprietary silos. So the moment we can start speaking the same language, we can incrementally switch discipline by discipline. And so related to that, how well do these proprietary systems speak the IFC relationship model? Generally, not very well. But I think that's just symptomatic of the culture of the AEC industry, because it's so diverse, it's so fragmented. And "it's the least bad thing" is the best way to describe it. This is the closest we've ever got to agreeing on some sort of standard. But there is a trend that people are seeing that it's growing and growing extremely quickly, especially being pulled by clients and big government clients who say no, we don't want a proprietary data set after throwing all this money at this kind of dollar project that will expire in a few years. 
So it's really being client driven that, whether they like it or not, people will need to deliver contractually high quality of this open data standard. Well, that's, that's an interesting point. So if you, if you were to project ahead, are we looking at the, the evolution of these, of these open BIM projects being client driven so clients specifying and perhaps even putting their funds where that where that is, is this something that is on the radar of clients or are we still in the infancy where we were not quite advanced enough for that to become a topic that even comes up between, between customers and and the say the architecture field. I think on a already significant number of large scale projects, clients are contractually requiring this data. But the issue is that nobody has the technical means, due to the proprietary ecosystem to, to work out whether the client is actually what they've asked for. So it's turning into a bit of a checkbox exercise. So everybody's saying, oh, look, we're doing BIM, we're doing data, look at all the data we're generating, but in reality, all the data is pretty garbage, they just don't know it yet. But slowly people are being clued into this. And so when you give the clients the tools to inspect, they'll say, hold on. And there I have witnessed many scenarios where fees have been withheld simply because they, they got the smarts to look at it and say, hold on, you know, this isn't quite right. And so that will be a client side poll. But I think it's also happening to the people in the industry, like the architects and the engineers, speaking from the perspective of architects, they really do mean well, they want to create really neat data sets because it helps themselves. It's just that they don't have the tools. There isn't currently a free software equivalent that's as mature as the commercial offerings for large scale projects at the moment. But the moment that something starts coming close, I think we'll start seeing the smaller firms, people doing commercial products start switching over, and that will sort of snowball, I believe. That, that makes sense as we, as we get more coming in and more mature soft, more mature software, everyone, everyone starts to coalesce around the most useful aspect in the room, which brings me to one one point that I didn't quite see in your talk is what sort of, what sort of legal compliance do does the, the open software packages need to need to provide in this so say we're doing simulations, for example, where we are actually trying to generate data that is used to show that the building won't fall down in a high gust of wind. The, that with the larger packages, you kind of have this idea that no one ever got fired for, you know, for buying IBM or Autodesk or that is. So, how do you see that, how do you see that hurdle being addressed in the open software field. From the perspective of architects, we don't get paid based off our performance. I mean, I mean, if the building looks kind of funny, it's like, oh, well, anyway, I digress. I guess, all we're talking about here is a way to interoperate a bit more that's what BIM is about it doesn't replace the fact that there are many, many sub disciplines with their own standards of quality to, to assure. So it doesn't change the fact that you're still using FEA or you're still using a particular light simulation engine. All those are totally agnostic of the open BIM concept. So, yeah, hi. Well, Dionne, thank you so much. This is this has been absolutely fantastic. 
I'm hoping that let's see. Wayne, did you catch any additional questions here I'm, I don't see any now. So, Dionne, would you like to leave us with a with with a closing thought. Well, just I guess another shout out to us arch.org because that's where a lot of people are helping like from incredibly diverse backgrounds coming together to say, Hey, you know, let's let's talk about a new ecosystem of tools that talk to each other. And so please check that out so that we can continue to build stuff. The free CAD guys are there. And so, come along to excellent, excellent, fantastic. And we'll post the link here for the for this chat so as soon as our schedule moves over which I think, oh, I was off by five minutes again. So, we, we have a little, a little extra, a little extra time here so sorry about that I was trying to trying to wrap up wrap up the Q&A session a little bit, a little bit early, they're not going to open the chat room here until we're until we actually move through the Q&A time so let's, let's see the inside of open arch. What, what your, what you're kind of describing is this collaborative meshing of different of different platforms and you had a slide up that showed all of the different niche, niches that maybe haven't been addressed at all. And those is their room for developers who might be interested in this to kind of come in and build a library that, that talks IFC with the, with the larger community inside that niche and then kind of what, what does it take for them to, to tie into this, this overall ecosystem. I think that's the beauty of it because there are just so many things that are lacking so there's so many things that you can work on, and you don't need to be a guru in anything although if you are, then all the better. So, there are of course well known projects like Libre DWG and the 2D CAD world, which we still see a lot of use in our industry. And unfortunately, it's still just not quite there yet. So, and then you'll get really niche things like we need better web viewers, or we need little utilities, which just run more audits, or we need simple types of spreadsheet generation things and people to build templates for costing on different places of the world. And so, there's very simple crud applications as some of them so there's a huge variety of tasks which I think, no matter how good your your coding abilities are or even not just coding just like your knowledge of local standards and, and just trying to becoming a power user is extremely, extremely useful. That's an interesting, interesting point because in a lot of jurisdictions in at least within the United States the building standards are not generally available. At least as a, as a an open system so how, how would you, I mean this is perhaps a little left field but how do how do. Thank you.
|
BIM (Building Information Modeling) is a paradigm for 3D CAD models made for Architecture, Engineering and Construction (AEC). Long a closed, proprietary garden, it becomes more and more an open, hackable world thanks to several Free and Open-Source tools and formats. This talk will try to illustrate how rich that world has become when your tinkering, hacking, coding itch starts to scratch... In this talk, Dion (developer of BlenderBIM) and Yorik (developer of FreeCAD, specifically its BIM tools) will use these two applications and try to show some clever tricks that you can do with BIM models, that no proprietary software would dream to achieve.
|
10.5446/52404 (DOI)
|
Hello everyone, my name is Adam and I'd like to tell you about the new features in CadQuery. So let's start with what is CadQuery. It is a Python module for building parametric 3D CAD models using a boundary representation CAD kernel, specifically OpenCascade. Here, to give you a feeling of how one uses CadQuery, you can see an example of CadQuery code generating this mounting block model. There are a couple of things to notice about this code. So first of all, CadQuery allows to define models in a fully parametric way. All the important properties of our model can be parameterized. And second of all, it tries to implement and use all the abstractions available in the current standard CAD software. We are working with a kind of programmatic feature tree in which we define work planes. You know, on those work planes we define 2D entities, in this example a rectangle and a circle. Those entities can be converted to solids by means of, for example, extrusion or other 3D operations. And once those solids are in place, we can address or select certain elements of them to define new features, new work planes and develop our model further. So I hope this gives you a, let's say, a better overview of what CadQuery is and how to use it. It does offer quite extensive modeling capabilities. So on the 2D primitives front, we do support various items, including splines, parametric curves and, since recently, offset curves. We also have 3D primitives available. Those are not the basic, let's say, way or not the designed way of using CadQuery. Obviously, we do support CSG operations. Because of the fact of being a Python module or Python library, not a point-and-click user-interface-based CAD tool, we have selectors. They are the analog of point-and-click selection of model elements in a normal CAD tool. Selectors can be combined logically and also chained for additional flexibility. And since recently, CadQuery supports tagging of elements. This means that we can refer to elements that are deep down in our modeling history, our modeling feature tree. While being a 3D modeler, CadQuery supports a wide range of 3D operations: so extrusions, revolutions, lofting, shelling, fillets, chamfers, sweeps, 3D text and filling. The last one means defining arbitrary surfaces based on their bounding wires. And last but not least, CadQuery has a wide variety of input and output formats, as can be seen here. What is specifically new in CadQuery 2.1? On the infrastructure front, we moved to OpenCascade 7.4 and also included a custom binding for it, OCP, which is using pybind11 under the hood. And nowadays, most of the code base is type-annotated and it's checked using mypy in the CI integration. So this has two advantages. For the developers, it allows to find bugs early on when adding new features. And for the users, it allows easier and faster understanding of what can be expected from certain CadQuery methods or objects. On the functional front, we now support both reading and writing of DXF files. So you can take a DXF sketch from somewhere, import it into CadQuery, build a 3D model based on it, and then slice this model and export the elements again as a DXF file, for, say, presentation or fabrication purposes. As already mentioned, we have added tags and also some additional selectors have been implemented as well. And finally, coming to the main topic of the talk, there is a new class for representing assemblies in CadQuery. And this also includes constraint-based placement of objects in those assemblies.
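To make the workplane-based flow described above concrete, here is a minimal CadQuery sketch in the same spirit as the mounting-block example; the dimensions, selectors and output file name are my own illustration and are not taken from the talk's slides.

```python
import cadquery as cq

# Everything important is a plain Python variable, so the model stays parametric.
length, width, thickness, hole_dia = 80.0, 60.0, 10.0, 22.0

result = (
    cq.Workplane("XY")               # start from a work plane
    .box(length, width, thickness)   # base solid
    .faces(">Z")                     # select the top face...
    .workplane()                     # ...and put a new work plane on it
    .hole(hole_dia)                  # cut a through-hole
    .edges("|Z")                     # select the vertical edges
    .fillet(2.0)                     # and fillet them
)

# One of the many supported output formats.
cq.exporters.export(result, "mounting_block.step")
```

Changing any of the variables at the top and re-running the script regenerates the whole model, which is the point of the fully parametric approach mentioned in the talk.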
So, more on the CadQuery assemblies. Well, it was designed with a few goals in mind: so, to keep it simple and stay lightweight, to be able to denote color and location of the assembly objects, and finally, to allow arbitrary nesting, so that we can define assemblies that have sub-assemblies, and those sub-assemblies can have sub-assemblies and so on. The API is kept consciously simple. So we can add objects to an assembly, we can add constraints to an assembly. Once the constraints are defined, you can request for the assembly to be solved and the object positions to be updated. And once we are happy with the result, we can save it to one of the two supported formats. And note that this API is fluent, so we can call these methods in a chain, but no history is kept: all the modifications, all the additions of objects and constraints happen in place. Here you can see an example, or actually a prototype, of the assembly class. There are a few points to keep in mind. So for a given assembly level, all the names have to be unique, because the names are used to address the assembly objects when adding constraints. Each assembly object can hold a CadQuery object, but that's not required. So if it doesn't hold any object, then it means that it's sort of a grouping entity. And finally, the children are a list of children of the same type. So this implies that we do support nested assemblies. So let's take a look at how to use this class. Let's start with a manual assembly. So in this case, we're trying to model such a tray. This is of course not the full code; the full code can be found online. This is just to give you a sense of how to use manual placement. So we do need to start with the base object. Here, in this case, that's literally the base of our tray. And then, one by one, we can add the children objects, so in this case, the edges of the tray. And when we do so, we can define the location. Locations are relative, so they are specified with respect to the location of the parent. In this case, the parent has no explicit location stated, which means that it has the default location, so the center of the default coordinate system with no additional rotation. Another thing to notice is that we can specify color. The color is inherited, so if the children do not specify color explicitly, they use the color of the parent. So that's why the whole assembly has this yellowish transparent tint. We can also use constraints to place objects in the assembly. In this case, we're trying to model such a door object. This example is taken from the CadQuery documentation, so you can look there for the complete listing of the code. So the main thing to notice is that currently there are three types of constraints supported. First of all, we have the Axis constraint. This means that, given two arguments or two entities for which the constraint is specified, it requires that the representative vectors of those entities are collinear. So when those entities are, say, edges, it's quite self-explanatory: so we are then taking tangents of those edges. If the entities are planes or wires or faces, then we are taking the normals of those objects. The Point constraint works in a similar way. We take the representative points of two objects. In the case of two vertices, it's pretty obvious what it means: so the two vertices are coincident. In the case of more complex objects, the center of mass is taken. So for a line segment, we would take the center of the line segment, or for faces, we would just take the center of mass of the face.
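As a rough sketch of what this looks like in code, here is a small assembly combining a manual, relative location with one constraint; the shapes, names, color and dimensions are invented for illustration and are not the tray or door from the slides.

```python
import cadquery as cq

base = cq.Workplane("XY").box(100, 60, 5)
side = cq.Workplane("XY").box(100, 5, 30)

assy = (
    cq.Assembly()
    # colors are inherited by children that do not set their own
    .add(base, name="base", color=cq.Color(1.0, 1.0, 0.0, 0.5))
    # a manual, parent-relative location also gives the solver a starting point
    .add(side, name="side", loc=cq.Location(cq.Vector(0, 27.5, 17.5)))
)

# Constraint-based placement: entities are addressed as "name@kind@selector".
assy.constrain("base@faces@>Z", "side@faces@<Z", "Plane")
assy.solve()            # run the numerical solver and update object locations
assy.save("tray.step")  # assembly structure and colors are preserved in the STEP file
```

Note that, as the talk says, `add`, `constrain` and so on can be chained fluently but modify the assembly in place.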
Finally, we have a plain constraint, which is compound constraint consisting of an axis and point constraint. It has been introduced to support the most of an occurring use case of centering two faces with respect to each other and aligning them in the same way. So as I already mentioned, when defining constraint, you have to select entities. You can do this by means of string syntax selectors. As seen here, we can also use tags. So if we have defined our selector earlier, we can reference them when defining constraint. If this is not enough, we can always explicitly specify the category shape object for our constraint. But this allows for a lot of freedom, so we can also add objects from that part of our original mold. We can add, for example, the amy points that are going to be used to define the constraint, define placement of the children of our assembly. Here, to, let's say, be realistic, you can see a complete list of all the constraints necessary to define such a door object. As you can see, things can get pretty lengthy, but that is why we are working on adding more constraints or more higher level constraints that in the future this code will be much shorter. But on the other hand, it's a programmatic tool and we're trying to essentially simulate what a point-and-click cut tool would do. So once we are happy with our result, we can export it to one of the two formats, this step and an internexional format of open cascade. Step being the main target and the resulting step factor will follow exactly the structure of our assembly, so all the parent-child relations will be maintained and also all the colors will be saved. Here, you can see an example of that. So we have defined an assembly in the query editor, saved it to disk and downloaded it to freecat. So obviously the colors are the same, but also the structure of our model is kept. The, let's say, internexional, one of the internexional formats of open cascade is meant for possible integrations of categories of other tools using open cascade. So this is the most, let's say, lossless format for serializing open cascade models. And this allows to model another tool and exactly recreate the geometry and other relationships. It's always nice to show some examples of what other users have been doing with categories, so you can, here you can see some. They're quite firing, but the most relevant ones for our talk are the Island One and the Spindle, I see. The Island One is a good example of an animal assembly. It actually consists of, I think, a list of parts. You can find more details on this link. And this Spindle model is an example of a non-trivial constraint-made assembly, and again, you can find all the codes online. Well, regarding future plans for category, we definitely want to move to open cascades 1.5. As already mentioned, some assembly improvements plan, so adding new constraints, more higher-level constraints, but also working on spinning up of the assembly constraint solver. We will select the sketch class and also sketch constraint solver to enable the users to define to the entities in a more natural way. Also planning to expose the GLTF exporting capabilities of open cascades 1.5 and also allow the users to export category models, which keep data structures. 
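A hedged illustration of the tag-based way of selecting constraint entities mentioned above, and of saving to the second supported format; the tag names, shapes and file name are my own and the exact resolution of `"name?tag"` queries should be checked against the CadQuery documentation.

```python
import cadquery as cq

# Tag entities while modeling so they can be referenced later by name.
plate = (
    cq.Workplane("XY")
    .box(40, 40, 4)
    .faces(">Z").tag("top")      # remember the top face under a name
    .end()                       # drop back to the solid
)
post = (
    cq.Workplane("XY")
    .cylinder(30, 5)
    .faces("<Z").tag("bottom")
    .end()
)

assy = (
    cq.Assembly()
    .add(plate, name="plate")
    .add(post, name="post")
    # "name?tag" reuses the tagged entity instead of re-selecting it with a string selector
    .constrain("plate?top", "post?bottom", "Plane")
)
assy.solve()
assy.save("demo.xml")  # native OpenCascade XML; use a .step extension for STEP instead
```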
I had to mention that category and category editor wouldn't be possible without the following open source projects, so to make all the contributors out of them, this list is probably not exhaustive, but definitely those projects have been a stepping stone for in category with this now. So this is all I have for today. Thank you very much for your attention, and let's have a short Q&A on the category. The your origins as a free CAD workbench. So I have to say that the development that we were seeing these days in CAD query is really making it a terribly useful tool to a wide swath of developers who might otherwise not be able to integrate it into their workflows. Have you had contact with other teams? I mean, I know that from the key CAD perspective, we're very keen to work more closely with CAD query. We have, but CAD query is kind of the basis of our 3D model library at the moment. Are there other teams you're in contact with that also are kind of using CAD query on a large scale? So well, on a large scale, that depends on what you mean is large, but the other project that is, let's say, based on CAD query, and it's kind of a separate tool, it's a paramack. So it's a parametric tool for defining decision reactor geometries. Don't ask me any questions about it, because I'm not involved there, but I know they're using CAD query and we're quite actively in contact with each other. So that's, and I also know people using CAD query for different applications, but I haven't seen that many tools. So send along tools that use CAD query as a library. It is one of the use cases definitely, but it's not, let's say it's not utilized with full capability yet, or maybe there are some secret projects. I think yet is probably the keyword there. So as more people discover it, I'm sure that that uptake gets larger and larger. So you mentioned on your last slide, your roadmap, your planning on moving from the OpenCascade technology 7.4 to 7.5 at some point, they released 7.5 just this year. Do you see, what issues do you see with that migration? There is actually not that many. So I think they're quite good. So OpenCascade is quite good on backward compatibility. So there are no major role blocks. So as far as we have tested 7.5 works almost out of the box. So there are some changes regarding the text rendering functionality that are in one way or another breaking pro-CAD query, but I think that's something that can be easily fixed. So let's say one of the next, literally next things we're going to work on, because there are so many goodies to use in 7.5. So I'm certain that's coming very soon. Right. So in your talk, you discussed a little bit about your visualization. And I know that the responsibility of that visualization to parametric changes in the model has been one thing you've kind of worked on in the past. How's that coming? How's the responsibility of that, the, I guess, the turnaround time for complex models to render into visualization? So let's say always a charge. So there are no mechanisms in place at the moment that, let's say, do caching or similar things. So that maybe in the future we can think of something, but for now that's not in. We did some improvements on runtime of Boolean operations, which are usually extremely slow. So they're now parallelized using the native polarization of open cascade technology. Well, and the other thing to keep in mind is if you want quick turnaround, you just need to use low, let's say, distillation quality. 
So just for quick, let's say, preview, don't do fancy meshes. But that's for now what it is. That's good advice. That's good advice for everyone. So Clement was asking a question, is there a reason for the condit dependency or is there the possibility that someone who wanted to use an alternate distribution mechanism, say PIP, would be able to submit that as a PR? So sure, that's something that is coming up very often. So the reason for using conda is that it's saving time for us. So we have all the packages, all the dependencies built. So I don't have to, let's say, worry about building open cascade or I don't have to worry about building something else. And that's not really handled well by PIP. So there is a, let's say technically, I know big projects can be distributed through this channel. So I noticed that VTK has something in place in PIP. So, but to my understanding, there's a lot of work. So hence, let's say, the core team is not focusing on that. But if someone wants to, let's say, contribute in that respect, sure, why not? That's, it's a modern welcome. If it doesn't get in the way. Yeah, well, indeed. But I don't think that it would actually, but it might be quite complicated to my understanding. So it's not like a simple tool without many dependencies. Right. And that makes sense. Are there additional graphical challenges that you're addressing in the architecture of CAD query or do you find a number of projects have brought up that their engine migration is really, we're at this moment in the industry where a lot of people are moving away from OpenGL and to these other standards as kind of their long-term plan. Is that on CAD queries radar as well? So not explicitly, to be honest. So, so as far as I'm concerned, the strategy would be to just use what OpenGascape is providing. And I think they will, sooner or later, let's say catch up with all the new developments, right? So we have this N5, for example, GLTF exports. And I assume, as new standards are invented and introduced, they have to catch up. So let's say. And they talked about that as well in their talk. They're planning on doing some Vulcan work and similar in the future. So you're saying as they bring that in, you're the beneficiary of that as a downstream consumer as well. Yeah, absolutely. So let's say, I think it's really important to prioritize. So you have limited time, right? And you know what you can do and you know what you can outsource. And I'm really, let's say, happy if I can outsource, you know, of course, things to other projects. So for example, right, depends on OpenGascape as much as possible, or depend on great projects like easy DXF or DXF import and so on, right? Let's say developing everything yourself. If you have a really small core team, well, it's not really, it's not very sustainable. And I think there are other people that are much more skilled.
|
CadQuery (CQ) [1] is a Python library for building of parametric 3D models. The overarching design goal is to provide a fluent API that enables the user to express the design in a natural way. CQ is based on the open source CAD kernel from OpenCascade [2] and therefore offers industry standard B-Rep modeling capabilities and allows exporting to STEP and many other formats. With the upcoming 2.1 release [3] there many improvements coming to CQ. I will briefly summarize them but will focus on the new assembly system. The new CQ version allows the user to combine individual CQ objects into an assembly with the possibility of nesting. The individual object positions can be specified manually in terms of constraints that are solved using a numerical solver. Once an assembly is defined and all the positions specified it can be exported to STEP preserving the assembly structure or an internal OpenCascade XML format. In the I will discuss the current assembly system design, capabilities, limitations and possible future development directions.
|
10.5446/52405 (DOI)
|
Hello, my name is Curellario Florin, aka Oficinae Robotica on the forum, and I run a small YouTube channel where I try to promote and showcase FreeCAD as software and also bring news about the ever increasing number of new features that its developers bring at an ever increasing pace. Together with me is Zheng Lei, aka Realthunder on the forum, one of the developers that is pushing FreeCAD to new limits and with whom I had the pleasure to interact a lot in a really constructive and positive manner. In fact, the idea for this talk came from taking a look back at the awesome outcome that this positive user-developer interaction can bring in an open source development environment. Hello, Realthunder, would you care to present yourself? Hi everyone, my name is Zheng Lei. It's difficult to pronounce for Westerners, so you can just call me by my forum name. Today I'll first introduce my FreeCAD Link branch. I will talk about the past, the origin of this branch, and then present you the new features, and we'll talk a little about the future development plans. This is my name. First, the past. I first used the FreeCAD 0.13 version back in 2014 and I wanted to build my own device. Back in the day, Google Glass had reached its peak impact but then it went downhill, so I thought maybe I can do a better job. So as you can see, that's my device at the top; it's shaped like a headphone and has a head-mounted display. I used the Assembly2 workbench to assemble my device. I found it works well for the assembling purpose, but then it has some problems when you try to update your design. I first made some contributions to FreeCAD about the GUI tree view improvement, then did a little something on the Path workbench, mostly because I wanted to do the PCB milling. Then came the idea of improving the assembly capability of FreeCAD, so I first made some discussion in the forum about the idea of a Link, which is an easier way to share both the geometry and the visual of the same part. So I went out and implemented the Link in the same year. Because Link is a complex feature that involves a lot of changes in the core, it was pretty difficult to get accepted. So I created the Assembly3 workbench as a demo and also as a test of my Link feature. I attempted the pull requests both in 2017 and 2018, but as expected both PRs got pending, but I continued to work on the assemblies. My Assembly3 allows using any geometry for constraining, as well as auxiliary objects like datums and sketches. I created the sketch export functionality to allow the user to export the sketch edges including the construction lines, but the export was not very stable because of the infamous topological naming problem in FreeCAD. And so, like the famous quote, I came, I saw, and I conquered it: I implemented the topological naming framework. Yeah, it's a joyful ride. Later on I refactored the STEP import and export, mostly using my Link feature, to greatly improve its efficiency. It has enabled FreeCAD to import some large assembly files, which previously was not possible. The next feature to get linkified was the expression engine together with the Spreadsheet workbench. I didn't just add the support of a link into the expression; I expanded it into a full-fledged language with a Python-like syntax. Then came 2019. My Link feature finally got merged into FreeCAD 0.19. It's the third attempt. That's why my branch is called LinkStage3, because it's the third stage. The third time is the charm. Oh yes.
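For readers who want to poke at the Link feature described above from FreeCAD's Python console, a minimal sketch might look like the following; this assumes FreeCAD 0.19 or the LinkStage3 branch, the object names are arbitrary, and the `setLink` call should be checked against the App::Link scripting documentation.

```python
import FreeCAD as App

doc = App.newDocument("LinkDemo")
box = doc.addObject("Part::Box", "Box")        # some geometry to be shared

link = doc.addObject("App::Link", "BoxLink")   # a Link carries no geometry of its own
link.setLink(box)                              # it follows the linked object for its data
link.Placement = App.Placement(App.Vector(20, 0, 0), App.Rotation())  # placed independently

doc.recompute()
```

Editing the original box afterwards updates every link pointing at it, which is the "share both the geometry and the visual of the same part" idea in one line.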
Just briefly why were there so many difficulties in merging the pure request because the changes were really extensive in the core? Yes because the core has to recognize that the link is a special object. It does not have its own geometry data or visual data. It has to follow the link to get actual data. So the changes are extensive and the other developer has to review it. The merging attempt took a lot of effort. My work continues. I didn't stop here because as you know as a normal user I struggled a lot with FreakEd just like everyone else. Then I suddenly become a master sort of. So you know the feeling is feeling very empowering. So yeah I continued. I added the possibility to save the FreakEd document as uncompressed directory with the files that are suitable for external version control software like Git. So maybe the next step will be someone to implement the version control functionality into the FreakEd. Was this a user request or was this one of your needs? Yes I think both because we all know that for CAD projects especially there's simply involves a lot of files. So you have to have some way to manage the different versions. Yeah I do use Git before even I started to contribute with FreakEd. But then the FreakEd file format is binary. It's not really friendly for the version control. So this is a direction for future development. So this could be also really useful for people that work in teams? Yes simply. I then spend a lot of time to improve the 3D selection and rendering work which I will be showing you in the later videos. I also started to improve the part design work branch focusing on a better support of multiple solid. Now it's the present. It's mostly my 2020 work. I started to put more time to improve the FreakEd user interface. The first one I tackled is the overlay user interface. I've tried the glass add-on that's developed by Trip Plus, another FreakEd contributor. And I liked it a lot. So I went on to add this type of interface into the FreakEd. I think it was around this time that we started talking in the forum if I'm not mistaken. And one of my first requests as a simple user was the outline of the text in the review. Yes, yes indeed. Actually I saw your video about configuring Trip Plus glass workbench and then I started to implement this. And because it's a user interface it really requires the interaction with the user to better develop the feature because it's for the user directly. Yeah and what I can say here is that it was really nice collaborating with you about this because you were... how can I say it? You were really collaborative and you listened not only to my request but I think there was some other user that interacted with you in that period. What the overlay user interface does is it can display those view panels on top of the 3D view and you can drag it around to resize it. Yeah and as you can see in the video that the overlay is truly transparent close to the visual and the mouse clicking. And you also have those auto hiding of the panels. What the overlay interface does for the simple user is really extending the canvas, the 3D view offering... giving the impression of a really wider work area although the physical space occupied by the various interface elements is always the same as previous. Yeah but because of the auto hiding and the click through the workspace is actually extended a bit yet. Next I worked on the pie menu which is another add-on provided by Trip Plus. 
What the pie menu provided is not only the visual improvement of the menus it's actually the most important pie for the user to create its own customized menus and they can be brought out by keyboard shortcut. It was always nice interacting with RealTunder because I remember when he first introduced the pies I asked for if I'm not mistaken for way of adding workbenches as shortcuts to pie and other than adding that possibility he added on top the animations. Every time he added one feature on top of the requested one. Yeah because I normally took the user advice I also got inspiration from one of those advices. One feature that not a lot of users know about is the quick search. Oh yeah what the video also shows is that because it's an aid for the user to create its own menu because it has to know what options are what commands are so you can simply type in the command titles and it will show you the various commands. When we were talking previously about merging the various features that you coded for FreeCAD you said that essentially the overlay user interface and pie menu are self-contained as code bases so this should be easier in a way to do the merge. Yes I like the link feature and also the topological naming which the code is spreaded everywhere the the GUI features mostly concentrated on a few plays. Next is the 3D rendering selection. What I implemented is a feature called selection on top. It means that when you select an object or be displayed on top of the others it will be shown transparent. You can easily select the hidden edges by just clicking and also the hidden faces using the mouse wheel. Together with the pie menu you can also select the higher level of geometries like wires, solid, and come off. Yeah you have introduced the UU shortcut menu if I remember correctly and the GG shortcut for selecting the hierarchy of the model also. Yes but you used the hierarchy select. Yeah sorry. It means geometry. I used them all the time. It's a muscle memory. I didn't remember the letter. Yes. Next is the shadow feature. I searched around the shadow rendering is quite common in those commercial CAD software. I saw that that's pretty cool and I'd like to have it in free CAD so I searched around the forum. I saw the post made by URIK and he has already made a somewhat working solution using the existing functionality provided by the Coins 3D rendering library of free CAD but it has some problems so I dig into the source code and saw that the shadow function of Coins 3D is only half finished. So again I saw it, I came, I made it come somewhat complete and as you can see in this video it's a simple scene but it shows pretty much all the functionality of the shadow rendering that I brought to my branch. You can see all those shadows of the transparent object also the opaque one. This is the spotlight, previously it's the directional light. You can move the lights around. From a user perspective shadow is really important because it gives a sense of depth when dealing with complex models or assemblies and to be honest since I started using it because we still don't have some sort of ambient occlusion in free CAD. The shadow is the next big thing and what I like about it is that it seems that performance of the 3D view isn't really that affected by adding shadow to a normal scene. This is actually a feature I developed early right at the beginning of 2020 but then I want to show you the feature with those three enhancement that's why I put it here. 
What the variant link means is that the user can create a link or a binder as shown in the video and then change the configuration of the object to have a different shape than the original object and the user can create those configurations using spreadsheet. As you can see in the video you create a binder with this hexagon then you can change it into a pentagon. It's actually built on top of the expression and spreadsheet so anything that can be parametricized by the expression or spreadsheet can be used in a different configuration. Next is the new features, new improvements I made for the part design. Most of those features are built on top of my topological naming framework. The first is the suppress. You can suppress any individual features inside of a body and thanks to the topological naming the new naming or the later features won't be affected and you will get the correct result. Next is the preview feature. The upstream freecat already have some preview mode for the primitive shapes like cube or cylinder. Extended to include more features like you've seen in the videos, the fillet you can easily select the fillet and the preview will show you the result. We should talk about this also. You usually sync your branch with freecat master roughly once a month. Here we are at 2021 and welcome to the future. The most important task of 2021 at least to me is to merge some of my features into the freecat 0.20 version. The first of course will be to merge the topological naming and then the free features like overlay and pie menu which is expected to be easier because it's self-contained and then followed by the part design improvements which come naturally after the topological naming framework is in place. Then there will be the 3D rendering and selection. It's also a popular design especially the 3D selection. The next is the expression. I expected to need a further discussion because of potential security concerns like you had to have some access control on how the code is run when you open the document. Finally about the shadow because most of my code of the shadow is inside the coin 3D library. It's not inside the freecat. The merge of this functionality will be talked elsewhere. Also regarding the rendering I have other big plans. I have already done some preliminary work to prepare for a major upgrade of freecat 3D renderer. I'll be still using the coin 3D library for graphical scene building and ensure the back work compatibility of the existing code. But I will be replacing the rendering part, the graphical rendering part of the 3D library using some other lower level but more than graphics libraries. Basically it's not about whether we should use an external library or not because freecat is a synergy of various external libraries. The problem is that the choice of the library, the coin 3D has unfortunately stopped developing at the dawn of modern graphics area. So it's when the modern graphics is progressing heavily the coin 3D library stopped developing. Actually it's right where they half finished the shadow feature then for some reason the developing stopped. So we have to in the future sooner or later choose another rendering library. So I thought that I studied the code of the coin 3D and found out that it's seen the graphical scene related code is works quite well. It's still relevant today but I can actually only upgrade its rendering part. So that's my choice. Instead of using a completely new library which involves extensive change of existing code. 
Yeah those are the plans. Upgrade will enable more advanced rendering using the modern graphics card like instance rendering which is a more efficient way of rendering basically the same geometry at different places. And also the hidden line view for example which will be benefit to work benches like tech draw. What the hidden line view means is showing this image the pipe the outline of the pipe does not actually physically exist. What the tech draw is doing now is to use the opencast cape the projection API to generate those those outlines geometrically. It is slow and sometimes it will fail. What the new render will do is to generate those outlines in real time when it is rendering so you can freely review and still see the outline. Another example will be the section view as you can see in the image those hatches can be generated in real time also and they will certainly benefit the tech draw. The section hatch is also beneficial when working with assemblies to visually check for collisions inside the model. Also some other features to improve large scale assembly rendering like the level of details that the renderer can choose a different resolution of parts to render the parts depending on their distance. And also hardware occlusion to hide the to skip the parts that are occluded or too far away. All those features are talked required the shader based rendering so once we have established a framework for the shaders some other better visual enhancement will come naturally like the better shader ambient occlusion we talked about and also a more realistic looking materials and proper texture mapping as such. The lower level of modern graphics library I mentioned there are actually several choice the first choice is a library called the Dillusion Graphics it's a relatively young library but it has quite a it follows the development of modern graphics card quite closely like just a few days now they release the api for the ray tracing. Oh that's interesting yes they they have support of different graphics interface on different platforms like Vulkan, Linux and Windows will be the RX 11 or 12 and then macOS although it's not directly supporting its metal interface but there will be a bridge interface called molten vk yeah the Dillusion will be using that for supporting metal. Yeah the other alternative is BGFX, BGFX I don't know how it's pronounced yet similar to Dillusion graphics but relatively more mature you can say and has a larger community rich tutorials and examples we'll see which one is more suitable for using free cap. Yeah how I can see it from your point of view BGFX perhaps has more documentation it makes your life easier but it has a little less how do we call it features respect to Dillusion graphics. Yes you can do that and also I search around it seems that the BGFX has some difficulties in use in QTE the the free framework that is in free cap. This was a discussion about the past and present of the Linkstage free brain development branch and most of all a discussion about the possible exciting future of free CAD. Our talk was a lot longer and we talked about a ton of other stuff that we didn't actually manage to shrink into the available time. 
I stepped out of the way during the talk giving gold space to real thunder as he is the star of the show this is actually a prime example of the positive interaction between user and developer and this kind of synergy has given fruits during all the time that we collaborated and as a personal advice I'll show you here a framework for user developer interaction source of course Twitter and always remember that developers tend to function better when they receive cookies. What I have let out from the presentation of course many of the new part design tools like split extrude generic pattern wrap feature and others but don't worry the full talk will be released on my youtube channel so everyone can enjoy the full presentation of features from real thunder. As a closing word I would like to say that the preferred channels of interaction are the github wiki for bug reporting but for future requests or discussions real thunder prefers the free cat forum as it is a more suitable place for mind storming out our feedback. Thank you all for your attention and happy freekilling!
|
A discussion about the positive user developer interaction in an open source development environment. -presentation of the LinkStage3 dev. branch of freecad -short summary of differences between LinkStage3 and master -short presentation of the most exciting new features introduced in this branch -how the future might look for FreeCad and how to make that future a reality as far as merging those features in master
|
10.5446/52412 (DOI)
|
Thanks for being there. So what is finite elements? It's the thing when you have your CAD file and you want to see the mechanical deflection of whatever CAD thing you design or a motor you just designed. You want to see the torque that is created by this motor or a pipe where you want to see the water throughput in it. You want to simulate this with finite elements. And this is basically if you have access to a library you could just connect it to whatever CAD software and it would compute the physics and output data that you could again show in the CAD environment. And so it would solve physics problem and predict what's going to happen like motor torque for example. And this is exactly what Sparse Lizard does. So some history it's quite a young finite element library in terms of a usual finite elements library are like 20, 30 years old commercial software. Ansys is probably 40 or more years old and so they have a lot of history and it's good or bad depends. But Sparse Lizard is quite young which doesn't make it not robust or not mature. So it all started at University of Leage not so far from here. During my PhD I wrote a MATLAB finite element code which had already a lot of all the features that Sparse Lizard now has. And this was like you can see it as a draft. And from this on 2017 I rewrote everything from scratch in C++. And so everything is really nicely integrated together. It's not something that was doing just a few things and then it was patched and patched and hacked around to add things more and more over time. It's really a lot of features that are there from the very beginning and which make it actually quite nice and monolithic and where there is no need to hack code around. And then from 2018 to 2019 I worked at IMIC in the Anno Electronics Research Center. Some colleagues are here and I used it basically. I used the software to design micro-electromechanical systems quite a wide range of them and they were fabricated so there is actually some industrial background in the way it was written because it was written on the side of this work. And then for the coming four years there is a grant thanks to the Academy of Finland and thanks for Academy of Finland bringing me here. It will be developed for four years at Thampere University with a slight different focus on particle accelerator magnet design and collaboration with CERN. So four years of basically full-time development already paid and guaranteed. So there are lots of finite element softwares and you could think of why there would be another one to add. From my point of view there are lots of things that just are specific to every software and everything but I didn't find what I expected in finite element softwares because they are always missing something and they tend to be usually not easy to use. Now here we provide a very large set of proven capabilities of a lot of different physics which you can very easily combine because this is the purpose of it from the very beginning to work with highly multi-physics simulations. You can also have a lot of extra finite elements, things like mortar finite elements which works very nicely for electric motors and all that is very concise and user-friendly. We'll see that even though that it's a C++ library we'll see that again later and it's carefully validated and debugged. I spend a lot of time on validating and so far as far as I know there is no bug that I'm aware of that is still there. 
If you want to find some it doesn't mean there are no bugs but if I know of a bug I will remove it. It's also clearly documented and quite efficient. You can run it on 32 cores and get a nice speedup and it's very rapidly expanding. All the examples that I will show half of them have been added last year. So let's first start about what it's able to do. So this is half of the examples so you have fluids, magnetics, electricity, mechanics, rotating machines, acoustics, thermal, simulation. They all have demonstrated example online and I'm not hiding anything. You can just click on this button and you will see the example and it will always fit in 10, 20 or at most 50 lines of code which are actually extremely readable. You can simulate highly multi-phasing things like the thermal acoustic simulation in a deformable cavity. This includes pressure thermal, pressure thermal mechanics and acoustics. All that combined in one simulation or the fluid couple pyazoo-actuated MEMS. This was actually fabricated at IMIC and this includes pressure, pyazoo and mechanics. All this combined very nicely. So all these examples are validated and there are some more. You have additional features like on the top left where you can work with non-matching meshes just very easily. And what you can simulate is of course transient simulations, harmonic simulations, eigenmodes but also something specific to sparse lizard. You can simulate harmonic and harmonic domain things that are non-linear which commercial softwares cannot do at all as far as I know. So if you have a non-linear problem you want to know how all the harmonics will appear. This is very, very straightforward in sparse lizard because it was really at the core of the initial MATLAB, a FIM code that I started with. There are lots of predefined physics as well. For example on the bottom right, advection diffusion is very well known that if you have advection dominated advection diffusion problems you start having instabilities and for that there are five different schemes of stabilization that are predefined and checked that you can readily use in just one line. Now advanced things that are available. So as I said there is native support of the so-called harmonic balance finite element which allows to do non-linear harmonic analysis. There is also a fast 3D very general unstructured mesh-to-mesh interpolation algorithm. It scales very nicely linearly to up to 100 millionths of elements. You have general 3D mortar finite elements. So on the previous slides here you have an example of an electric motor and the rotor in state you really want to link them with mortar. This is how it's done commercially because otherwise you don't have the freedom to choose the mesh at the interface. You have some constraints. Here you are totally free to do it. It works in 3D. No limitation. You have since, well basically I started writing this a month ago and it will be available next week through P-adaptivity so you can change the interpolation order on every element in the mesh which means only on the elements that actually require to have more computation done will you perform more computation. So as an example I have a short video that's not going to show there. 
So you have the electric motor simulation and so you have the induction field, magnetic induction field on the left and then on the right you have the interpolation order that is the best to actually solve this as accurately as possible with as few degrees of freedom as possible and you see red is the place where you have the highest interpolation order which is 4 here and this corresponds to the flux concentrations which actually need to be accurately solved for. And so as you rotate the rotor it will automatically adapt. This is just two lines of code to change. There is really nothing difficult to that. But probably in other softwares you might get in trouble if you actually want to use it. Now you also have extra things that you expect to have in finite element codes. Maybe a file format which in this case happens to be compared to VTK Paraview format. If you run a fluid flow simulation in time you need to store 500 time steps. You might need 300 gigabytes of data. Here you will just need 30 gigabytes of data. It's like 10 times more compact than VTK for example and I don't believe it's possible to make it more compact because it just stores raw data and you can just easily reload it and that's the nice thing about it. It's not just dumping data and loading again. You can easily reload it later even if you have no idea of how the simulation was done. You have one line of probes so if you want to know the value and one specific position you have the interpolate function, you have maxes, averages, integrals, whatever this is very straightforward to use. You have Paraview Output format because Paraview to me is the best way to visualize simulation data and then you have Gmesh and Nastron mesh input format and lots of more input formats via Petsy that allows to load other mesh format and Gmesh as well which you probably know a lot. You can also have curved meshes. So quite a lot of extra features and it's very growing so probably you will see some mesh refinements coming in the next month because this is currently what I need for the superconducting magnet simulations. Now it's con size and user friendly. You don't need advanced knowledge of C++. It is C++ so you can easily link it but all the pointers and stuff are hidden. You don't have to work with a memory. There's no hack. It's highly readable. So as an example, you don't need to know the equations but if you want to run a 3D electrostatic simulation, this is what you would need. This is a working example. Nine lines of code. Two-third of it which is just comments. Hard to beat I think. It's object oriented programming. Yeah, just have a look at the examples online. Basically they might be just three times longer but with 20 lines of code you can run a full 3D fluid flow simulation for example. Now it's documented and not just automatically documented. It's really I spent a lot of time to document it. So every function you're supposed to use comes with as detailed description as possible and also a working example like this one where you can just copy paste it and then work or play around with a function to see what it actually does and what the specific things are about it. This is valid for every function and whenever I add a function I add it immediately to the documentation and it's available on GitHub for free. It's JPL, open source of course and if there's anyone who develops a CAD engine, I would be definitely happy to have some interaction to include it. Thanks. So questions, please. 
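As a point of reference for the nine-line electrostatic example mentioned above, and without quoting the sparselizard documentation itself, such a simulation boils down to solving a weak form along these lines (standard textbook material):

```latex
\text{find } v \text{ such that } \int_{\Omega} \varepsilon \,\nabla v \cdot \nabla w \; d\Omega = 0
\quad \text{for all test functions } w,
```

with the potential $v$ fixed on the electrode boundaries and the electric field recovered as $\mathbf{E} = -\nabla v$; the point of the example is that those nine lines express exactly this, mesh loading and output included.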
Can you also do FEM on 2D or 1D differential equations? Can I do finite elements on 2D or 1D equations? Yeah, so 3D, 2D axisymmetric, 2D and 1D. Yeah. Yeah. I think I'm going to use to explain the other mistake but I'm doing thermal analysis. It's quite easy to get the heat and that's way outwards from some of these tools. Yeah. Where you actually want to use an area, summation. Integral. An integral. Can you do it? Oh, you want to have like an average value for a thermal problem. You want to know the average temperature or something. If you're trying to measure heat flow, you can get the queen of heat flow because that's something unstable you want to say. I agree. So if you want to integrate the heat flow through a surface, it's one line. So you have access to the normal. You multiply by the heat flow. You dot integrate. You see on which region. What integration order and you're done. You have a double value out. So topology, I don't myself but I do have support topology optimization. I don't myself but a former colleague managed to do top mechanical topology optimization. There is an example online but no example button to click on because it's it's his software. But yeah, you definitely can because he did to the bridge topology optimization in mechanics. Yeah, he did it so I can confirm it's possible. I can. So if for the what I heard, am I limited to like conduction or and can I do other things of thermal analysis like other problems like convection, radiation and heat. There's an example online for what conduction you can also have. There is also an example for natural convection. So that works as well quite easily, especially now with the stabilization methods added and radiation. I haven't tried it. So of course, you can probably find out the equations that correspond to how much you lose but you cannot take into account the fact that you radiate on another phase for the moment, although maybe that particle tracing is being added. Maybe it can somehow do that but radiation would be the only thing that that's missing. Yeah. Yes. So you're interested in including turbulence and the and the simulation for the thermal convection. General fluid. So I actually that's funny because one one month ago I thought what am I going to do next and I thought I'm going to add a turbulence model for fluid flow model with a Spalart almorous and everything is ready to to add it. It's just that I thought it's too specific. I don't want to write something that is just for comparison. In compressible fluid in this specific case. I think for the moment the user will have to write it in himself but Spalart almorous at least is it's easy to add and the only thing you need in this case is to know the distance to the wall and this if you need help for that I know how to do. Yes. If you want to add this different set of shape functions yourself. I think it's quite easy so you have like a folder where all the h girl shape functions all the h1 shape functions you could just create another one and it also it also it this all is quite readable because it's called a polynomial function where you can just make products of polynomes which you first define and based on what is already there I think it's really doable for a user. 
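For the heat-flow question answered above, the one-line surface integral being described is just the conductive flux through a surface $S$, with $k$ the thermal conductivity and $\mathbf{n}$ the surface normal (written here for reference, not as sparselizard syntax):

```latex
Q \;=\; \int_{S} \left(-k\,\nabla T\right)\cdot \mathbf{n}\; dS
```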
What I didn't fully understand: is my stiffness matrix generated explicitly, or built up on the fly? The way it's built, I call a function that creates a mat object which includes all the terms of the stiffness matrix, and this is created in basically the core of sparselizard, a function that calls everything that needs to be called and assembles very efficiently all the terms of the stiffness matrix. As for problem size: yeah, you can, I tried with up to 50 million degrees of freedom in 2D and 5 million degrees of freedom for fluid flow in 3D, of course on bigger machines, not like supercomputers but more a 32-core machine with 700 gigabytes of RAM, and yeah, definitely this is doable. The only limit for now is that I call PETSc to call MUMPS, because MUMPS is efficient at solving the algebraic problem, and for now PETSc doesn't call the new version of MUMPS, and so it's limited to a number of non-zeros in the matrix that is less than about one or two billion, which limits 2D problems to about 50 million unknowns. As for the refinement criterion, this is not in the documentation yet, it will be released on Wednesday, but it will be up to the user to choose, because that gives you the most flexibility. For example, in the motor example you have a vector potential, and then you could just say that you check the norm of the gradient of the Z component of the vector potential, basically to see where things are sharper, and where it's sharper you have like corners and stuff in there, and that's where you want to refine more. But it's up to you to choose with whatever expression, because you can build any expression easily in sparselizard to choose it. Thank you.
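To make that last answer a bit more tangible, here is a purely illustrative sketch of such a user-chosen refinement indicator. The field name a for the magnetic vector potential is an assumption, and the call that actually hands the expression to the adaptivity routine is deliberately left out, since that interface was not yet documented at the time of the talk.

    // Sketch only: the error indicator the speaker describes, i.e. the norm of
    // the gradient of the z component of the magnetic vector potential field 'a'.
    // Passing it to the (then unreleased) p-adaptivity routine is omitted here.
    expression criterion = norm(grad(compz(a)));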
|
Presentation of the new features in sparselizard 202012. They include adaptive mesh refinement, interpolation order adaptivity (hpFEM), time-adaptivity, speedups, syntax optimization, link to gmsh, move to cmake and a large number of added functions.
|
10.5446/51979 (DOI)
|
Thanks for the opportunity to speak to you today. My main goal in this presentation is to report on a digital visualization project, Mapping Ancient Texts, or MAT for short. MAT's various outcomes include visualizations for use in research, as well as in class projects and independent work conducted by undergraduates. In my talk today, I will focus on the latter two pedagogical outcomes. All the visualizations that I'll discuss are available on the project website, mappingancienttexts.net. MAT receives support from Kenyon College's Center for Innovative Pedagogy and the Department of Classics, with a grant from the Great Lakes Colleges Association providing critical support in the early stages of the project. Other current and former members of the team are Joe Murphy from the Center for Innovative Pedagogy and undergraduate researchers Natalie Ayres, Haley Gabrielle, and Daniel Oliveire. My interest in digital maps and other sorts of visualizations stems from my non-digital research into the rhetoric and representation of travel and geographical space in classical literature. Roman imperialism both made possible and depended upon networks of trade, travel, and transport that traversed the Mediterranean basin and beyond. A major vein of my research explores some of the ways that Roman poetry reflects and responds to Roman mobilities that span from the Atlantic to the Indian subcontinent. Research of this sort, as well as MAT, is inspired more broadly by the so-called spatial turn and mobility turn that have occurred across cultural studies. The spatial turn approaches space not as an objective, inert dimension in which activity occurs but as socially and politically constructed and, in turn, a factor in the construction of individuals, societies, and politics. The mobility turn emphasizes the movement of humans and things through the places and spaces that the spatial turn brings to the forefront. Along with the digital turn, it is this emphasis on spatiality and mobility that provides the theoretical underpinnings for MAT. These are exciting times to be asking and answering questions about ancient Mediterranean geography and travel. The publication in 2000 of the Barrington Atlas brought a new level of precision to our understanding of where things were in the ancient world. The Barrington Atlas in turn provided a foundation for a variety of digital projects that complement and build upon it, projects that provide both data and conceptual inspiration for MAT. I've listed some of my favorites here on this slide. Within this context, our project has the goal of creating queryable geospatial interfaces capable of visualizing multiple travel narratives simultaneously. And I emphasize the travel in travel narrative. One thing that distinguishes this project is that it attempts to visualize movement between places. Our first visualization for MAT charted all references in classical literature to the port of Cassiopeia on the Ionian island of Corcyra. The visualization allows the user to view and read multiple travel narratives mapped onto the places that the narratives describe. Users can also filter them based on criteria such as language, author, or, as seen in this screenshot, genre. The Cassiopeia visualization provided a proof of concept using a limited data set. It also helped support my contention that a reference to Cassiopeia in Propertius 1.17 ought to be read as a reference to the port, contrary to how some critics have interpreted it.
My presentation today, however, focuses on neither Cassiopeia nor Propertius. Instead it looks at pedagogical applications of the sort of digital cartography that we're doing. I shall focus primarily on the results from a Kenyon College course in which students used the web application CARTO to create visualizations from geospatial information in Cicero's letters. In the final part of the talk, I look at a student researcher's development of a digital visualization of Hannibal's movements during the Second Punic War. This paper explores how these projects can teach important technical skills as part of a classics course and engage students in detailed analysis of Roman mobility and history. I also discuss some challenges we faced in using evolving technologies in the undergraduate classroom. The Mapping Cicero's Letters project approaches Cicero's epistolary correspondence as travel texts insofar as they move, or purport to move, from one author to an addressee and frequently make reference to the journeys that Cicero and his correspondents undertook. The data for Mapping Cicero's Letters was created by Kenyon students enrolled in a course on ancient travel, geography, and ethnography in 2016, 2018, and 2020. The project is co-taught by Joe Murphy and me. This was the only digital project that the students undertook in the course, although they were introduced to some other digital resources over the course of the term. The course had no prerequisites, and all readings for the project and for other assignments were in English. Students ranged from first-years who had never taken a classics course to senior classics majors. 57 students have taken part in the project over these three iterations. The end goal of Mapping Cicero's Letters was to have students working in groups of two create original tabular data sets and digital visualizations based on the geospatial information in epistles from the corpus of Cicero's letters and to contribute to a collective data set. The project's learning aims were increased facility with tabular data, data visualization, ancient Mediterranean geography and travel, and late Roman Republican history. In 2016 and 2018, for this project, we moved from a normal classroom to a computer lab where we conducted a mix of instructional tutorials to teach the skills required to complete the project and in-class independent project time where students worked on data creation and visualization while we were available to answer questions. In 2020, we used a mix of remote synchronous and asynchronous instruction as well as independent work time and drop-in virtual office hours. To complete the project, students learned and made use of a variety of skills ranging from traditional close reading of texts to the database language SQL. At the beginning of the project, students were paired off into groups of two. Next they were introduced to CARTO, a GIS-based web mapping application. CARTO has a fairly easy learning curve for simple visualizations and it offers a variety of useful tools out of the box. Through a hands-on activity where the students mapped their own journeys from home to college, my colleague Joe Murphy introduced the students to the concept of map layers, for example a points layer and a lines layer. In this case, they created two points, one representing where they considered home and one for the location of Kenyon. They then created a line, what we term a journey segment, linking the two in order to represent their journey from home to Kenyon.
They also learned about the relationship between a layer and its underlying data set and how points, lines, and polygons function on a map in CARTO. Students were also introduced to how to use CARTO's CSS interface to customize the appearance of their maps. And so we can see on this slide Joe, who's originally from the DC area, linking his two points to Kenyon. Students, having learned about Cicero earlier in the course, were introduced to his letters, and each group of two was assigned a set of 20 for their visualizations. Groups then read their assigned letters, identifying information about the location of author and addressee. They also researched historical context, making use of secondary sources including Shackleton Bailey's commentaries. In addition, they were asked to make note of any references to travel within their letters. Once they'd analyzed their letters, the students began the process of transforming the information into tabular data in Excel, following templates that we provided. You get a sense of the templates here, and note that we make use of Pleiades URIs as well as their coordinates to locate every point. The resulting visualizations represent a letter as two points, one for the author, one for the recipient. As in their introductory CARTO exercise, these two points were linked by a journey segment line, in this case representing the journey of the letter from author to recipient. And that's what you can see here: this is a completed map, with two points linked by a journey segment. Lines and points are treated as different layers in CARTO, so students were required to make two different data files, as you can see on this slide. Students next imported their tabular data into CARTO, where the data required customization and manipulation through the application of two SQL statements. In the students' data, every reference to, for example, Rome has the exact same coordinates because they pulled it from Pleiades. CARTO renders multiple points with the same coordinates as if they were a single point. To address this issue, students learned to apply two SQL statements, one that scatters identical points so that they no longer overlap but continue to be in the general vicinity of the location to which they refer. That's the process that's represented on this slide. Applying a second SQL statement linked scattered points with their corresponding journey segments. Applying the second SQL statement also required the students to create two additional tabular data files. Students were also able to use CARTO's CSS interface to customize the appearance of their visualizations further. At the conclusion of the project, each group presented their visualization in class. As part of the presentation, they were also required to have researched three places mentioned in the letters and to offer an in-depth analysis of one of their assigned letters. Groups also contributed their data to a combined data set, which allows us to create visualizations of the entire class's data. As you see on this slide, in example A on the left, we formatted the data in the same manner as the students' visualizations. Example B makes use of CARTO's cluster feature to show the frequency with which places are mentioned. Example C uses CARTO's Torque feature, where the points appear and disappear to reflect the chronology in which the letters were written. In both 2016 and 2018, all groups succeeded in creating visualizations, with quality ranging from exemplary to marginal.
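To give a concrete sense of what the two SQL statements described above can look like, here is a minimal sketch in the spirit of CARTO's PostGIS-backed tables. The table and column names (cicero_points, cicero_lines, letter_id, role, the_geom) and the jitter distance are illustrative assumptions, not the project's actual code.

    -- Sketch 1: nudge each point by a small random offset so that multiple letters
    -- referencing the same Pleiades place no longer render as a single dot.
    UPDATE cicero_points
    SET the_geom = ST_Translate(
            the_geom,
            (random() - 0.5) * 0.05,   -- small random longitude offset (degrees)
            (random() - 0.5) * 0.05    -- small random latitude offset (degrees)
        );

    -- Sketch 2: rebuild each journey segment as a line between the now-scattered
    -- author and recipient points, matched on an assumed letter_id key.
    UPDATE cicero_lines AS l
    SET the_geom = ST_MakeLine(p1.the_geom, p2.the_geom)
    FROM cicero_points AS p1, cicero_points AS p2
    WHERE p1.letter_id = l.letter_id AND p1.role = 'author'
      AND p2.letter_id = l.letter_id AND p2.role = 'recipient';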
In 2020, each member of a group was responsible for creating their own visualization of half of the assigned letters, because they were working remotely. One student did not complete the project. On the exemplary end, some students discovered additional features on their own, for instance labeled journey segments. On the other side of the spectrum, some maps had obvious errors, such as points that were geo-referenced incorrectly to places outside the Roman world. Nonetheless, all the students save for one did create visualizations, which was a minimum threshold for passing the assignment. Moreover, all students overcame various obstacles along the way to get their data visualized, demonstrating resiliency and problem solving. In this section of the paper, I'll discuss some of the factors that we suspect affected student performance. Carrying out this project taught us about the challenges and benefits of conducting digital projects in a college classroom. I hope that these reflections are applicable both to our specific project and to the conduct of digital cartography projects in pedagogical contexts more broadly, in classics and beyond. First, the platform. We chose CARTO while the project team attended the Institute for Liberal Arts Digital Scholarship in 2015. This digital humanities workshop allowed us to make rapid prototypes, comparing options including CARTO, Omeka with the Neatline plugin, Esri Story Maps, and ArcGIS. Since we had the most success working with CARTO at the time, we chose that platform. The benefits that CARTO offers include free accounts and academic accounts that have upgraded features. For pedagogical purposes, a web-based system like CARTO also has advantages over locally installed software like ArcGIS, in that students are better able to work on their own computers or in computer labs on campus. That said, we have not reconsidered the costs and benefits of the platform or its competitors since 2015, and we're now planning to undertake a reappraisal in light of changes in the software market, other mapping projects, and our own growing knowledge of project goals. Relying on CARTO, as with any third-party hosted software solution, requires flexibility. For example, CARTO released a significant redesign in the spring of 2018, just prior to that iteration of the project. This redesign forced us to rewrite almost all of our documentation on short notice. In addition, CARTO, like many apps, has bugs that occasionally present issues. A few student groups dealt with bugs during the project. Our experience with the CARTO redesign and its bugs speaks to a fundamental challenge in digital classics, I believe. We work in a discipline where change has typically been slow and deliberate. Digital platforms evolve quickly and are often released in beta versions. Making use of them may require dexterity and adaptability. Another related issue was the challenge that arose when students attempted to recreate, on over 20 different machines, workflows that functioned successfully for us on a handful of computers in our research group. We anticipated that this could be an issue, so we attempted to create a consistent computing environment by giving students access to known machines in a computer lab. Students nonetheless ran into a variety of issues. Some arose from them choosing to use their own computer or using Google Sheets rather than Excel. Other issues stemmed from corrupted files and maps.
Even small things, like SQL reading quotation marks on Apple computers differently than on Windows-based PCs, became larger issues at the class scale than they were at the research group level. In some cases, students actually discovered methods that were successful when they did things on their own. Our workflow review did not notice new integrations between CARTO and Dropbox and Office 365; enterprising students discovered them, and they proved to be quite elegant solutions. For example, the Dropbox integration allowed students to edit Excel sheets online instead of making local copies, and the CARTO-Dropbox integration permitted students to link their accounts to Dropbox files instead of uploading them separately. There were instances, however, where students using this integration ran into trouble and we were at a disadvantage trying to provide support. Neither the use of Google Sheets nor the Dropbox and Office 365 integrations caused big issues, but they did create situations where the directions given in the handouts didn't precisely apply, and when troubleshooting was required, we had to play catch-up. The workflow that we provided students to guide them through creating their visualizations drew heavily on internal documentation produced in our research group. At points, we identified expert blind spots where the documentation and SQL code examples assumed higher levels of knowledge about both the classical world and CARTO than was appropriate. We identified some of these while producing the handouts, but others were only recognized when students interpreted the documentation in creative but incorrect ways. It's clear that our documentation should be streamlined to reduce opportunities for unproductive error. This is perhaps one of the common pitfalls of translating workflows from a small research group to the classroom environment. As mentioned above, once the students uploaded their data into CARTO, they were required to transform it with SQL statements that scatter the points, so that all the references to a given location are visible, and create lines that connect the points. We gave students template code and instructions to change particular variables in order to accomplish this task. Our goals are quite modest. We simply wish students to get a little experience tweaking code to make specific data sets work. In future iterations of the project, we'd like to explain a little more of the grammar of SQL to give students a better sense of how the code functions. Equipping them with a better understanding would help them troubleshoot their own code. Indeed, general troubleshooting, in SQL and beyond, and productive error reporting are technical skills we're considering teaching more intentionally next time in this project. The team aspect of the project also presents in particular ways in relation to the technical skills required. In the early stages, as students researched and analyzed Cicero's letters, working in teams of two functioned well. Every student had a partner on both research and technical questions, and we observed good collaborative activity during class sessions. Once the data sets were uploaded into CARTO, however, we observed many teams working with one person's hand on the keyboard and the other team member watching. Many times this person was actively engaged with error checking and problem solving. Other times they seemed less involved. As the students give a joint presentation as the culmination of the project, the watcher cannot be entirely disengaged.
We do nonetheless have concerns about whether the learning experiences are equivalent. Free-rider problems are endemic to group work, although the individualized account structure of CARTO may make them more likely in this project. As students worked remotely in spring 2020, we changed the assignment so that each student within the group was responsible for 10 letters, although they were also required to work with their partner and to check each other's data. This change held each individual student to a higher minimum standard, but also appeared to weaken some aspects of the group structure of the project. I've already mentioned a few of the changes that we made while conducting the project remotely this past term. To teach project skills remotely, we recorded several short videos and screencasts as well. These videos were features that we had previously considered as supplements to the workflows that we provided. And while it's impossible to separate these new methods from the pandemic context in which they were taught, we can report that they did not lead to better outcomes last term. As might be expected, the project was not as successful in 2020 as it was in previous iterations. Indeed, I found that out of everything I taught last term, this project was the most difficult to translate into a remote context, despite the fact that it's a digital project. Nonetheless, overall, we've deemed this project a success and we plan to continue it with a new set of Cicero's letters when the course is next taught. Students were introduced to aspects of the classical world, from Cicero to letter writing to Mediterranean geography, in a new way. In addition, they either improved upon existing technical skills or, more often, developed new ones. In addition to bringing digital visualizations into the classroom, MAT has also spawned mentored undergraduate research, which I'll discuss very briefly as my final topic. Daniel Oliveire won an internal Kenyon College grant to create his own related project in summer 2018. Daniel's is a geo-referenced narrative visualization about Hannibal and the Second Punic War. The project stems from Daniel's interest in data visualization and in making elements of classical antiquity available in novel ways to a broad audience, including other undergraduates, the general public, and especially high school and middle school students. Daniel made use of CARTO, Leaflet, JSON objects, and animated GIFs to add new features to his visualizations. These features include a polygon layer meant to approximate the extent of Roman and Carthaginian territory, progress arrows that allow the user to move the visualization from one geo-referenced episode to the next, and info boxes summarizing episodes of the narrative, geo-referenced to the location where they occur, with links to glossary entries and to digital versions of primary sources, for instance Polybius in this slide. He also made a glossary featuring important figures, groups, and battles, and finally GIFs that animate hypothetical reconstructions of major battles, adding another level of visualization. These GIFs are enterprising even if they bring us further into the realm of approximated reconstruction. This student is a non-specialist, not even a classics major, inspired by an interest in digital visualizations and the ancient world, and excited about the possibility of inspiring other students about ancient history.
To conclude: in this project, as well as in the Mapping Cicero's Letters project, our goals are the introduction and improvement of fundamental skills, some traditional (close reading, interpretation, and analysis), some newer (working with tabular data, digital mapping, CSS, and SQL), with the intention that these might become foundations for more complex research in the future. Yet putting aside the technological details that I've dwelled upon today, projects like these require students to spend a great deal of time thinking about and thinking with the geography of the ancient Mediterranean, and so they offer a particularly active way to promote student learning about the spaces in which our discipline works. Thank you.
|
Video held at the online-conference "Teaching Classics in the Digital Age" 15 June 2020.
|
10.5446/52333 (DOI)
|
Hi, we are back and we will have a talk by Manuel Mannhardt. It's called Nazis in Games. The talk will be in English, but there will also be a German translation, don't worry. There will be a Q&A, but since we are streaming on two channels, the Q&A will only be on the Restreality channel. Watch the talk and then join us afterwards for the Q&A. You can of course also ask your questions on Twitter with the hashtags rc3-1 or rc3-res-realität and on IRC. Hey, everybody. Thank you very much for your interest in my talk about Nazis in Games. Nazis in games in the first place simply means how Nazis are presented in games. But after that, I will go on about how games are useful for Nazis or fascist ideas in general, and what should be changed about that. Let's start with the depiction of Nazis in games with a historical setting. World War II is a popular setting for games. Not for all genres of course, but in the first place strategy and action games. Such games usually focus on player skill and challenging gameplay rather than ethical questions. They focus on combat situations against a dangerous opponent, killing or being killed. Not so much on the slaughter of civilians, not only because it would be cruel, but also because it wouldn't be a challenge, it wouldn't be gameplay. However, that easily makes Nazi soldiers look better than they were. Like just one side of a conflict, maybe with different tanks and uniforms, but the gameplay is mostly the same when you compare Nazis and Allies. The Holocaust or Shoah frequently has little to no place in World War II games. Not only for moral reasons, but also because it does not easily translate into gameplay. And after all, most games want to let you overcome obstacles and feel good about your achievements. That's not easy with a topic like the Holocaust, no matter which side you are on. Also, developers might want to avoid controversies about such a difficult topic, so they simply stay away from it. But omitting the Holocaust is not a good idea either. You could interpret it as implicitly denying it. You might say, okay, but it's a game after all. Everybody knows a game is not a history book. Indeed, pretty much everybody is aware you should not just believe what games tell you. The thing is, that awareness is not enough to protect us from mixing up the history we see in games and the history we know from more trusted sources. Daniel Jürgen examined this issue in his dissertation. He did not work on World War II games, but showed, for example, that people who played Assassin's Creed III thought more often than others that the Boston Tea Party had been a violent event. It was not. Even history students, who explicitly said they were aware that games can't be trusted when it comes to history, mixed things up; that awareness just isn't always enough, obviously. Now, I don't want to pretend games could singlehandedly turn players into Holocaust deniers. Hopefully, most players know well enough that the Nazis murdered about 6 million Jews, as well as other people they considered inferior. But I do believe games can easily tell a story that is compatible with such knowledge and still dangerously wrong. The idea of the clean Wehrmacht: a widespread belief, not only among gamers, that the Wehrmacht did not have much or anything to do with the Holocaust and other atrocities. That's exactly what many games show. Even if the story occasionally mentions the Holocaust, that seems to happen somewhere else, like none of your business.
Nazi soldiers actively, directly and knowingly supported systematic genocide, especially in the East. To be clear, I'm not talking about collateral damage, but intended extermination. In Steel Division 2, for example, you can lead the Nazis on the Eastern Front, fight your battles, soldiers against soldiers, tanks against tanks, and see nothing about the thousands of murdered civilians there. Omitting this side of the Nazis' military campaigns is credible to an awful lot of people. It even reinforces common misconceptions and makes glorification of Nazi soldiers seem acceptable. Several historians have critically commented on those tendencies in games over the past years. If you want to read more on it, a good and brief article by Eugen Pfister is available online in English, including hints for further reading. Just search for Shoah in digital games, it's really easy to find. Steel Division was just one example. Company of Heroes, Hearts of Iron, Battlefield V: they might all be great games gameplay-wise, and I really don't think the developers had bad intentions. But they all have similar issues with how they depict Nazis, and at least the Wehrmacht. So, that's the bad stuff. But you might know that some games are much worse in that respect; some even glorify the Nazis because of their atrocities, but those are generally shitty niche products I'm not even talking about here. They exist, they are bad, but I assume that's abundantly clear and obvious. So, let's ignore these games for now. On a more positive note, game developers seem to be increasingly aware that you should not just turn a blind eye to the Holocaust in a World War II context. The recent installments of the well-known first-person shooter series Wolfenstein and Call of Duty are maybe the most surprising. They are clearly referencing the Holocaust and the Nazis' antisemitic ideology. In Wolfenstein: The New Order, you're even at a fictional concentration camp, and actual historical concentration camps, Auschwitz and Birkenau, are mentioned there. It's still not much, and it might not be done perfectly, but I consider them honest attempts to raise awareness. Maybe it's even a trend to be continued. Already more daring than the big companies in that respect are smaller indie game developers. Several indie games moved away from armed conflict, focusing on the lives of civilians instead. Attentat 1942 and Through the Darkest of Times are two recently published games telling stories of civilian resistance against the Nazis. Of course, the gaming experience in these cases is wildly different from Wolfenstein or Call of Duty. Those games won't make you feel great and powerful, and you won't be able to crush the Nazis. They are doing a much better job at showing what Nazis are about, rather than drowning problematic topics in a sea of soldiers, guns, tanks, and power fantasies. I'd also like to point out the possibilities of fictional and more abstract approaches. Nazis don't have to be strictly historical Nazis, and again Wolfenstein shows that in several ways. Important parts of Wolfenstein's setting are fictional. They've got high-tech gear or even occult powers. They are about to win World War II, or even did win it and conquered the United States. The image above shows them during a parade in Roswell. Unlike in Steel Division, Nazis are not in the game in order to make it feel historical. The developers don't even consider shooting Nazis a political statement.
For them it goes without saying that Nazis are bad, so they are just used as stereotypical enemies. Instead of homicidal aliens, robots, or demons, Wolfenstein uses Nazis. Shooting them just seems like a no-brainer, and if you are using Nazis like that, deviations from historical Nazis don't even matter so much. The lower screenshot shows Nazis from the German version of Wolfenstein 2 without the most prominent Nazi symbols, because showing them in games used to be prohibited in Germany. Now, this developer did much more than needed. They also removed Hitler's signature beard and renamed him Heiler. It's all a bit weird and funny, but it also leads to the question: what do such changes do to the game? If you removed the symbols of Nazism from stereotypical enemies without context, that would take away the whole point of fighting them. But the Nazis in the new Wolfenstein games still have enough context. Even without their symbols, they are still obviously cruel, antisemitic, racist supremacists. You'd still have plenty of reasons to fight them. That also means games and stories in general can deal with Nazi ideology and fascism in general without even being about history or ever using the term Nazi. Here are two screenshots from the games Caves of Qud and Shadowrun: Dragonfall. Caves of Qud is inspired by old roguelike games and ASCII graphics, so the graphics are very abstract. The fascists there don't look like Nazis, and they are called the Putus Templar. But their description is clear enough. They are an order of knights who loathe mutation in any form, they are among the last true men, and they have pledged their lives to eradicating the mutant swarms that creep toward their homes from all sides. That is not exactly a racist ideology like that of the Nazis, but a distinction between pure humans and human mutants. In any case, it is still a form of supremacism with a genocidal goal, which clearly resembles Nazi ideology. In Shadowrun there is the Humanis Policlub, which is not just reminiscent of Nazi ideals; that is essentially what these people are. The Nazi inspiration is quite important here. The abstraction from historical Nazis also helps to focus on ideas and actions rather than on the official symbols of the Nazis. After all, Nazism is not reserved for members of the historical NSDAP, and the Nazis in the real world today don't look exactly like the Nazis of Nazi Germany. Maybe they are more about muscles now and less about the old look, and they don't have to call themselves Nazis, and they don't even have to admire Hitler. But they still share the same core of supremacist beliefs, which is what actually characterizes Nazism, or fascism in general. [unintelligible] They tend to be based on supremacism and contempt, but they go further, toward a radical rebirth of the nation, which in practice means some kind of radical purge. And then there is a third point: a very particular aesthetic. It is not exclusively a characteristic of the Nazis and is relatively independent of them, but it is very common in popular culture and in games in particular. It is not enough to call someone a Nazi, but it still has an influence. I will come back to that. These three points matter especially in the cases where there are no historical Nazi symbols any more. Now let's look at the gaming communities.
The existence of Nazi-themed Steam profiles has been reported on before, but I decided to find my own examples. Not only to get some fresh ones, but also because the ones found a while ago often cannot be found any more, for example because of violations of the terms of service, or because they simply changed or went private. Also, many groups are now just abandoned, without any new content in years. Still, finding examples is quite easy. You search for the right keywords, look at the groups you find, look at the members of those groups, and so on, going from one profile to the next. We start with Rommel. That is a nice one to begin with. Rommel is a good fit for the idea of the clean Wehrmacht. He is not known for war crimes and antisemitism, but for his strategic skills. He did not even follow all of Hitler's orders, and he probably turned against him in 1944. But let's be clear that his objections were mostly about strategic questions. Even if he did not actively support the Nazi atrocities, he did not really oppose them either, and he definitely supported the military expansion of Nazi Germany. But I think you can see why he is such a good fit for fans of the Wehrmacht, including those who would not consider themselves Nazis. There are more than 2,000 Rommel profiles on Steam, around 2,400 actually, and at least a few of them are into fascism a bit more than that. The profile shown next to him is about playing all nations in Hearts of Iron as fascists. That player is a member of several groups that are full of antisemitic comments and glorification of Hitler. He is also in a Steam group of the Identitarian movement, an actual far-right movement that is politically active. So, enough of that one. The next profile is a reference to a character from the film Cross of Iron. To be clear, he is fictional, a fictional representative of the clean Wehrmacht. Both the book and the film at least have a clearly anti-war tone. This profile, however, not so much. I mean, all the weapons, the badges, the backgrounds, the Wehrmacht flags everywhere, and of course the old national anthem. None of that fits the character Steiner, who is not about glorifying war and Nazism but rather despises them. In the comments on the profile, the first one says he thinks the profile is cool. Whatever you think about Nazism, it is definitely not cool. That is a good occasion to point out which tools players have on Steam, because the backgrounds, the badges and the emoticons are not generic; they all come from game developers, and they are earned and traded by players. Some game developers certainly thought it was a cool idea to give all those badges to players, and of course some players cannot resist using that potential in the way we have just seen. [unintelligible] Anyway, let's move on. Here is a small collection of portraits, all from one and the same Steam group.
The first one is a proud crusader, or a Templar, who is part of several anti-Muslim groups. There is more to say about the second user. His name might seem a bit odd and has been slightly changed, but the photo is quite telling. That is Adolf Eichmann, one of the people chiefly responsible for the Holocaust, who fled Germany at the end of World War II and hid for years, among other places in Argentina under its dictatorship, until he was captured and tried in Israel. The trial also provides the context for the quote on the profile. To sum it up: he regretted nothing. So that is one of the most destructive and unrepentant Nazis there ever was. The third profile is simply Nazi propaganda, from a Nazi student organization, with a partially cropped swastika; you can still see it in the corner. And the last one is Gerhard Barkhorn, one of the most successful fighter pilots of Nazi Germany. Grandpa was a soldier, not a criminal, as the saying goes. So we have an anti-Muslim crusader, an unrepentant genocidal Nazi, actual Nazi propaganda, and the belief in the clean Wehrmacht, all together in one group. And the first two profiles I showed were only two or three links away from these. So they are not part of this group, but that is how I got here. I am not even showing openly racist or antisemitic comments here, but look for yourselves if you don't believe me; you can find these profiles and groups, it is quite easy. Also, as I said, discussions in Steam groups have mostly died down over the last years, so that part is not even the most important one today. But you can still see that the far right draws people toward more secretive and radical, less parliamentary, more violent circles. So these people are not just playing games. The link does not work any more, and Discord has meanwhile shut down a few far-right Discord servers, including this one. While the Discord group no longer exists and several profiles have apparently been abandoned, the movement is still active on Steam. And it is clear that this is not only a problem on Steam. [unintelligible] A player once drew a swastika right before playing against me, and I have come across swastikas elsewhere too. It is just so easy: wherever you can draw something in a 5v5 game, a few strokes are enough to make your Nazi mark. There are not many symbols that are simple enough for that. It is easy to draw and easy to recognize. I mean, it is not exactly an artistic masterpiece, and it is still really popular. Sometimes Nazi imagery is simply an easy way to provoke a reaction. The discussion then often comes to a point where someone asks whether this is even relevant. Is it really serious? Are those Nazis actually Nazis? And isn't it just a minority that gets too much attention?
I cannot say exactly how many there are, but it is pretty safe to say that in every public gaming community with more than a few hundred people you will find Nazi content. Even in the relatively good community of Tooth and Tail, even though I actively work against it there. Some simply say that it is a kind of roleplay. For example, because they play in a war game clan, and in many cases part of the clan plays the Axis, and they want to do it with style. On top of that, German engineering, and especially Wehrmacht gear, has a good reputation among people with a fascination for military equipment. So playing a Nazi soldier might look quite attractive, especially if you believe the Wehrmacht had nothing to do with the atrocities, which of course is not true. Others, I would say, just want the laughs. They do it for the lulz. They say it is all ironic, satire, or something like that. In many cases, I am actually convinced that the Nazi profiles themselves do not say much about the respective player's political position. I even know leftists and punks who play around with Nazi imagery, although they are more likely to go for communist references. Only in a few cases do we have good reasons to assume actual fascist beliefs. For example, when they belong to a real far-right organization like the ones I mentioned; those organizations and their members can definitely be called Nazis. And it is very easy to navigate along Nazi references and end up at such indicators of real political affiliation. That means Nazi references, for whatever reason they are made, help actual Nazis in many ways. It makes it easier for Nazis to find like-minded people, and they can hide behind the excuse of it's just fun and free speech, and behind the general normalization of Nazi references. On top of that, Nazi references might be easy to shrug off for privileged white guys like me, but being confronted with stuff like that on every other gaming server makes it a really uncomfortable environment for players who actually are the immediate victims of Nazi ideology. So, for whatever reason public Nazi roleplay is done: stop it! It's harmful on many levels, even without any bad intentions. Now, even if you don't look or talk or act like a Nazi anywhere, there are still things you can change to make gaming better in that respect. Let's start with game development. Be more conscious about which mindsets the different aspects of the game are catering to. Obviously, Wolfenstein is a game about killing Nazis, but in other ways it's not really far from being a damn good game for fascists too. Stripped of the explicit ideology, it's about the struggle against a dangerous and monstrous enemy, whom a brave, strong and somewhat stoic warrior will eventually defeat anyway to purge and restore the nation. That warrior is a muscular, blond, white man, of course, often wielding two weapons at a time, a pretty typical power fantasy. It can be read as an ironic twist that the Nazis are being killed by a guy who happens to look like they would want him on their side. But as far as I know, that interpretation is rather new, and when the protagonist's face was first revealed, the vast majority of action heroes just happened to look exactly like this. In the same way, it is likely a conscious twist that the game uses fascist aesthetics on the cover image.
All black, white and red, with one man in the middle standing tall above the defeated, framed by blocky architecture. Here the Nazis obviously go down, the protagonist is standing on cleansed ruins, and the dawn of a new, better world can be anticipated. It might be conscious, but structurally this is very close to what Susan Sontag described as typical fascist aesthetics, based on Leni Riefenstahl's films. Here are some quotes. You don't necessarily find all of that in a single game, but you will probably recognize many aspects in a variety of games. The Nazi films are epics of achieved community, in which everyday reality is transcended through ecstatic self-control and submission. They are about the triumph of power. Just think of the totally obedient armies and smooth production lines like you generally have them in strategy or otherwise tactical games. Everyone is doing exactly what you want, and the perfect execution of your great plans will make your faction victorious. Especially action games are never about normality; you're constantly fighting for a cause in an epic struggle. Next quote: the contrast between the clean and the impure, the incorruptible and the defiled, the physical and the mental, the joyful and the critical. The topics of corruption and purity are very common in games, especially if you look at one of the many games inspired by Lovecraftian horror, and Lovecraft was quite racist himself, but also others. For example, I used to play a lot of Blizzard games, and seriously, corruption and cleansing is such an endlessly recurring topic there. Yeah, I got sick of it eventually. Physical and joyful versus mental and critical: that is also an antisemitic trope. It is somehow reproduced whenever you fight mad scientists and big conspiracies with complicated plans to take over the world. But you as a player, you are standing with both feet on the ground, just doing what's right, what needs to be done to purge the land. Ideally, you enjoy the battle, of course, while your character makes some cool remarks on top of that. Yeah, piece of cake. Yeah, you know that. Only three years ago, a dissertation was published on how fascist aesthetics live on in the popular culture of the present. The author, Jelena Yasso, noticed that fascist aesthetics are very often stripped of their meaning and used in totally different contexts than actual Nazis would have used them. Only abstract connotations remain: something powerful, something wicked, somehow impressive and fascinating. Obviously, that makes fascist aesthetics useful for games too, not only for depicting the enemy, but also for a somewhat edgy player character. Fascist aesthetics are not necessarily linked to fascist ideas. In popular media, they are regularly completely superficial. But they still come with some issues. It makes it way harder to safely identify seriously fascist content at first glance; remember the player profiles. And these aesthetics still fit particularly well with fascist ideas. And the third issue is that all of this makes games particularly attractive for people who embrace Nazism, but also for people who might have had negative experiences with Nazis. Now, you can probably imagine that if someone does not quite identify as a Nazi, for example because they think Nazis were actually socialists and Hitler just fucked up, and all of that, which is wrong, then they might very well enjoy Wolfenstein too.
I am not a Nazi, I am just a critic of political Islam, of cultural Marxism, of racism against white people, and all that. Those people are not that similar to the Wolfenstein Nazis, because the game is not meant as a commentary on today's forms of fascism; the developers made that clear. So today's fascists probably do not identify with the stereotypical enemies in Wolfenstein, and for them it is still a good game. The story and the hero are of course against Nazis, but not all that anti-fascist either. That leads me to the question of how games can do better. I only have time for one good example, which is Through the Darkest of Times. One of the developers, Jörg Friedrich, has already talked and written about how thoroughly anti-fascist design can work. Through the Darkest of Times went for an unusual art style, inspired by art the Nazis prohibited and ridiculed as degenerate art. This art is inherently anti-fascist, not only because of its history, but also because of what it shows. It does not idealize beautiful, strong bodies; it shows the darker side of life, desperation, fear, pain, oppression. The bodies and faces, as in a woodcut portrait, are rough, not too elegant, not too well proportioned like classical statues. Distorted and sketchy, it is a kind of human honesty, if you want to call it that. It is simply quite clear that this is not a game for fascists. In the game you are not a powerful hero but part of a small group of civilians and resistance fighters who simply do what they can. You will not overpower the Nazis. You will not liberate Germany or achieve some utopia; you can only try to survive, help others, and not let the Nazis get away with everything. So there is much more than just the general setting that can be anti-fascist in a game. You do not have to give up World War II shooters and strategy games completely, but at least think about how, beneath the surface of the fantasy, they can be less accommodating to fascist mindsets. Back to Caves of Qud and its fictional setting. The people behind this game have put a lot of effort into creating a welcoming, inclusive community around it, and they take a clear stance in most of their communication. Here are a few quotes from the code of conduct, about maintaining a pleasant atmosphere where people can feel welcome. What happened then is that, well, an edgy guy doing virtual blackface and such actually liked the game, and so did many of his followers, but some of them did not like the existing policies in the Discord community, and they have been quite nasty about it. So they entered the community and did what they were used to. One of the favorite topics, which also became part of review bombings on Steam, is that the Putus Templar are not a faction you can join in the game. So out of the 70 factions in that game, you can join exactly one, but those people demanded to play the fascist Templars, and they ranted about it on many channels, and they weren't nice about it.
So the developer and moderators eventually shut down such discussions because, as the developer said on Steam, for every one nuanced discussion about the Templar that's lost, a hundred more flourish in its place, because the mods created a comfortable space for people who don't feel welcome elsewhere. And I guess that's true. Below you can see one of several screenshots of how things look on the unofficial servers, unofficial Discord servers created by players who liked the game but not the policies. And it's a mess, full of Nazi references, surprise. So yes, you can, and you should, be intolerant against intolerance. That's the bottom line of Karl Popper's paradox of tolerance. And it seems like some gaming communities urgently need to be reminded of that. The existing community on the official Caves of Qud server overwhelmingly supported the developers and moderators in all of this, by the way, because the community has been built around such principles and they wanted to protect that. It is important to start caring for an inclusive atmosphere early, to oppose hateful comments even when they're just a joke, not only when things are about to get really messy. A community that already got used to toxicity probably does not want to change anything about that, because the people who care, well, they already left, while toxic users made themselves comfortable. That's the usual climate established in many gaming communities, and there it is already way harder to draw a line against fascist content. I'm coming to the end, so let's sum up some of the most important parts, the parts that ideally all of us should actively communicate in gaming contexts. First of all, please remember that even ironic Nazi roleplay is not harmless. It normalizes indicators of fascism and thus provides a comfortable place for people who actually believe in fascist ideologies. At the same time, it directly harms diversity, making gaming communities a bad place for people affected by hateful ideologies. And sticking to all of that edginess because you think it's about free speech actually just makes gaming communities look bad. Really bad. And it's no surprise that many people are generally suspicious of that. Whoever you are, there are plenty of reasons to speak up against signs of fascism in games. There's no need to condemn everyone who's doing distasteful shit. Many of them just really seem to be insensitive towards the issues associated with Nazi representations and all that. So we've got to explain why it is harmful, even if they don't mean it. Immediate success is rare in that respect, especially since the criticized person is likely to defend their actions by all means. But sometimes others are watching and might be more willing to change their mind. And often they just need time for self-reflection too. Anyway, there are signs of improvement all over gaming communities in recent years, despite all the noise coming from the far right, not only in gaming contexts. So there is a need for action and awareness, but fortunately there are also some good reasons to be confident that gaming can indeed get better, with less Nazism. Thank you. Okay, thank you so much. This was Nazis in Games, and we will have a Q&A session with Manuel on the Restreality channel. And here on channel 1, there will be a news show, and then at 7 there will be a talk about Neues aus der Gesundheit. Sorry. Sorry. Okay, thank you so much for listening and see you later. Okay, hello Manuel. Hello. Can you hear me well? Yes, very well.
Okay, we will do this in English. Sorry. Okay. Thank you so much for the nice talk. And we have a few questions actually. So, question one; I will have to translate them simultaneously, one second. Is it possible that gaming platforms like Steam aren't really interested in checking their content because they would lose users that are actually looking for the Nazi games? Well, that's not quite clear. I mean, the impression I've had so far is that Steam especially has been very careful not to touch community moderation at all for a long time, and they have not been very transparent about their own rules. So they don't really have a clear set of rules that says we will delete this and that content, but they keep the freedom, I'd say. However, I think I've noticed that they have actually increased content moderation on the Steam platform, because, I think I briefly mentioned that during the talk, a few groups I still knew from earlier have actually been deleted because of violations of the terms of service. And well, some things have disappeared. And there's also been a far-right game that should have appeared on Steam; it was already announced and people wishlisted it, but they at least made it hard for it to appear, and eventually it got shut down completely. So I think Steam is changing that policy a bit and they're trying to do more content moderation, but they don't want to have any responsibilities or any terms that you could force them to enforce. And I think that's also because they noticed that the people that are so much into those fascist politics are only a few, and most of them will stay anyway. So even if they take away the possibility to use certain images, those people won't leave. I've also seen far fewer Hitler pictures and so on on Steam than the last time I checked. So I think things are changing there a bit, and probably also because they realize, well, it's not that important for most players really. Okay, sticking with the topic: imagining that publishers like Riot, Blizzard and so on would start censoring their content or their games, how could we make sure that other content isn't censored as well? Like, for example, there are Chinese publishers who censor content that is critical about topics regarding Hong Kong. I think that's a good question and I don't have very good answers for that really. I mean, from our perspective, there's a pretty clear difference, I'd say, because when we are talking about the topic of Nazis and games, then it's about, well, that's what I called intolerance of the first degree. That means intolerance that is not directed against someone who actively harms you or someone who is intolerant against you. So you can be intolerant against intolerance, and that's what fighting Nazism is about in many cases, to break it down. But I get that this is hard to communicate internationally sometimes. I mean, you've probably also noticed that many of the profiles I showed you were probably not German profiles, but from all over the world. And there's really probably a bit of a gap in sensitivity for how far you can go with Nazi content. So that's really a bit of a difficult topic. I am actually not even asking that much for holding the companies who provide the platforms accountable for censoring content. I'd really rather prefer to see communities interact and be more critical about this, like not accepting the people you play with if they do some really nasty stuff. So I'm really rather about mobilizing the communities to make a change there than to, well, have big structures changed and ask them to moderate everything, except maybe really extreme cases. Okay, so you say it's more about peer pressure or social pressure that could work. Yeah, I wouldn't even call it pressure actually. It's more about just being a bit more reasonable, talking to each other and telling others that this is harmful shit and why they should stop it. So the next question really fits here. What would you suggest to parents or siblings who know that their child or their sibling plays Wolfenstein-like semi-fascist games or even worse and gets excited about that? So, what would you recommend they do about it? I'm sorry, I didn't quite understand the question. So, is there something that you can recommend? For example, if my sibling or my child likes to play these kinds of games, the semi-fascist games that are not very obviously Nazi games, and gets really excited about it and likes to play it, and I, well, of course don't want them to play it. What kind of approach would you suggest? I think initially you mentioned something like Wolfenstein, in which case, well, probably Wolfenstein is not a really good game for kids, but actually from the fascist side, I wouldn't even mind. I mean, it's still mostly anti-fascist, but it's not doing really much. And I really wanted to highlight in the first place that it still can very much appeal to today's fascists, because it still has all the things they like in it, and the Nazis are really more like caricatures of bad guys. So, I'd be rather soft about those not really hard cases, like Wolfenstein. I mean, if they are playing some really bad games, those I didn't even talk about, stop it, because they're really intentionally glorifying Nazis. But in the other cases, I think just maybe talking about it, educating about the things that are missing or misrepresented in those games makes sense. But I really think that's hard for parents or something, because they really have to know the game well and have to know where it fails. So I am afraid that's hard to do privately; it really should rather go into the games themselves and into game journalism as well. So it just has to be a topic people talk about. Okay. Also regarding the same topic: teenagers tend to like forbidden or illegal things and to be rebellious. So do you think that it could be an issue to put more and stricter laws in place, or that it could actually have the opposite effect, at least on young people? Yeah, that's again such a case where I don't think laws and official restrictions are really the answer, but rather having the communities care about that. And yeah, I mean, at some point there's something like, not really laws, but just community guidelines that have to be imposed. That's rather the point where I would make the changes. And I think laws are just too slow to adapt, because even on Steam, if we go back to how Steam developed, I think, as I said, Hitler portraits have become rare, but now they are going for all that, well, those clean Wehrmacht guys.
Because as far as I've seen, in many communities, people who really feel a bit critical about Nazi content just keep their mouth shut, even if some other guy runs around with some Hitler memes and so on. And that's the first place where we could really change a lot. So I'm really rather about mobilizing the communities to make a change there than to, well, have big structures changed and ask them to moderate everything, except maybe really extreme cases. Okay, so you say it's more about peer pressure or social pressure that could work. Yeah, I wouldn't even call it pressure actually. It's more about just being a bit more reasonable, talking to each other and telling others that this is harmful shit and why they should stop it. So the next question really fits here. What would you suggest to parents or siblings who know that their child or their sibling plays Wolfenstein-like semi-fascist games or even worse and gets excited about that? So, what would you recommend them to do about it? I'm sorry, I didn't quite understand the question. So, is there something that you can recommend? For example, if my sibling or my child likes to play these kinds of games, the semi-fascist games, which are not very obviously Nazi games, and gets really excited about it and likes to play it. And I, well, of course, don't want them to play it. So what kind of approach would you suggest? I think initially you mentioned something like Wolfenstein, in which case, well, probably Wolfenstein is not a really good game for kids, but actually, from the fascism side, I wouldn't even mind. I mean, it's still mostly anti-fascist, but it's not doing really much. And I really wanted to highlight in the first place that it still can very much appeal to nowadays fascists, because it still has all the things they like in it. And the Nazis are really more like caricatures of bad guys. So I'd be rather soft about those not really hard cases, like Wolfenstein. I mean, if they are playing some really bad games, those I didn't even talk about, stop it, because those really intentionally glorify Nazis. But in the other cases, I think just maybe talking about it, educating about the things that are missing, that are misrepresented in those games makes sense. But I really think that's hard for parents or something, because they really have to know the game well and have to know where it fails. So I am afraid that's hard to do privately, but it really should rather go into the games themselves and into game journalism also. So it just has to be a topic people talk about. Okay. Also regarding the same topic: teenagers tend to like forbidden or illegal things and be rebellious. So do you think that it could be an issue to put more and stricter laws in place, or that it could actually have the opposite effect, at least on young people? Yeah, that's again such a case where I don't think laws and official restrictions are really the answer, but rather having the communities care about that. And yeah, I mean, at some point there's something like not really laws, but just community guidelines that have to be imposed. That's rather the point where I would make the changes. And I think laws are just too slow to adapt, because even on Steam, if we go back to how Steam developed, I think, as I said, Hitler portraits have become rare, but now they are going for all that, well, those clean Wehrmacht guys. 
And I've seen profiles where they even mentioned in the profile that Steam shut it down repeatedly, and now they're back again with just this and that removed. So they're changing their looks constantly. And I don't think any law could ever keep up with the speed at which they're changing and adapting to the laws. So they're always just trying to keep a bit below the acceptable, below the thresholds, so they can keep going. And that has to be done in a much more flexible way than laws could ever do it. So, do you think that the right-wing propaganda comes from the community, from the gaming community itself? Or do you rather think that organized groups of Nazis are infiltrating the gaming community to recruit new members, for example? There might be some who are infiltrating, but for the most part, I really think there are just far-right players in communities. As I said, there are many reasons why games can be attractive to those people, because they somehow reflect their ideologies even, and they are appealing in so many ways. And even if they are not, even Nazis want some fun sometimes, so they might just as well play, and then they use the platform they have, because it allows it. I mean, there are few platforms that had such loose or such flexible rules as gaming community platforms. That's why parts of the Nordic Resistance Movement, for example, organized on Discord, because Discord in the earlier times had very lax regulations and hardly ever acted, but they changed that in 2018, I think. And do you think that the left wing should get stronger in the gaming community, for example, to also maybe save some of the players from the right-wing approaches? I actually think it is strong. It's just not always very outspoken. Yeah, I mean, we often have kind of a separation in gaming communities, I think. There are some that are just overwhelmingly overrun with troll faces and Pepes and so on, and of course also some Nazi memes. And I'd say people like us normally just keep out of those communities, because we wouldn't have any fun there. But there are also some communities that have a lot of left-wing influence, like for example Caves of Qud, which really has a lot of left-wing influence in a positive way, really. And I think these communities really exist side by side, and the broad public perception is shaped by those shrill Nazi memes and so on. And I mean, the point is not to counter that with our own memes, maybe positivity memes; it's rather that people stop the toxic things they are doing. That alone would already be a huge step. Okay, last question: Imagine I play with some people and they always make fun of me or berate me when I actively speak out against players who are Nazis or who use Nazi expressions. What can I do with my friends and the people I play with, whom I like? I don't want to just hurt them or lose them, I want to change them. Is there anything you can recommend? I mean, that's a pretty wild one, but there are many possibilities, of course. I would say that this is actually a pretty good starting point to achieve something, because if you play with friends and the group does problematic things, that's one of the cases where you actually have some chances to talk with them and explain to them how those things are problematic, why they are hurtful and why they should stop some of it. 
And that is really difficult. I think that's really how it is from my experience with gaming communities so far. I'm a moderator, for example, in communities where people drop Nazi jokes and hurtful things, transmisogyny or ableism or something like that, which is just... I'd say in about 50% of the cases you can at least talk reasonably with them. And at some point you just have to realize that it doesn't make sense to stick with them. So, yeah. Those are just different cases. You have to try it. I usually try it softly. And quite often it works. I've been surprised how often it works actually. But sometimes, yeah, you don't have a chance to do that online. De-radicalization just works much better if you know people in real life. And you sometimes just don't have a chance to do that online or in the community. Okay. Thank you so much. Thanks for the very interesting talk and also for answering my questions. Thank you so much. Thank you for the opportunity. And play in a second. Okay. And thank you very much for all the questions. Thank you so much for sending the questions, for staying with us. And bye.
|
The depiction of Nazis in games tends to downplay their atrocities and facilitate the normalization of fascist aesthetics and ideologies. Even in a playful and lighthearted context, this normalization has consequences that can and should be avoided by everyone in gaming communities – including developers, content creators, and players. World War II might be just one of many historical settings in games, but the representation of Nazis makes it a particularly tricky one. Especially if developers are trying to sell an entertainment product and provide players with all the freedom they might wish for, they frequently decide not to depict the Holocaust, for example. Such a decision, however, contributes to an embellished picture of Nazi Germany, which is then offered to players – some of which are apparently craving Nazi content. Their ideology, weaponry, and aesthetics seem to appeal to certain players in a peculiar way, as can be seen in user-created content, profiles, and discussions. No matter what the intention behind such behavior is, the abundance of Nazi representations in gaming communities has harmful consequences. By keeping the line between role-play and actual propagation of fascist ideologies blurry, gaming communities make themselves a rather uncomfortable place for all but the most privileged of players, while at the same time offering an excellent playground for actual fascists. Players, moderators, and developers regularly face backlash when advocating for more inclusive communities – at least whenever making a positive change would require dropping supposedly “edgy” behavior or content in an environment that has long been used to it. Tackling these issues demands more awareness and responsibility from all involved parties, from the players to the developers and their marketing divisions. Luckily, we could recently witness various good practice examples of ways to oppose fascism in gaming. Such examples are still rare, they require effort and dedication, but they harbor promise of a better future for gaming – for everyone except literal Nazis.
|
10.5446/52350 (DOI)
|
Hello and welcome back to the R3S of the RC3 in Monheim. One thing that came up in the IRC that was unrelated to any talk was about the little display you see here next to the Winkekatze. What this is: it is a CO2 indicator, so we can see when we have to ventilate the room. So one of our producers, the nice guy who does the video here, bought this so we can see when we have to exchange air to prevent the spread of aerosols and so on. Anyway, if you have any questions regarding our stage or the talk, please feel free to join the IRC channel rc3-r3s on hackint IRC, or use the hashtag RC3R3S on Twitter and Mastodon, or use our handle at chaos.social on Mastodon. In our next talk we are going to stay with artificial intelligence and with GANs. Now in English, as you may have noticed. Our next speaker normally hangs out at her hackerspace. She is doing her master's thesis on GANs. She is very interested in the ethical aspects of what this technology can do. So please have a very warm welcome for Lisa Greenspecs and her talk: But this politician said XYZ. Hello and welcome to my talk, But this politician said XYZ. I want to talk today about the technology behind deepfakes and its ethical implications. So if this was a live talk, I would have asked you to raise your hand if you knew this person, and I would have expected nobody to raise their hand, and if they did I wouldn't believe them, because this person does not actually exist. Thispersondoesnotexist.com is a homepage launched in February 2019 by a software engineer at Uber, and every time that you refresh the page another face shows up which was generated by the same technology that is behind deepfakes, which is called generative adversarial networks. Next to thispersondoesnotexist there is also thiscatdoesnotexist.com. So you could argue that this cat does not exist at the same time. To my disappointment, thisdogdoesnotexist does not exist yet. So maybe that's a task for later. So let me walk you briefly through what I'm going to talk about today. First I'm going to explain to you what generative adversarial networks actually are. Second, I want to give some use cases, what GANs are already used for, and then we're talking about the downsides of GANs, so for example deepfakes, but also other negative use cases. So what are GANs? GANs were introduced by Ian Goodfellow in 2014. Ian Goodfellow is an ex-Googler and is now the head of machine learning at Apple. He's also a former student of Andrew Ng, who is a very popular figure in deep learning. And in this tweet here on the right, you can see a post by Ian Goodfellow from 2019 about the evolution of GANs. So we started on the left with a very pixelated black and white picture of a woman in 2014. We go through the years up to 2018, where we already have a very photorealistic picture of a person that does not exist, generated by a computer program. Now, as you have seen on thispersondoesnotexist, we even have hyperrealistic pictures of people that we can't even distinguish from real people anymore. So what are GANs? GANs is short for generative adversarial networks, and GANs consist of two neural networks competing against each other. So on the one side we have the generator that generates an image or audio or video, for example, and is also sometimes called the artist. The discriminator on the other hand discriminates between images or audio, so it's called the art critic: it tells whether an image or whatever other input is realistic or not. 
So that's a lot of new words, so let me walk you through them. So what is a neural network actually? As a disclaimer, this is a very simplified view, so please, fellow machine learning engineers, don't come after me on that. So a neural network is based on the idea of human brain physiology, and each node in the neural network would be a neuron in the human brain, connected to other neurons, forwarding and transforming information. Neural networks are a part of deep learning, where the deep refers for example to the hidden layers in the neural network, and deep learning is a part of machine learning, which is a part of artificial intelligence, mimicking the human brain's intelligence. So a neural network typically consists of three main parts: we have the input layer, we have one or more hidden layers, and we have the output layer. So, for example, our input layer could be an image of a cat; that would be the RGB values, the pixel values of the cat, which are getting forwarded to the one or more hidden layers, and the hidden layers are doing some sort of feature extraction. So, simplified, you could say that the hidden layers are checking whether there are pointed ears, a cute nose, whiskers or the typical eye shape of a cat, and this kind of information is getting forwarded to the output layer, which calculates a probability of how likely it is that the image that we put in is actually the image of a cat, yes or no. And this is basically what our discriminator is doing. Our discriminator is the art critic that sees an image, for example of a cat, and is supposed to tell us whether it is indeed the image of a cat or not. Our generator on the other hand works the other way around: it gets so-called noise as an input layer, noise being randomly sampled values, it forwards those to hidden layers which are supposed to form ears, eyes, snout and so on, and it transforms that into pixel values to generate the image of a cat. So how is this working together now? The generator, which only gets random noise at the beginning, starts to draw very random stuff, that can be blobs, black and white, lines all over the place, and it forwards those generated images to the discriminator. The discriminator, which does not know yet what a cat is, then makes a guess: is this squiggly thing here a cat or not? So in this case, let's say it gives the information no, this is not a cat, back to the generator. The generator then knows, oh okay, well, I have to change something about that, so it keeps trying and trying and trying until it gets closer to what an actual cat is supposed to look like. The discriminator is not only learning through the generator and its output, but it's also learning by getting real images of cats. So the discriminator is getting the fake images from the generator, but the discriminator is also getting real images from our labeled input data. So every time the discriminator sees a picture, it makes a guess, yes or no, is this a cat, and then it gets feedback from the system: okay, this is a real image, or this is a fake image generated by the generator. And the discriminator's goal is to be able to differentiate the two, to say okay, this is fake and this is real, and the generator's goal is to make pictures as realistic as possible. So these are our two neural networks, the generator and the discriminator, fighting or competing against each other, so that's the adversarial part of generative adversarial networks. So this process keeps going on and on until the generator can generate pictures that are indistinguishable from our real images. 
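To make that interplay a bit more concrete, here is a minimal structural sketch in TypeScript. The Generator and Discriminator interfaces are hypothetical stand-ins (a real implementation would use an ML framework and gradient-based updates); only the control flow of the adversarial training loop described above is shown.

```typescript
// Structural sketch of the adversarial training loop described above.
// Generator and Discriminator are hypothetical stand-ins for real neural
// networks; only the "artist vs. art critic" control flow is shown.

interface Generator {
  // Turns random noise into a candidate image (pixel values).
  generate(noise: number[]): number[];
  // Adjusts its weights based on how "real" the critic thought the fake was.
  learnFromFeedback(realnessScore: number): void;
}

interface Discriminator {
  // Returns a probability in [0, 1] that the input image is real.
  scoreRealness(image: number[]): number;
  // Learns from a labelled example: was that image actually real or fake?
  learnFromExample(image: number[], wasReal: boolean): void;
}

function sampleNoise(size: number): number[] {
  // Randomly sampled values, the "noise" the generator starts from.
  return Array.from({ length: size }, () => Math.random() * 2 - 1);
}

function trainGan(
  generator: Generator,
  discriminator: Discriminator,
  realImages: number[][],
  epochs: number,
): void {
  for (let epoch = 0; epoch < epochs; epoch++) {
    for (const realImage of realImages) {
      // 1. The artist draws something from random noise.
      const fakeImage = generator.generate(sampleNoise(128));

      // 2. The art critic sees a real and a fake example and is told which is which.
      discriminator.learnFromExample(realImage, true);
      discriminator.learnFromExample(fakeImage, false);

      // 3. The artist gets the critic's verdict on its fake and adjusts,
      //    trying to push that score towards "real".
      const verdict = discriminator.scoreRealness(fakeImage);
      generator.learnFromFeedback(verdict);
    }
  }
}
```

In practice both networks are trained with gradient descent on batches rather than single images, but the alternating rhythm of "critic learns, then artist learns" is exactly the process the talk describes.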
Some of you might have seen this rather popular GIF already, from a paper by Zhu et al. from 2017, where the input was moving horses on one side and images of zebras on the other, and the GAN's goal was to map the pattern of a zebra onto a horse, and while this looks very funky, if you took a screenshot of it, it would, to most of us at least, look like a zebra. So now that you know what GANs actually are: what are GANs actually used for, what are they useful for? They are for example used in medicine, for example to reduce noise in images, so fragments that are not supposed to be there; they are used for upsampling images, so in case we have a low resolution they upscale the resolution; we have classification, is it this or that; we have segmentation; and we have object detection. So here in the lower left image you see pictures of an eyeball, and a GAN for example could extract an image of the blood vessels in this very eyeball, which could then be used to diagnose something or to see whether everything is working fine. And in the right image you see an MRI scan of the brain, and the GAN would be able to detect abnormalities in the tissue that could give a hint of a disease or something which might not be visible to the eye, or could at least save time and resources in the process. So science is one big area where GANs are already used, but GANs are for example also used for art, such as video games. So here we can see that it has been used in the Legend of Zelda from 86: in a paper from Terada from the past year, a GAN could generate new levels in this game, and about 60% of the levels that the GAN generated were actually playable levels. So in these kinds of levels you always have to have a certain amount of items, you have to have a key, you have to have a door and so on, and GANs were able to produce up to 60% playable levels, compared to other algorithms from which only about 10% were playable. Another form of art beside video games are movies, where deepfakes or GANs are already used. So here the Reddit user DerpFake uploaded a gif of the face of Nicolas Cage put onto another actress's body in the movie Man of Steel. Nicolas Cage's face on different bodies has gained quite some popularity in recent years. And beside putting Nicolas Cage's face on other actors' and actresses' bodies, other users have shown that generative adversarial networks can also outperform CGI, which might be used in the creation of movies in the long term. So here you see that in the movie Rogue One, the young Carrie Fisher on the left side with CGI and on the right side with deepfakes or GANs, which produce a far more realistic and prettier picture. Another example is Robert De Niro in The Irishman, where he was de-aged, since the actor was already 70 years old at the time, and it took Netflix about 10 million dollars and two years to de-age Robert De Niro, while it took one YouTube user about one week and his home computer. So next to science, art and video games, there is another use case that most of you have either used yourselves or at least have seen on social media platforms, and those are so-called filters. We have aging, we have face swapping, we have putting bunny ears, cat ears, dog ears onto people's faces. They are all also created by generative adversarial networks. Another use case, for example, is grief therapy, which has been used for the first time this year, where therapists have spoken with the voice and face of somebody who just passed away unexpectedly. 
So, for example, a father could talk to his just passed away daughter and work on his grief through that. So what are the downsides of GANs? Now we have seen many positive use cases, many useful use cases, but GANs are not without any problems. So one big problem is bias, and especially racial bias, and here on the left side you see a pixelated picture of former US President Barack Obama that got upsampled by Nvidia's StyleGAN algorithm into a very whitened version of Barack Obama. Twitter user Osa Suva has used this algorithm a couple more times, where you can see here on the left side the original images of the people that he used. So he first pixelated them, which is the middle picture, put them into the algorithm, and the right column is what the algorithm puts out. So here you see a variety of ethnic backgrounds and skin colors, and their pixelated versions got upsampled into very whitened versions of them. Another problem is that not only do GANs produce pictures with bias, but similar techniques are used to predict the probability of which people who are accused of a crime would commit a crime again. And this also shows a substantial racial bias, with people of color getting longer sentences because the judge would use such biased software. Another problem with GANs is that they can be used to create fake identities. So for example social media bots are getting more and more realistic and are used to influence people's political opinions and decisions. So Facebook removed over 900 accounts which spread pro-Trump propaganda to about 55 million users. Facebook held a coding challenge to develop an algorithm to detect fake images, which they called the DeepFake Detection Challenge, in December 2019, and Twitter for example said that they are marking tweets that contain fake images and warn the user when they want to share a tweet with a fake image. Most of the algorithms that are supposed to detect fake identities or deepfakes are typically also based on GANs. One of the biggest issues on the downside of GANs is identity theft, and about 96% of all deepfakes are porn. That is celebrity pornographic videos, but for example also revenge porn, and this created a whole sub-genre of porn. So it's mainly used against actors and actresses, but also you and me could be a victim of this, and for example political opponents. So while I was researching articles about deepfake porn and so-called DeepNudes, I found this terrible article reviewing the best DeepNude apps of 2020, which I tried to report. So let's hope that is getting removed at least. And in 2018 people tried to silence Rana Ayyub, who is a Muslim and investigative journalist from India. Her social media account got infiltrated with fake posts and fake porn, such that she wasn't accepted at any Indian publisher anymore and she couldn't leave her house for quite a while. A last problem that I wanted to mention is the tampering with medical imagery. So it starts to spread to other domains as well. Researchers have shown that you can inject or remove a tumor on an image of a 3D CT scan of a lung in a way that fools medical professionals as well as detection software. And then there are many things that we're probably not even thinking of yet that GANs and deepfakes could influence and take over. And you might say, why is that a problem? Only people who have a lot of computing power or a lot of knowledge about these things can create deepfakes. But that's not true. Basically everybody can create deepfakes. 
It is important for you to know that everybody can make deepfakes now. You can turn your head around, mouth movements are looking great, and eye movements are also translated into the target footage. And of course, as we always say, two more papers down the line and it will be even better and cheaper than this. So now I've mentioned all the dark and negative sides of GANs. But what shall we actually do? What can you and me do against the downsides of GANs? So when we talk about bias, especially as a researcher, there are several things that you can do. You need to try to balance your data sets. And you can do that with a variety of things. You can try to put more variation into your collection methods. You have to have a high diversity of people labeling your data. And you also need diversity in where your data is collected. So here in the image at the bottom, you see on the left side ImageNet, which is a very popular data set with labeled images, but more than half the data was collected in the USA and in Great Britain. So it's not very representative of the world, but it is used in all kinds of tasks. And it's also usually the case that the dominant culture is often more highly represented in a data set, and is even reflected as correct when it's put into an algorithm. On the other side, you can try to balance your algorithm. That can be done by checking losses and weights, et cetera, but also the people who are coding influence the bias of an algorithm. So there was an example with a soap dispenser, where a group of people with white skin color developed a soap dispenser that would react to a hand held under the dispenser, and it would not react afterwards to people with a darker skin color. So the bias of the people who are coding is also reflected in the system. So to sum up, bias can be introduced to a machine learning model basically at any point where a person might have designed, engineered or touched the system. And every one of us is biased, whether we are aware of it or not. But what can we do against deepfakes? Unfortunately, not a lot, because the technology is already out there and is already freely available to a lot of people. What you can do and what I can do is we can question our sources of information, our sources of images, et cetera, to detect deepfakes. And if you're sure that you detected a fake, report it, and also don't use or support GAN algorithms in a harmful way. Thank you for your attention. I hope you found it interesting and learned something new. I'm looking forward to answering your questions. And whether you're programming with GANs or whether you're using or consuming GANs or their products, don't forget: with great power comes great responsibility. Thank you. So yeah, I think I'm back on stage. Oh, nice. So I see that there is... Oh, okay. So your talk seems to have been quite comprehensive, as nobody did leave any questions. Considering that this is the second talk on GANs in a row, I think, yeah. What was really nice of you was that you covered the other side of GANs. The speakers before you tried to cover, or did cover, a specific use case and how to implement it, and you covered more fundamental concepts. So yeah, thank you very much for taking the time to prepare and give this talk. And yeah, have fun at the remaining RC3. Thanks for virtually coming over and... Oh, yeah, one came in. One came in just now. How big is the computational cost for a discriminator? 
For the discriminator, that really depends on how complex your GAN is. If the input for your GAN is very small images, then the computational cost is of course smaller as well. You can't... Yeah, you can't really generalize that. Okay, so thank you for taking your time. Thank you for answering this last minute question, and yeah, have fun. Thank you for hosting me. It was really nice. You're absolutely welcome. Thank you for coming over.
|
This talk will explain what deepfakes are, the technology behind them (GANs) and why we need to be careful when using them. GANs (short for "Generative Adversarial Networks") have been revolutionising the generation of images and videos since 2014. While this machine learning architecture is being used in arts, science and video games, it is also abused to steal people's identities, for example by generating fake news that puts words into politicians' mouths which they never said, or creating porn with the faces of famous actors and actresses. In this talk I will first give a short overview of what GANs are and how they work. The second part dives into the new dark world that it opened up to us and why we need to be careful - because with great power comes great responsibility.
|
10.5446/52353 (DOI)
|
If you want to ask questions to the speaker: unfortunately, right now I cannot hear you, and I cannot see you if you stand in front of your microphone. Thus, I would kindly ask you to go either to Twitter or Mastodon and use the hashtag RC3R3S, or go to hackint in the IRC, on the channel rc3-r3s, all numbers and letters. Also, we are streaming on Twitch and YouTube. You can search for our streams by using the Remote Rhein Ruhr Stage. Use it as one word or three words, whatever you like. You will probably find it. So, to go on from here about our speaker: he is studying industrial design in Eindhoven, part of his thesis is this talk, and he wants to illuminate privacy from the aspects of both design and development, as he is both a designer and a developer. And this talk was also presented at the Dutch Design Week, so give it up for Lai. Right, thank you so much. Let me get right to it. So, hi there, as I said, I'm Lai, I'm currently presenting from this man cave in Eindhoven, the Netherlands, software engineer by trade, designer by education. And some of you might wonder, what does that mean? Well, it means that I have an interest in a couple of things in the overlap between both of those fields. So, there's privacy, there's personal information, and there's also user experiences. And simultaneously, it means I can cherry pick the aspects I like from both fields while also blatantly ignoring all the sane practices that have been set in both fields. That will be a recurring theme in this talk. So, let's talk about this personal information. I want to show a story of my personal information as well, and taking control of it specifically. We'll be talking about personal information a lot, so let's consider what personal information is for a brief minute. By law, personal information is any information that is related to you as an individual. It's sort of infectious. It's as if your hand just touched the elevator button, and as soon as you touch your face, it's infected. Linked to your name, it's personal information. A data point linked to your IP, it's personal information. Linked to a hashed bank account number, it's personal information. If it can be connected to you in any way, it's considered personal information by law. And what's more to know about personal information in 2020? So, we know that governments care little for personal information at this point. We know private corporations care little for personal information. Unless we pay for it, that is. In fact, we know our personal information is actively being used to manipulate us. And consequently, we know that about six out of 10 Europeans worry about a lack of control over their personal information. And here, I'll just assume that the remaining four out of 10 haven't been paying attention. So, where do you even start? It's very easy to feel sort of overwhelmed by this knowledge that we just ignored. We get into a sort of Stockholm syndrome, where we still dive into the YouTube hole or the infinite Facebook scroll with a this-is-my-life-now attitude. What did you do last night? Well, I went on a six hour bender last night, watching Kim Kardashian's wedding, how the earth is flat, and the elite drinking baby blood in satanic rituals. It was really inspiring. Well, that's a joke, of course, but it says enough about 2020 that this is apparently closer to reality than the self-lacing shoes and flying cars we were promised. So, what can I do? We've got all these issues. How can I make a difference? 
And that is: what can we actually do, besides becoming digital hermits and casting our devices into the fire? And we as hackers, engineers and privacy junkies answer this question often somewhat condescendingly. Like, oh, sure, it's easy. Just delete your social media accounts, block some ads and trackers, join the Fediverse, petition your members of parliament, maybe write some new antitrust legislation yourself, prosecute the rich elites profiting off data, drive a van filled with magnets into a Facebook data center, and walk barefoot through pools of lava to cleanse your iPhone. I think a better question here is, what can we feasibly do as individual citizens? What can we do to sort of skirt the life-upending decisions while still making a tangible impact when it comes to the average citizen's privacy? And let me do some cheerleading here for specifically the GDPR. I personally think it's a fantastic set of laws that solves some of these problems. And some may dearly disagree with me here. Like, is it a perfect law? No, sure. Were some of the concepts prior law as well? Yes, of course they were. Has it eradicated big tech power, the failures of the human condition, and brought world peace? Not really. But remember that individual aspect of doing something, anything. This is where the GDPR does make a difference. So let's unpack this whole thing that we call the GDPR. Not the entire thing, of course, but let's cherry pick precisely the aspects that help me play to my savior-complex strengths. Fundamentally, the GDPR is about consent and transparency. So first, there's consent. The idea that you get a say about what happens with your data. And I'm going to skip any further discussion here, as the state of it is depressing enough as is. So anyone else can go and clean up that mess. Rather, we'll talk about transparency. Transparency is the notion that organizations that process your personal information provide truthful information about what they collect and how they process it. Just like in Christopher Nolan's latest movie, transparency can move forwards and backwards. Forwards transparency is the data processing registers and the consent notices: this is what we're going to do to your precious data. Backwards transparency is the look at what you've made me do with personal information. Fortunately, we have a more sexy sounding name for what we can do here, and it's called data rights. So during the GDPR making process, the lawmakers had some fun or felt generous like Oprah. And as a result, we ended up, kind of coincidentally, with a set of rights related to our data. There's the right to access your data. There's the right to rectify data that's incorrect. There's the right to erase data if you don't want to have it there. There's the right to restrict certain data processing if you want to. There's the right to notification of what is processed. You have the right to take the data along with you. And you also have the right to object to data processing practices involving your personal information. And these things are pretty powerful. All EU citizens get to enjoy them. And they get to exercise them with whomever is processing their personal information. You can basically go up to any organization and say, this is a robbery, I want my data. And they have to comply. Not complying is expensive. Fines are up to either 10 million euros or 2% of global turnover, whichever is more. So not less. That's ridiculous. 
So when I found this out, I felt like quite the hacker: I was going to go out, I was going to retrieve all my data, and there was no one to stop me. So I did. I actually went out to about 59 organizations, to which I sent data requests. And this was all kinds of companies. I sent them to big tech, I sent them to insurers, banks, dentists, doctors, bakers, hairdressers, public transport companies, basically any company that is in on this whole digital transformation narrative that seems popular today. And this is what that looked like. So this is an actual request in legal mumbo jumbo that allows you to gather your data. And I want to shout out to My Data Done Right, a Bits of Freedom initiative, among others, that helped me generate this mumbo jumbo quite easily and send it out myself. And I sent most of those by email, as that was the sort of standard for these kinds of things. But for some of them, that didn't pan out as smoothly as I wanted it to. So for some, I actually had to go and print them out, leave my house and put them in a physical mailbox, in 2020. I'm not even kidding. In one case, I had to actually physically go over to one organization's headquarters to sign a form and elaborate in person what I was actually doing and why I wanted to do it. And breaking some character here, I actually have to hand it to big tech regarding the amount of engineering hours they've invested into making requesting data a decent experience. They've got this particular thing figured out. So when we're talking about the Apple, Facebook, Spotify, Instagram and LinkedIn data request platforms, they're actually not that bad. They're the best out of the bunch. But it's the only compliment that I will be giving those kinds of companies during this talk. Because unfortunately, those practices were contrasted by almost everyone else doing the worst possible job. At the 30-day mark, which is the legal limit for responding to data requests, about 40% of requests were still unanswered. And still, right now, so I did those in March, probably nine months later, 20% of those requests remain unanswered. And that's painful to me. And it doesn't even include all the back and forth emailing I had to do, the reminders that I actually had requested data, that it's been 30 days, and how almost everyone asked me to send a copy of my passport in plain text email. It was just not great. The experience wasn't great. But then we get to go over the actual responses. And I want to go to my favorite one first. And this is where a bank sent over a mail courier to my house, which they announced about a week in advance, who asked for my passport, then checked it, made a copy of it, took that along with him, and then handed me this USB stick. It's laying around right here; it's a fun piece of memorabilia. This USB stick contained the data that they sent back to me. And even though, as a designer, I like data being physical a lot, that must have been a ridiculously expensive operation for them to do, especially as more people start asking for their data. And then, on the complete other end of the spectrum, I sent out a request to the Dutch tax and revenue service, which returned to me this six page middle finger, basically flat out rejecting my request unless I made the request very detailed and very specific. 
And while I expected some corporate backlash, I must admit that I was kind of caught off guard by a whole European government agency being completely hostile to any notion of user data rights. It was kind of off-putting. But then we get to the actual data that I got back. So this is a tiny piece of what I received. I received over 2,200 files covering all parts of the data spectrum. So of course there's CSV files, there's JSON files, there's XML files. But more often than not, I encountered Excel files, HTML files, JPEG files, screenshots, PDFs, text files. In some cases I received data via the mail, so I had to scan it in myself to get all the data in. And you can go on and on and on. There's a ridiculous variety of returned types of data in here. And while I read JSON fluently, there's also the point that the massive influx of data made it very hard to grasp what it actually meant to me, all the data that I retrieved. And I'm not alone here. A couple of scholars went out and requested all of their data in a similar way. One researcher actually ended up getting access to his colleague's data by just spoofing their email address. There was no authentication or check whatsoever. The secretary basically just assumed that the email was valid and sent the whole dump of someone's personal info over to a complete stranger. And they also found that passwords are regularly sent back and forth using just plain text email. Particularly, Wong and Henderson found that over half of the responses they got to their data requests did not comply with the machine readability standards that are set forward by the GDPR. But if there's any common ground between all of them and myself, it is that the process was exhausting, frustrating, and ridiculously slow. In fact, it was so poor that I haven't considered using any of my data rights ever since those experiences. It was horrible. So that's why I asked myself, can't we do better? And I mean this in an end-to-end sense, covering the whole process from requesting your data to getting it, to viewing it, to storing it, to actually doing something with it and getting insights, all of those things. So that's why I built Eon. And Eon is pretty simple. It's a desktop application that does exactly this end-to-end stuff with personal information. So there's the requests, there's the archiving, there's the getting insights. And I'll briefly walk you through how that works for a regular user. So first of all, there's this account overview where you add all of your online accounts. So right now you can add Spotify accounts, Facebook accounts, Instagram accounts, your LinkedIn accounts. There are ways of adding other accounts, but I'll get into them later. And as soon as you've added an account, you can start a data request for it. And that basically means that you get a window where you enter your credentials, and then Eon will do all the clicks that are necessary for you. And that's it. Your data has been requested. The only thing you have to do is wait for it to complete. And then Eon will let you know: hey, it's been a couple of days, your data request is complete, it's in now, let's have a look. And when the data request does complete, it just pulls in that data automatically and stores it safely on your local disk. And this is where you have the opportunity of actually inspecting it. 
So you can see a small hint of that in the right bottom corner, where there's an overview of all the data points that came in from a particular data request. But there's better ways of looking at it. So seeing your data chronologically is one option. There's also a categorical overview where you can just see the different categories of data, but you can also view all your data as a graph. And here's where you can more easily inspect what's happening. So you can see the data types, where the data came from, which account, which platform, and the individual data points as well. And I'll give you a demo of that shortly. And then you can actually inspect single data points, individual data points. This is specifically an ad interest data point that was in the LinkedIn dump. And once you have that concept of a single data point going, you can actually bring in that right to rectify that we talked about in the beginning. So if you have the data, you can actually say that this is not a data point that I want you to have, and basically object to it. And in Eon, you can select a bunch of those data points that you would like to see deleted. And then Eon will help you generate an email that will ask the provider, in, again, the legal mumbo jumbo that's right for these kinds of requests, to delete those specific data points. You just open it up in your email client and send it off. That's basically it. So before I show you that quick demo, I want to introduce you to Olaf. And Olaf has grown quite close to me over the past half year. I've learned that Olaf likes Formula One, he likes football. He's actually a junior football coach over in his birthplace of Veldhoven. And we've grown so close, in fact, that he felt comfortable that I gather his data and display it publicly for all of you to see. I must admit, though, that Olaf had little choice in the matter, because, just like trickle down economics, Olaf isn't real. I fabricated him as an alter ego for me and a set of study participants to work with during the Eon development. But you get to learn everything about him in this short demo. So let me move quickly out of this presentation and go over to the next screen, where you'll see the actual Eon application going. So here's this timeline overview where you get to view all of the recent data requests that came in. So specifically there's tiny ones for Instagram where apparently a couple of ad interests got deleted. And if you think those ad interests are very interesting, you can browse them one by one and see where they're coming from. By and large they're coming from LinkedIn in this case. And this is that graph overview for Olaf specifically. So here you can see the Facebook, LinkedIn, Spotify and Instagram platforms and how those data types are related to them. So for instance LinkedIn and Facebook both have extensive place-of-residence type data points for Olaf. So, the Olaf I came up with, basically. And when you go out and click a specific data point, you can just delete it and then find the generated email quite easily over here. So that's it for the short demo. 
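To give a feel for the kind of erasure email described above, here is a rough TypeScript sketch. The data point shape and the wording are hypothetical, made up purely for illustration; they are not Eon's actual template or interface.

```typescript
// Hypothetical sketch of generating a GDPR Article 17 (right to erasure)
// email from a set of selected data points. The DataPoint shape and the
// wording below are illustrative only, not Eon's actual template.

interface DataPoint {
  provider: string; // e.g. "LinkedIn"
  category: string; // e.g. "Ad interest"
  value: string;    // e.g. "Formula One"
}

function buildErasureEmail(recipient: string, points: DataPoint[]): string {
  const list = points
    .map((p) => `- ${p.category}: "${p.value}" (held by ${p.provider})`)
    .join('\n');

  return [
    `To: ${recipient}`,
    'Subject: Request for erasure of personal data (Art. 17 GDPR)',
    '',
    'Dear data protection officer,',
    '',
    'Under Article 17 of the GDPR I request the erasure of the following',
    'personal data you hold about me:',
    '',
    list,
    '',
    'Please confirm the erasure within one month, as required by Article 12(3).',
  ].join('\n');
}

// Example usage with a made-up data point:
console.log(
  buildErasureEmail('privacy@example.com', [
    { provider: 'LinkedIn', category: 'Ad interest', value: 'Formula One' },
  ]),
);
```

The idea is simply that the selected data points get folded into a request that cites the right legal basis, so the user never has to write legal prose themselves.

Let me come back to the presentation and start answering the hard questions. So now that you've seen everything, the obvious question should be: how does this all work? And remember that thing I said about disregarding sane engineering practices? 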
Well, if you are an engineer that really cares about sane software practices, this is probably the moment where you want to step out and mute the stream for a couple of minutes, because I will openly promote, defend and actively encourage practices that those people will probably decry as heretical. So while we wait for them to leave, I'll reveal that basically, for Eon, everything is Electron. It's TypeScript. So that's JavaScript almost all the way down. And for those not in the know, Electron basically packages the web browser Chrome into a desktop application. While it's not a new idea, Electron in my opinion is the first mature attempt at doing so. And it's consequently used by a lot of applications that you use on a daily basis. So think Microsoft Teams, for instance, which we have been locked into for this last year. And there are a couple of reasons for using Electron in this project specifically, and I'll tell you about them. So first of all, there's the Electron browser APIs. We do a lot in the background to make sure that the user doesn't have to do anything. And as all the platforms are basically front ends only, so they don't expose any APIs, we just default to making clicks on behalf of the user. So we open up this window, the user enters their credentials, and then basically we just use this browser window to make clicks for them. So we click through the specific button sequences and pages to get that data request going and actually download it in the end. And this means we don't have to do any password storage magic whatsoever. We can just rely on the native browser. And that means we use existing flows without complicating stuff for ourselves. And then there's the developer experience and the prototypability of an Electron application. So on the right here, you see a base implementation of the Instagram provider. So this is basically the code that does all the clicks and pulls in the data. And the whole thing that does all of it is about 200 lines, most of which is boilerplate. And since it's plain old TypeScript, a lot of people can get involved quite easily. The bar is very, very low. And then again, the application runs on whatever platform you can throw at it. So it goes for Windows, macOS, whether it's Intel or Apple Silicon, it can do Raspberry Pi, it can probably even do your home-cooked BSD distro if you wanted to. There's no platform-specific code in Eon yet, so all the platforms benefit from the same changes immediately. And then the last one, which is probably the one I will be crucified for, but I'm sticking up for it: the web has a superior cross-platform user experience. There's the rich DOM, there's React patterns that make creating a recognizable, accessible UI from scratch exceedingly easy. For the graph-based view, I just used Cytoscape.js to prototype it in literally a couple of hours. You can't beat it with native stuff. Everything's modular. You build a data retriever, on the left side, about 200 lines. And then you can just use a JSON-defined schema to pick out the data points from all the returned files, and the data types that are associated with them. And because all the data is local, as time moves on the schemas get better, and Eon is able to show you more of the data that you already have. 
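To give a rough feel for that provider idea, here is a hypothetical sketch of what such a click-automating provider could look like. The DataProvider interface, the URL and the CSS selector are invented for illustration; they are not Eon's actual provider API or any real platform's markup, so check the project's docs for the real interface.

```typescript
// Hypothetical sketch of a "provider": an Electron BrowserWindow is driven
// on the user's behalf to click through a platform's data-export flow.
// The interface, URL and selector below are made up for illustration.
import { BrowserWindow } from 'electron';

interface DataProvider {
  key: string;
  requestExport(): Promise<void>;
}

class ExamplePlatformProvider implements DataProvider {
  key = 'example-platform';

  async requestExport(): Promise<void> {
    // Open a regular browser window so the user can log in themselves;
    // the application never has to store any passwords.
    const win = new BrowserWindow({ width: 800, height: 600 });
    await win.loadURL('https://example.com/settings/download-your-data');

    // Perform the click the user would otherwise do by hand.
    await win.webContents.executeJavaScript(`
      document.querySelector('#request-archive-button').click();
    `);

    win.close();
  }
}
```

The nice property of this pattern is the one mentioned above: because the user logs in inside an ordinary browser window, the application itself never has to store or even see any credentials.

Lastly, email is modular too. So we've currently got Gmail integration that actually reads out some email for you, but we can use this to send out email as well. 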
So if an organization isn't already covered by Eon, we can basically just send out an automated email to them. And when they don't reply, start spamming them with reminders. This works for data removal requests as well. So if you want to delete data, we can just automate that process too. And we can make all of that a lot more inclusive. Last but not least, where does all the data go? It's basically a local Git repository for storing it. So we use native libgit2. Not only does that make storing subsequent data requests super efficient in terms of storage capacity, it also makes it really easy to diff the changes between various states of what is essentially your identity. Everything is open source. So you can go over to Eon.technology to get started with it. There are some docs there. And we're also on GitHub. Contributions are warmly welcomed. If you want to take Eon for a spin, let us know your feedback as well. So GitHub issues is definitely open for that. Or if you want to help out, then come over to GitHub and we'll figure something out. And one thing I wanted to highlight as well: while Eon has the potential of greatly increasing what little user experience there currently is in data rights, when it comes to data rights, it takes two to tango. So there's you and the organization that you're whipping into actually retrieving your data for you. And given that, I wonder whether we can make a similar leap forward for organizations as well, as this would massively increase the user experience for a regular user. So this is where the Open Data Rights API comes in. The premise is very simple. Every organization exposes a single endpoint for user-exercisable data rights. Third-party applications can then implement that endpoint and do data rights work on behalf of their users. This could be Eon, but could just as well be any other frontend. It doesn't really matter, as long as there is that single point of entry. And this makes all the frontends for exercising data rights much better in the end. This is sort of a double whammy. Eon makes it easier to get a complete picture of your data, while organizations can rely on the already existing frontends for all their data rights stuff. There's no need to homebrew it as an organization. So the first proposal for that is already available. It's on api.opendatarights.org, and I encourage you to go have a look and comment on it. All of that stuff is based on open source and well-known and implemented standards as well. So there's OAuth for authentication and there's schema.org for data typing stuff. A demo implementation of the Open Data Rights API is available on demo.opendatarights.org, just as well as the Eon implementation of it. So if you want to take it for a spin, you can just plop that URL into Eon and it will actually let you pull in some fake data. I was supposed to show you a demo, but in view of time I will skip it. All that stuff is open source and available as well, either on whitepaper.opendatarights.org or on GitHub specifically. Contributions are again welcomed. So come and have a look if you're interested in that sort of stuff. 
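To sketch what that single endpoint could mean for a client like Eon, here is a small hypothetical example. The path, the request body and the response shape are assumptions made up for illustration; the actual proposal is the one on api.opendatarights.org.

```typescript
// Hypothetical sketch of a client talking to a single data-rights endpoint.
// The "/data-rights" path, the bearer token and the response shape are
// illustrative assumptions, not the actual Open Data Rights API proposal.

interface DataRightsResponse {
  // Data points described with schema.org-style types, as mentioned in the talk.
  dataPoints: Array<{ type: string; value: unknown }>;
}

async function requestAccess(
  organisationBaseUrl: string,
  oauthAccessToken: string,
): Promise<DataRightsResponse> {
  // One well-known endpoint per organisation; OAuth handles who is asking.
  const response = await fetch(`${organisationBaseUrl}/data-rights`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${oauthAccessToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ right: 'access' }),
  });

  if (!response.ok) {
    throw new Error(`Data rights request failed: ${response.status}`);
  }
  return (await response.json()) as DataRightsResponse;
}
```

Any frontend, Eon or otherwise, could reuse exactly this kind of call, which is the double whammy mentioned above.

So with all of this work having been presented, it's probably time to come to a final conclusion. And I would like to propose the following. I want to start out with the definition that Matthias von Kahn and I wrote about a year ago, about how the concepts of privacy and user experience are intertwined: making privacy work means getting the details right for a wide range of users. 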
I believe getting it right makes the difference between control over data being technically in place versus actual meaningful control. So when it comes to making privacy work, we need to negotiate design, technology and legislation very well. Let's bring those forces closer together in the future. And in that vein, let's also consider Eon as basically an SDK for data rights, but one incorporating user experience, and a basis for compliance. If we get that balance right, organizations and citizens stand to gain. So in that privacy vein, we could apply this even better and more broadly. Reusable modules that get the technology, legislation and user experience just right where it needs to be. If you can't think of a place where this kind of standardization is needed more, it's probably time to think again, but I won't delve any deeper into that. That's it. Thank you very much for listening. Thanks to all the people that have made this journey possible. Those at the Eindhoven University of Technology, at SERV, at Bülow-Mürtt-Geddingen, and also a shout out to all the RC3 volunteers making all of this stuff work during their Christmas holidays, especially in these horrible times, particularly the folks over at the Remote Rhein Ruhr Stage in Monheim for providing me with this stage. Cheers and thank you. Thank you very much for this excellent talk. I was muted, apparently. Yeah, no worries. So yeah, there are... Let's go straight to the questions. I really liked the talk, by the way, and it's a really awesome project. I will probably personally look at it. So the first question, asked by FF, is, in terms of the GDPR: What exactly is machine readability? Is it digital, an OCR-compatible font, or something else? Yeah, and that's quite a difficult thing. So the GDPR does have some guidelines for it, but I don't think it goes a lot further than saying machine readable. And this is where the law and technology are kind of out of step with each other. So in that particular paper, that's Wong and Henderson if I'm right, they defined machine readable as actually being able to get data points in a JSON or CSV-like format. And yeah, that didn't pan out greatly for them. So at least what I expect is that if there's data in a sort of structured format, I receive it in a structured format, rather than printed out on a piece of paper that I have to enter into Excel or whatever database myself. I would say that that's the low bar. Thank you very much for that. Over in the IRC, Irgendwer61 asks: if your project, if Eon, is open for contribution, is there already a standardized guide for implementing new online services for data requests? Yes, so there are docs available. So if you go over to docs.eon.technology, I've made a short guide on how this process sort of works. So basically, I call it a provider. That's the piece of code that gets the data; it's actually a standardized class with a couple of methods that do that kind of work. And the fortunate thing is that the examples for Facebook, Spotify, Instagram and all that sort of stuff are available. So you can basically just model it on those, try it out locally, and then if it works for you, contribute it in a pull request. I will be very happy to take any of those. Okay, let's hope that they contribute to your project and add more possibilities to stick it to the big tech, as you put it. So there is a new question from the IRC. What is in development for the future of Eon? So what are the prospects? What are you looking forward to? 
Yeah, so I can look at it simply in terms of features. I think for Aeon there's still of course lots to do: better automation, making it easier to get to these organizations that haven't yet automated some aspects of doing data requests. So that's where that email stuff comes in, getting some extra services in, getting it a bit more user friendly. I've spoken to a lot of people who've used it who would for instance like some more context, like: I have this huge amount of data, will you tell me what I should particularly be looking at? That's probably an interesting one. But more broadly speaking, I think specifically the Open Data Rights API has some promise at looking at the industry from a bit of a larger perspective. So I would love to see if we can implement that somewhere, basically anywhere, and take it further from there. I think the open source community can be very helpful in that regard. I would love to see some standard established, such as the Open Data Rights API, to make all of that stuff just a little bit easier. So that's what I want to be working towards. Sure, sounds like a good way to go. And since we don't have any more questions, a reminder: if you want to ask your questions, you can either tweet them or toot them with the hashtag #rc3r3s, or go to our IRC channel on hackint, #rc3-r3s. So one question I have: you seem to have gone through quite the adventure by requesting your own data. What kind of data do you request from a bakery? Yeah, that's the funny thing, right? So of course, the doctor and the dentist sort of make sense. I think for me, a specific one was my hairdresser's, actually. And yeah, you wouldn't expect them to have any data. But nowadays, all of those small little retail shops have CRM systems. So if you do some business with them on the regular, you're probably in one of those systems and there's probably data collected about when your appointments are, or your email address, and, which is very helpful by the way, they sent me actual meeting requests via email. I love that feature, but that requires them to store some data as well. So I was just curious what they would send back, but I didn't manage to get through to them. So I never got to find out. That's a pity. But in this day and age, I don't think there are lots of companies left that don't store any personal information. Yeah, sure. I mean, even if it's the hairdresser that notes your number down when you make the appointment, right? Yeah, exactly. Even that's personal data. So there's also another question, again from Egenvea61. Will there be a single button to send every company a "please delete all my data" request? That could be. Like, do you want it? That's the question. I mean, since they're asking so many questions, I think that they might contribute it. It would be really interesting to classify the data, maybe, to say: I want all tracking data deleted, but my personal data, like my name and so on, you can keep. Yeah, that would also be interesting. Also, there is a question from our moderator, or from IRC, I don't know, somebody called Mod. Anyway, what kind of data do you request from a bank? Oh, from a bank. I'd need to dig deep to get into that, actually. So it's this USB stick, from the bank which I used for Tikkie, which is that service for doing basically peer-to-peer small-scale payments to friends and whatever.
Can't recall specifically what data I got back, but there's lots of data that banks gather, and your transaction details will be the least of your worries, probably, because banks also do your insurance. So they collect information on your age, your health, your occupation, all that sort of stuff that makes it easier for them to tailor their prices, products, etc. to you. Okay, so our time is running short and we have one final question from WebUser238. Are there any common traits about the 20% of companies that haven't replied? Size, sector, etc.? Yeah, so as I said, this is one of the areas where the big tech companies do have everything put together. So I got data back from every one of them except Facebook. I don't have an account there anymore, and they claim they didn't have any data, but there's no way for me to verify that. So that's also still a problem. And then, yeah, the non-responses: like I mentioned, my hairdresser's, I just didn't get to the right person in time, and I didn't have the time and energy to actually dig down that rabbit hole. But it's usually the smaller end of the spectrum, as the larger companies do have some fear of the fines they might find themselves getting. So at least there's some sort of compliance department over there. Okay, so I think that's it. Thank you very, very much for your talk. I'm sure people will find you on the interwebs to communicate with you. Do pull requests, you heard him, people, go and make this thing a standard. And again, thank you very much. And yeah, off back to the break. Bye bye. Cheers. Thanks.
|
In a time and space where even basic human interaction has to be facilitated by computer systems, we find ourselves in a web of systems that aggregate data in places we are unaware of. The vast scope of surveillance capitalism makes us yearn for protest and disruption. Yet, while fighting the power is a worthwhile cause, we suggest a complementary approach in wining and dining the power, and making bureaucracy do the dirty work for you. Strap in for a ride where asserting your God-given data rights isn’t only your duty as a citizen, but easy, accessible and fun. The right to access, right to rectification, right to erasure, right to restriction of processing, right of notification of processing, right to data portability and right to object are just some of the rights EU citizens enjoy since May 25th 2018, whether the website is hosted and operated in the EU or not. These rights should make it easy for citizens to get a grip on their personal information. We’ve taken the law into our own hands, and found out what it’s like exercising these rights. Spoiler alert: not good. We’ll walk you through the 58 requests we sent out, and the hilarious and dumbfounding ways they are set up currently. In addition to ventilating our frustration into the void, we make the case for automated data rights: retrieving, editing and removing your data should be as easy as changing your Facebook status. This vision is made real in the form of Aeon: a desktop app that gathers, visualises and allows for modification of your data.
|
10.5446/52360 (DOI)
|
Yes, welcome back, dear people, to the R3S. Welcome back to the R3S stage. The next talk deals with better justification for the web. Why is this important? Because when you type text justification on a website into a search engine and look at the top results, the general advice is: no, don't do it. This is quite hard to believe, as the number of websites is still growing every day on one hand, and on the other hand websites are also becoming more and more sophisticated and contain more functions. Johannes, our speaker today, is a communication designer, and his website, I checked it, is built with WordPress. However, unlike many other pages, his page doesn't look bad at all. Maybe that's because he simply went beyond typing text-align: justify and hoping for the best. In his talk he will shed some light, from a designer's perspective, on why justified text in a web browser is way behind the quality of justified text in professional DTP software. Even if you're neither a designer nor a typography nerd, stay here, because Johannes will have some valuable tips and tricks for you on how to fix the issues with text justification. Johannes, thanks for joining, welcome, and the stage is yours. Thank you very much for the opportunity to talk at this unique event. My name is Johannes Ammon, I'm a graphic designer and typography nerd from Germany. I'm living in the city of Mainz, where Johannes Gutenberg started a worldwide media revolution about 500 years ago. In this talk I'm going to explain how we can learn from him when dealing with justified text on the web. So let's start. Probably every one of you has had this experience once in a while. You have a couple of text columns and you want to align them on both sides so it looks nice and clean. And what do you do? You just type in text-align: justify, and boom, it looks like shit. You are presented with this kind of mess. There are huge white spaces inside the columns and it doesn't look very professional. And it's not even just an aesthetic issue: readability is also affected in a very bad way. So this is not good. Actually, justification on the web is so bad that professional designers will always advise you not to use it at all. So this is the current state in 2020: just don't use it at all. Even though I can imagine many benefits of justified text: modular layouts, for example, or better use of the given space at smartphone sizes, flexible designs. And it just looks nice to have a straight right margin. So I think it's a little bit sad that we cannot make use of all the achievements of digital typesetting on the web. So, there are several reasons for the bad justification quality on the web. First of all, there is no way of doing manual adjustments like custom line breaks or something like that. In print design, for example, you can do this at the end, after the typesetting; on the web, in a highly responsive environment, this just makes no sense. You could hardcode something into the text block, but it probably makes no sense. So this is an important tool you don't have in typesetting on the web, and we have to find other solutions. The second reason is hyphenation. There was no proper hyphenation on the web for a very long time. Until a few years ago, browser support for hyphens: auto was not really good. And still, automatic hyphenation works only with a few privileged languages. So this was a really big issue for a long time.
The third reason is that browsers don't make use of advanced line breaking algorithms that have been pretty common in professional desktop publishing software like Adobe InDesign for decades now. So again, print is way ahead in typographic quality. So, I did a lot of research on the history of justification, both in print and digital design. There were several points in digital history where we had great concepts and working solutions for beautiful justification, but none of them eventually made it to the web. So I asked myself: why is that? How could we improve justification on the web? I want to present four different approaches. First, I will talk about the current state of hyphenation. Then I will explain the crucial topic of line breaking and why advanced algorithms are not implemented yet. Then I will pitch some ideas for something I call soft justification. And last but not least, the fun stuff: how to improve justification via variable fonts and OpenType ligatures. So, let's go back to our paragraph. The first thing you will do to improve the appearance of these text blocks is to add a line with hyphens: auto. And you see, it will improve your text significantly, but there are still a lot of white holes. And if you look closely, the hyphenations are pretty uncommon. Since hyphenation is highly dependent on language rules built into the browser, you need to set a language for the document first, or in this case for the element. So you need to go into your markup and add the language tag. And you see, the hyphenation changes again. So, after adding proper hyphenation, we have a much better typeset, even though there is still a lot of irregular white space inside the columns, so not satisfying. Well, hyphens: auto was a huge step forward. Nevertheless, I would like to have way more control over some details, such as the number of consecutive lines with hyphens, or the ability to prevent hyphenation of the last line in a paragraph. So, luckily, there is some news on the horizon. The working draft of CSS Text Module Level 4 mentions four additional properties. The most exciting one from my point of view is hyphenate-limit-zone. It gives you the ability to define a maximum amount of unfilled space that may be left before hyphenation is even triggered. This will reduce the number of hyphenations significantly in those cases where it's not really a benefit. So this is great stuff, I'm looking forward to it. Hyphenation is done, at least for now; we have made some good progress in the last years. Let's look into the other topics, which are maybe more relevant for now. To understand the benefits of advanced justification algorithms, we first have to understand the basics of line breaking. Here we have a text column. The algorithm starts placing the characters next to each other, piece by piece, until the end of the line is reached and the next word doesn't fit anymore. Then it goes back to the nearest breakpoint, in this case a space character, removes everything after the breakpoint and inserts the line break. On the next line the typesetting continues the same way: character by character, line break, and so on. The result is line breaks that are not chosen very well. To do better, let's look at what Donald Knuth and Michael Plass describe in their famous paper on breaking paragraphs into lines.
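The greedy, first-fit approach just described can be sketched in a few lines of C. This is only an illustration of the idea, not browser code; the word list and the line width are made up for the example, and "width" is simply measured in characters rather than pixels.

```c
#include <stdio.h>
#include <string.h>

/* Greedy first-fit line breaking: keep placing words on the current line
 * until the next one no longer fits, then break immediately. */
static void break_greedy(const char **words, int n, int line_width)
{
    int used = 0;                        /* characters already on this line */
    for (int i = 0; i < n; i++) {
        int w = (int)strlen(words[i]);
        if (used == 0) {                 /* first word always starts the line */
            printf("%s", words[i]);
            used = w;
        } else if (used + 1 + w <= line_width) {
            printf(" %s", words[i]);     /* word plus one space still fits */
            used += 1 + w;
        } else {
            printf("\n%s", words[i]);    /* doesn't fit: break the line here */
            used = w;
        }
    }
    printf("\n");
}

int main(void)
{
    const char *text[] = { "justified", "text", "on", "the", "web", "is",
                           "still", "far", "behind", "print", "typography" };
    break_greedy(text, sizeof text / sizeof text[0], 18);
    return 0;
}
```

The important point is that the decision for each line is made without ever looking ahead, which is exactly why one awkward word can ruin the spacing of a whole line.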
At the beginning of the paper they point out that for a computer it is no problem to break text into lines; the hard problem is to find the right breakpoints. And that is exactly what their algorithm does: it finds the optimal breakpoints for a given paragraph. So what the algorithm does is look at the whole paragraph, consider the possible line breaks, and find the best overall solution. It takes into account: if I break after these words in the first line, what does that mean for the following lines, and will I end up with a bigger problem in the last lines? At the end of this process there is a best solution for breaking this paragraph into lines. And that is how it works in software like Adobe InDesign: if you go into the justification settings and activate the paragraph composer instead of the single-line composer, the software will consider the whole paragraph and find the best solution for your justification. As of 2020, none of the major browsers implements an advanced line-breaking algorithm like the Knuth-Plass algorithm. So why is that? The first argument you hear is performance. This algorithm costs a lot more than the simple one; it is quadratic in the paragraph length, so there could be problems with very long files and long paragraphs. I asked Bram Stein about this, who by the way gave a great talk on this topic at the Robothon conference in 2018. He says that performance is not a very strong argument anymore: processing power has become much better over the last ten years, and there are some very fast and very general JavaScript implementations of the algorithm. He thinks it is more an unwillingness to change something so fundamental in a browser layout engine, and that it is really difficult for the browser engine makers to make that change. So, luckily, there is also hope on the horizon here. When I read the CSS Text Module Level 4 draft, I found a property with a beautiful name, text-wrap: pretty, and it specifies that the user agent may trade speed for better layout and is expected to consider multiple lines when making break decisions. That is really exciting. Advanced line breaking is not here yet, but maybe there is a chance that it will be built into a browser in the future. I would love to see that happen. The next topic is something I call soft justification. If you ask designers about the best justified text in history, they all refer to Johannes Gutenberg. But if you look closer at his Bible, you see that the right margin is anything but straight. Rather, it is more like an optical margin, and sometimes lines stick out. One of the findings of my research was that Gutenberg actually told his workers to pay more attention to the spaces within the lines than to a perfectly straight margin. The general appearance of Gutenberg's typesetting, the reason it looks so elegant, is more a matter of even spacing within the lines than of a very straight margin. Our algorithms today have a very binary way of handling this: if a word is just one pixel longer than the line, the word is broken or pushed to the next line. There is no tolerance, as there was in Gutenberg's Bible.
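Coming back to the Knuth-Plass idea for a moment: a heavily simplified sketch of the "consider the whole paragraph" approach is shown below in C. It minimizes the sum of squared leftover space over all lines with dynamic programming, with the last line left free, which is only a toy version of the real badness function. The actual Knuth-Plass algorithm additionally models stretchable and shrinkable spaces, penalties and hyphenation points, none of which appear here, and the sketch assumes no single word is wider than the line.

```c
#include <limits.h>
#include <stdio.h>
#include <string.h>

#define MAXW 64

/* Total-fit line breaking (toy version): choose breakpoints that minimize
 * the sum of squared trailing whitespace over the whole paragraph. */
static void break_total_fit(const char **words, int n, int width)
{
    long best[MAXW + 1];   /* best[i]: minimal badness for words[i..n-1] */
    int  next[MAXW + 1];   /* next[i]: first word of the following line  */

    best[n] = 0;
    for (int i = n - 1; i >= 0; i--) {
        best[i] = LONG_MAX;
        int len = -1;                      /* line length incl. spaces */
        for (int j = i; j < n; j++) {
            len += 1 + (int)strlen(words[j]);
            if (len > width)
                break;                     /* words i..j no longer fit */
            long slack = width - len;
            long cost = (j == n - 1) ? 0 : slack * slack;  /* last line free */
            if (best[j + 1] != LONG_MAX && cost + best[j + 1] < best[i]) {
                best[i] = cost + best[j + 1];
                next[i] = j + 1;
            }
        }
    }

    for (int i = 0; i < n; i = next[i])    /* print the chosen breaks */
        for (int j = i; j < next[i]; j++)
            printf("%s%s", words[j], j + 1 == next[i] ? "\n" : " ");
}

int main(void)
{
    const char *text[] = { "breaking", "paragraphs", "into", "lines", "is",
                           "an", "optimization", "problem", "over", "the",
                           "whole", "paragraph" };
    break_total_fit(text, sizeof text / sizeof text[0], 20);
    return 0;
}
```

Even this toy version shows the key difference: a break early in the paragraph is only chosen if it also leads to good spacing further down.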
In my opinion, the spaces between the words are much more important than a perfectly straight edge. I would love to see the implementation of a tolerance zone in the line-breaking algorithm, similar to the hyphenation zone I showed earlier; that would be something very exciting. I would also like more control over the priorities in line breaking, so that web designers can decide whether they want to prioritize a straight margin or fewer hyphenations. And in general, I am a fan of less binary, more human approaches to algorithms, so it doesn't always have to be one or zero. Soft justification is a term a friend of mine came up with. It doesn't really exist yet, but maybe some of my ideas will find their way into a web standard for better typography. Who knows? The last topic I want to cover here, the fun stuff, is how to improve justification on the web with variable fonts and OpenType ligatures. If we go back to Johannes Gutenberg and look at his Bible, we see that he cheated a little bit. He only needed 26 different letters, but in fact he had a total of around 290 glyphs: all kinds of combinations and variations, like ligatures or the same letter without the serif, which allowed him to squeeze a few more words into a line or to make them wider so that they filled the line. So he had the perfect tool to make the lines come out even in the end. This inspired some pioneers of digital typesetting in the 1990s to develop solutions that condense and extend the type itself in order to fill the lines better and to achieve better typesetting in general. The technology of glyph scaling was quite advanced. The software by Hermann Zapf actually took the stroke weight of the letters into account, so the weight, the grey value of the type, stayed the same. But unfortunately this kind of glyph scaling simply got lost. Eventually Adobe took over the patents, and the technology ended up in Adobe InDesign. I don't know what happened along the way, but if you turn on glyph scaling in Adobe InDesign, the letters just get distorted, which is why nobody ever uses it. But you guessed it: there is a solution on the horizon, and it is called variable fonts, or OpenType variations. This is a new technology that combines several different styles of a typeface in a single font file. Type designers can draw different masters of a typeface, and the browser can interpolate between these instances on the fly, without distortion. That is a very beautiful thing. I think it has been available since 2016, and people are using it; most modern browsers support it by now. That is a very big step. I am not the first one with the idea of improving justified text on the web with variable fonts. Bram Stein, for example, achieved excellent results with his experiments, which he describes in a very technical write-up. I am a type designer, so I asked myself how to use this technology in the best possible way, and I designed a variable font specifically for this research. First I systematically analyzed all the letters for their ability to shrink and extend, and of course there are big differences. And I ended up with this little trick.
The stretching and condensing in this variable font is a bit asynchronous: the letters grow and shrink only within their own abilities. As you can see, an i is not very flexible, but if you look at the narrow version of the w, it has a bit more air around it, so there is more room to squeeze it. In the end the a, for example, takes on a completely different shape to gain more flexibility in width. And as a nod to Johannes Gutenberg I also added some experimental ligatures; you can argue whether that is a good idea, but I think it follows directly in his tradition. So that was my variable font. For the demonstration I built a small prototype that works with my new parameters. It simulates the justification, since unfortunately we don't have the possibility to hook into the real line-breaking engine in the browser. So let me just show you the results. This is a paragraph set without any of the extra parameters. And now I add hyphenation, the variable font and my experimental OpenType ligatures, and we can see the improvement. Nice, isn't it? "Digital typography will set the future aesthetics in typesetting. With all the programs available today, there is no excuse anymore for mediocre typography." The funny thing is that Hermann Zapf wrote this in a paper from 1993. The technical possibilities have been there for a very long time; we just have to use them, for better typography on the web. Thank you for listening. It was a great honor for me to present here. If you are interested in more information on this topic and in my resources, I have put everything together for you on my website, so check it out. Thanks. Johannes, wow, thank you very much. I practically live on the web, and it's really hard to believe that it is still so far behind as a technology. I have a question for you from the chat: is there a better open source font editor than FontForge? Yes, thanks, and thank you very much for the kind introduction; I'm very happy that I can present my talk here. Thanks for the question, I saw it in the chat and was thinking about it. I'm not sure whether there is a good open source alternative, but I have to say that the font editors Glyphs app and RoboFont also have excellent communities where things are discussed and support is provided. And since it is a small community, it is always good to support the software creators. I can recommend both of them. There is also an Adobe Illustrator plug-in for people who just want to play with the design of type; they can simply draw in Illustrator instead of a font editor. So unfortunately I don't know a good open source alternative, but I can recommend RoboFont, the Glyphs app and the Illustrator plug-in for beginners and starters; they are easy to use. Good work. I'm also very curious how long it took you to put together that little, let's say, GUI or wizard that you showed at the end of the talk, where I just have to click, click, click and get all the hyphenation and so on. And please don't tell me that was a huge amount of work.
No, not really, because I'm not a very good coder; I'm a designer and I just hacked the code together. It's also built with really poor technology: it's jQuery, and I just polished it up a bit for this presentation. But as you can see, it's not really great technology. I think there are good solutions out there; check out the resources from Bram Stein, he has done very good things. And I also approached it from a designer's perspective: I wanted to try out what is possible from that point of view. It was a lot of fun. I can imagine, and thank you for the effort. I think it will help many, many people who are sitting out there trying to develop a good website and to take on these challenges. A second question came in: what about the algorithm of the LaTeX typesetting system, which also takes subsequent lines into account? Yes, the line-breaking algorithms of LaTeX, or of TeX systems in general, are really at the forefront in this game. It goes back to Hermann Zapf and all the things that happened in the nineties, and a lot of that work was connected to the TeX system. I'm not really an expert on these algorithms or on the TeX system, but I know that a lot of what was developed in the nineties got lost again. They tried to take the algorithm even a step further and to consider subsequent lines, so that the spacing of one line and the spacing of the next line relate to each other, because otherwise it becomes really obvious. They tried to feed that into the algorithm, and it makes for better text. I also have to say that my live demo runs without a Knuth-Plass algorithm; I wasn't able to combine the two. My demo only uses variable fonts. If you combine that with a Knuth-Plass algorithm and the consideration of subsequent lines, we can get even better results. That's a good point. Better justification for the web is a fantastic link between the coders, the nerds who love code and love algorithms, and the others who are interested in design or marketing or in people. Do you have a few more tips and tricks to close with? You already showed some. I have no further questions for you from the audience at this time. Thank you. I really enjoyed exploring this intersection between design and code. I think there is a lot of potential there for better typography with the technical abilities we have today. I have one more tip: I have collected a list of many great projects by people who work at this intersection of design and code, a list of resources, and I have linked it on my website. I'm not sure whether you can see it here, but it's on my website, FinalType, and there you can check out all the other projects by coders and typographers. I think it's a really interesting topic. Thank you for this fantastic talk, a great topic. The technology worked, stable streams, stable internet connection, what more could we want? Thank you very much. If there are any more questions, maybe send them through our communication channels and we will stay in touch with the audience. Thank you very much. Have a great day and enjoy the rest of the conference.
|
The quality of justified text in the web browser in 2020 is still way behind the quality of justified text set in professional dtp software. Why is that? In my talk I explore the reasons from a designer’s perspective. Furthermore I present different approaches for improving the quality of justified text on the web by using advanced line-breaking algorithms and variable fonts. The quality of justified text in the web browser is way behind the quality of justified text set in professional dtp software. There are several reasons for this. Some are systematic (manual line-breaking is impossible in a fluid and responsive environment), some are technical limitations: 1) hyphenation has been a problem for a very long time. 2) There is currently no implementation of advanced line breaking algorithms like the knuth-plass-algorithm in any of the browsers. Why is that? In my master thesis I explored several ways to improve justified text on the web. I achieved significant improvement by applying a javascript-implementation of the knuth-plass-algorithm (by Bram Stein) and by implementing additional parameters such as variable fonts and opentype ligatures. With my short presentation I want to raise awareness for the ongoing problems with justification on the web and demand action from the browser makers.
|
10.5446/52368 (DOI)
|
Hello and welcome to the RC3, the remote chaos experience, on the R3S stage. So today I have the honor of introducing Jot-N with his talk about porting Linux to your favorite obscure ARM SoC. But before we go to that, we want you to tell us about things that are happening in the digital world, some things that you initiated or some things that you know about. For that, please write us an email at newsshow@rc3.world or write us on our blog, newsshow.rc3.world. Furthermore, this talk will be translated into German. Also, if you have questions, we want you to contact us and tell us the questions, because due to the infrastructure right now I cannot see you standing in front of a microphone. So you can either go to hackint, to the channel #rc3-r3s, or you can write us on Twitter or Mastodon. The hashtag for that would be rc3r3s, where the threes are digits. If the stream happens to cut out, we also stream to Twitch and YouTube; the link can be found by searching on the corresponding platform for remote Rhein-Ruhr stage. If you are unsure how that is written, you can see it in the various titles of this video. So about Jot-N: I asked him what I should tell about him, and I can only tell you what I know about Jot-N. He is part of the Chaos community and he certainly loves to hack some hardware. So give it up for Jot-N and his talk, Porting Linux to your favorite obscure ARM SoC. As he said, I'm Jot-N, and the slides have a bit of delay. Sometimes I find a piece of hardware that has embedded Linux on it, which is cool, because we know Linux, we can do things with Linux, but then it has a really old version of the Linux kernel. This, for example, is a management controller in a Supermicro server mainboard, the thing that does IPMI and stuff, but it runs kernel version 2.6.17, which was released in 2006, 14 years ago. I'd rather have a modern kernel. So how do you go about that? First of all, you should collect basically all the information you can find about the chip you're interested in: reference manuals, data sheets, source code that was released by the vendor. Due to the license of the kernel, the GNU General Public License, GPL, vendors have to publish their Linux source code, but sometimes you have to send them a request first. And sometimes you have to be really persistent with your request so that you actually get the source code. It's also useful to look at other chips which might be similar. For example, I searched for some of the register names in this chip and I found a data sheet with descriptions for a different chip which uses the same interrupt controller and even the same Ethernet MAC. Finally, you should consider writing your own documentation, even if it's just to collect all the other information sources that you found in one convenient place. The next thing to do is looking for serial ports. Most embedded devices that you can find have a serial port somewhere. It's very useful, especially on embedded devices that aren't particularly secure: you can often just get a root shell if you connect a serial adapter to the right pins. They're usually not populated, so you have to search a bit. In this example, it's two pads of unpopulated resistors. It's really small, but if you use an oscilloscope, you can find the right signal and then find your serial port. You can also find a bootloader shell sometimes. The bootloader is basically the first thing that runs when you power on anything.
It's responsible for basic hardware initialization, making sure that RAM is usable, and also for loading the actual operating system. Bootloaders often have debug facilities in them, so you can do interesting things with them. Sometimes they have a menu of interesting commands like peek and poke on memory, and loading files from storage like SD cards or flash. You can even download data over network protocols like TFTP. TFTP is terrible, but it works. Usually when you have control over such a bootloader shell, you can also tell it to run your own code. That's obviously very interesting if you want to run your own Linux, but you should start simple. Write the simplest possible program that can demonstrate code execution. For example, these four ARM instructions: they just load the address of the serial port, the UART, load a character, and then repeatedly write that character to the serial port. What it does, you can see on the right side: it just screams. Once you have that down, you could basically start running Linux. It won't be useful, but it will print something. Sometimes, if the bootloader doesn't have all these interesting commands, it's useful to add your own little program that can do peek and poke and maybe some scripting and running code and whatever you need, just so you have an interactive environment to experiment with the hardware. The next step is to configure Linux. The Linux kernel has this configuration system called Kconfig, where you can configure all kinds of things. If you just want a good basic configuration that's known to work on most systems, you use make ARCH=arm multi_v7_defconfig, or a different defconfig for the older architecture versions. Then you can use make ARCH=arm menuconfig (or nconfig) to configure all these options interactively. Because the defconfigs usually include all kinds of drivers, you most likely want to disable some of those drivers. There are a few things that can help you in early debugging. For example, the CONFIG_DEBUG_LL option: if you enable that, you can also specify the UART base address and register width, and you have a usable serial port very early. There's an option for earlyprintk, which just means you get log messages earlier during boot. There's another option to specify a built-in kernel command line, so the bootloader doesn't have to provide a reasonable command line. Once you've configured and built Linux, you can use your bootloader to load the zImage file into RAM and run it. zImage is a self-extracting, compressed version of the kernel. But once you do that, you'll get something like this: it says invalid DTB or unrecognized/unsupported machine ID, which basically means you need a thing called a DTB. DTB means device tree blob. The device tree is a special data structure that describes to the kernel how the hardware is structured, at which addresses you can find which peripherals. How to write the device tree is defined in the device tree bindings, in Documentation/devicetree/bindings in the kernel source. You write those device trees in DTS files, device tree source, that compile to DTB files, device tree blobs. If you want to use it now, you can enable another kernel option, which is CONFIG_ARM_APPENDED_DTB. Then you just concatenate the zImage and the device tree blob together into one file and give it to your bootloader so it runs them together. The kernel will then just find the device tree and use it, and it will boot a little further. Now, I did this and I got a problem.
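Before looking at that problem: the minimal "scream" test from a moment ago can be sketched in C as well as in four ARM instructions. This is only a sketch; the UART base address and the transmit register offset are placeholders that depend entirely on your SoC and have to come from the datasheet or from the vendor kernel sources.

```c
/* Minimal bare-metal "does my code run at all?" test: write one character
 * to the UART data register forever. Build freestanding (-nostdlib
 * -ffreestanding), link at the address your bootloader loads to, and jump
 * to it from the bootloader shell. */

#define UART_BASE 0xb8000000u   /* placeholder: take this from the datasheet */
#define UART_TX   0x00u         /* placeholder: transmit/data register offset */

static inline void mmio_write8(unsigned long addr, unsigned char val)
{
    *(volatile unsigned char *)addr = val;
}

void _start(void)
{
    for (;;)
        mmio_write8(UART_BASE + UART_TX, 'A');   /* scream */
}
```

If a stream of characters shows up on your serial adapter, you know your toolchain, load address and UART guess are all correct, which is exactly the confidence you want before throwing a whole kernel at the board.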
It booted about this far and then just stopped for no apparent reason. The reason was a bit difficult to debug. It turned out that the vendor kernel on this board uses only 100 megabytes of the RAM that is actually installed. The board actually has 128 megabytes, but some of it is used for something else that I haven't figured out. So that was really difficult to debug, but once I reduced the amount of memory that is used to those 100 megabytes, it worked without a problem. It got a little further, until the point where it complains about no matching timers being found. Linux needs timer interrupts so it can schedule work, so it can switch between different tasks, for example when you run two programs in parallel. For timer interrupts to work, you need two things. First, you need a driver for your interrupt controller. An interrupt controller is a little piece of hardware that just collects the interrupts from different peripherals on the chip and signals to the CPU that there was an interrupt. The code running on the CPU can then ask the interrupt controller which interrupt it was and acknowledge that the interrupt was received, so it can handle the interrupts one by one. With older ARM CPUs, you had custom hardware from the chip vendor, in this case Nuvoton. With newer ARM CPUs, you will usually find that the generic part is used, the ARM GIC, the generic interrupt controller. The next thing you need in order to get timer interrupts is a driver that tells the timer hardware to generate interrupts; that's a clockevent driver. There's something related in Linux called a clocksource driver, which is responsible for asking the hardware what time it is, on a nanosecond scale. Then there's something else that also has clock in the name, which is the clock subsystem, called CLK. It manages basically everything that has a clock frequency. When you have a CPU that can run at different clock speeds on an ARM system where Linux runs, the different frequencies are managed through the clock subsystem. Sometimes when you specify a timer in the device tree, it wants a reference to the input clock so it can scale the frequency correctly, so it knows, for example, which value to program to wait for two seconds. When you don't know how the clock hardware works, but you know the clock frequency, you can just make a dummy node in the device tree, say compatible = "fixed-clock", enter the clock frequency, and that will work for the moment. Now, once you have timer interrupts, Linux should boot to a panic, which is a success: we didn't have a panic before. It panics now because it says VFS: unable to mount root fs. It needs a file system, the root file system, where it can find programs to run. Everything you do on a Linux system is basically done in programs that are not the kernel itself, but user space programs like init and a shell and wget or curl. Now, where do you store that root file system? At this point you don't have a driver for any kind of storage, like SD cards or, I don't know, SATA. So you need to load the file system together with the kernel in something that's called an initramfs, an initial RAM file system. And if you find the right kernel configuration option, you can build this root filesystem into the kernel, so in the end it's just this one file, zImage plus DTB, that you can load with the bootloader and run.
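Stepping back to the interrupt-controller part for a moment, the "ask which line fired, handle it, acknowledge it" loop can be sketched in C as below. This is deliberately a bare-metal style sketch, not the real Linux irqchip API, and the register layout is an invented placeholder for a hypothetical controller; on real hardware the offsets come from the datasheet or from reverse-engineering the vendor kernel.

```c
#include <stdint.h>

/* Hypothetical interrupt controller registers (placeholders). */
#define INTC_BASE    0xb8002000u
#define INTC_PENDING 0x00u   /* bitmask of pending interrupt lines */
#define INTC_ACK     0x04u   /* write a bit here to acknowledge that line */

static inline uint32_t mmio_read32(uintptr_t addr)
{
    return *(volatile uint32_t *)addr;
}

static inline void mmio_write32(uintptr_t addr, uint32_t val)
{
    *(volatile uint32_t *)addr = val;
}

/* Called from the low-level IRQ entry: find out which lines fired,
 * dispatch to their handlers, and acknowledge them one by one. */
void handle_irq(void (*handlers[32])(void))
{
    uint32_t pending;

    while ((pending = mmio_read32(INTC_BASE + INTC_PENDING)) != 0) {
        for (int line = 0; line < 32; line++) {
            if (!(pending & (1u << line)))
                continue;
            if (handlers[line])
                handlers[line]();                   /* e.g. the timer tick */
            mmio_write32(INTC_BASE + INTC_ACK, 1u << line);
        }
    }
}
```

In a real port, this logic ends up inside an interrupt controller driver that Linux binds to via the device tree, and the clockevent driver then uses one of these lines to deliver the periodic tick.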
Now, if you want to do this manually, just collecting all the files that you need in a system is a bit tedious, but there are good options like Buildroot, for example; go to buildroot.org for more information. There you can just configure a Linux user space with all the features you need. When you boot to user space, it's useful to specify on the kernel command line console=ttyS0,115200. It says that the console where the interaction happens is the first serial port and that it runs at 115200 baud. Otherwise the kernel will just assume the wrong baud rate, and that's a bit difficult to debug. Another useful driver is Ethernet. It's a bit more complex than the previous things like interrupt controllers and timers, just because the hardware is usually a little more complex, but it's very useful. Once you have Ethernet running, or some other kind of networking, you can just use wget to download your kernel and then use kexec to run it. So you don't have to deal with your bootloader anymore; you can basically use Linux as your bootloader now. Of course there are many other functions in a typical system, but once you have the basic drivers, like for serial ports and timers and interrupts and stuff like that, it gets easier to just try different things, and you can let different people work on different parts of the chip because they don't depend on each other anymore. There are a lot of things that can go wrong. If you use the wrong Kconfig settings, it might happen that all your programs crash. That happened to me. So it's better to start from a known-good defconfig file and not change any of the obscure things, unless you just want to try out what happens when you change them. If you use RAM that is not actually available, weird things will happen, and it's difficult to debug. If you don't specify a baud rate, you will get 9600, which these days is probably not what you want and not what you expect. I got stuck a dozen times during this project. In the end, it still turned out to be useful; I got through the problems that I had. So don't give up, you can do it. When your kernel port works to some degree, you might think about bringing it to the official releases from kernel.org, which are released by Linus Torvalds and Greg Kroah-Hartman. Contributing to the upstream mainline Linux kernel works with patches and mailing lists. You send an email with your change to the right mailing list, they will try to give you a thorough review and tell you what's wrong with it, then you send a second version which has all these things fixed. It iterates a bit, but in the end it should get accepted. It's a bit tedious, but in the end this is what should result in good quality code in the kernel. If you're interested in upstreaming, I recommend that you watch Greg Kroah-Hartman's talk on how to write and submit your first kernel patch. Thank you for listening. All of the example code used in this project can be found on GitHub, under neuschaffer. For general discussions about kernel development, I recommend the ##kernel IRC channel on Freenode. You can also find me in the usual IRC channels later when I get home. If there's any time for questions, I could answer questions now. Also, thanks to the stage crew who made it possible to speak here. Okay, I'll translate the question to English because the talk was in English: what were my first steps on ARM and on Linux on ARM? The first things I did on ARM were basically some hacking on my Nintendo Wii U, where I ran my own code in the service processor, which is an ARM CPU.
My first steps on Linux on ARM were that I tried to port mainline Linux to an ebook reader that had a broken screen. Okay, now the video is gone again. What? Do you hear me? Yes, I hear you. Oh, wonderful. Okay, cool.
|
I will go through the steps it takes to make Linux run on an Arm System-on-Chip where it previously didn't run, or only in a terribly outdated vendor fork. Sometimes you find yourself with a piece of hardware that runs Linux, but only a very outdated version of it. In such cases it can be interesting to port a modern version of Linux. I will go through the configuration and drivers you need to write in order to get Linux booting on an Arm SoC: - Early serial port debugging - Devicetree - Interrupt controller drivers - Timer interrupts - etc.
|
10.5446/52058 (DOI)
|
So, for the next talk, I have Jo Van Bulck and Fritz Alder from KU Leuven in Belgium and David Oswald, a professor of cyber security in Birmingham. They are here to talk about the trusted execution environments you probably know from Intel and so on, and why you should probably not trust them all the way, because there is software involved and it has its flaws. So they're talking about ramming enclave gates, which is always good: a systematic vulnerability assessment of TEE shielding runtimes. Please go on with the talk. Hi everyone, welcome to our talk. I'm Jo from the imec-DistriNet research group at KU Leuven. Joining me today are Fritz, also from Leuven, and David from the University of Birmingham. And we have this very exciting topic to talk about: ramming enclave gates. But before we dive into that, I think most of you will not know what enclaves are, let alone what these TEEs are. So let me first start with an analogy. Enclaves are essentially a sort of secure fortress in the processor, in the CPU: an encrypted memory region that is exclusively accessible from the inside. And what we know from the long history of fortress attacks and defenses, of course, is that when you cannot take a fortress because the walls are high and strong, you typically aim for the gates, right? That's the weakest point in any fortress defense. And that's exactly the idea of this research; it turns out to apply to enclaves as well. We have been ramming the enclave gates: we have been attacking the input-output interface of the enclave. A very simple idea, but with very drastic consequences, I dare to say. So this is sort of the summary of our research, with over 40 interface sanitization vulnerabilities that we found in over eight widely used open source enclave projects. We will go a bit into detail on that in the rest of the slides. A nice thing to say here is that this resulted in two academic papers to date, over seven CVEs, and altogether quite some lengthy responsible disclosure and embargo periods. OK, so I guess we should talk about why we need such enclave fortresses anyway. If you look at a traditional operating system or computer architecture, you have a very large trusted computing base. For instance, on the laptops that you most likely use to watch this talk, you trust the kernel, you trust maybe a hypervisor if you have one, and the whole hardware under the system: CPU, memory, maybe the hard drive, the trusted platform module, and the like. So the problem here is that with such a large TCB, trusted computing base, you can also have vulnerabilities basically everywhere, and also malware hiding in all these parts. The idea of enclaved execution, as we find it for instance in Intel SGX, which is built into most recent Intel processors, is that you take most of the software stack between an actual application, here the enclave app, and the actual CPU out of the TCB. So now you only really trust the CPU, and of course you trust your own code, but you don't have to trust the OS anymore. SGX, for instance, promises to protect against an attacker who has achieved root in the operating system, and even, depending on who you ask, against for instance a malicious cloud provider. So imagine you run your application in the cloud: you can still run your code in a trusted way, with hardware-level isolation, and you have attestation and so on, and you no longer really have to trust even the administrator.
So the problem is of course that attack surface remains. Previous attacks, and some of them I think will also be presented at this remote congress this year, have targeted vulnerabilities in the microarchitecture of the CPU. So you are attacking basically the hardware level: you have Foreshadow, you have microarchitectural data sampling, Spectre and LVI and the like. But what less attention has been paid to, and what we'll talk about more in this presentation, is the software level inside the enclave, which I hinted at: there is some software that you trust, and now we look in more detail into what actually is in such an enclave from the software side. So can an attacker exploit any classical software vulnerabilities in the enclave? Yes, David, that's quite an interesting approach, right? Let's aim for the software. So we have to understand what the software landscape out there looks like for these SGX enclaves and TEEs in general. That's what we did; we started with an analysis, and you see some screenshots here. This is actually a growing open source ecosystem with many of these runtimes, library operating systems and SDKs. And before we dive into the details, I want to dwell on the common factor that all of them share: what is the idea of these enclave development environments? What any trusted execution environment gives you is this notion of a secure enclave oasis in a hostile environment, and you can do secure computations in that green box while the outside world is burning. As with any defense mechanism, as I said earlier, the devil is in the details, and typically at the gate. So how do you mediate between that untrusted world, where the desert is on fire, and the secure oasis in the enclave? The intuition here is that you need some sort of intermediary software layer, which is what we call a shielding runtime. It kind of makes a secure bridge to go from the untrusted world to the enclave and back. And that's what we are interested in: to see what kind of security checks you need to do there. So it's quite a beautiful picture: you have on the right the fertile enclave, and on the left the hostile desert, and we make this secure bridge in between. And what we are interested in is: what if it goes wrong? What if your bridge itself is flawed? To answer that question, we look at that yellow box and we ask what kind of sanitizations, what kind of security checks, you need to apply when you go from the outside to the inside, and back from the inside to the outside. And one of the key contributions that we have built up in the past two years of this research, I think, is that the yellow box can be subdivided into two smaller, subsequent layers. The first one is the ABI, the application binary interface, the very low-level CPU state. The second one is what we call the API, the application programming interface; that's the kind of state that is already visible at the programming language level. In the remainder of this presentation, we will guide you through some relevant vulnerabilities at both these layers to give you an understanding of what this means. So first, Fritz will guide you through the exciting low-level landscape of the ABI. Yeah, exactly. And you just said it's the CPU state, and it's the application binary interface. But let's take a look at what this actually means. It means basically that the attacker controls the CPU register contents, and we have to deal with that.
And that means that on every enclave entry and on every enclave exit, we need to perform some tasks so that the enclave and the trusted runtime have a well-initialized CPU state, and so that the compiler can work with the calling conventions it expects. So this is basically the key part: we need to initialize the CPU registers when entering the enclave and scrub them when we exit the enclave. We can't just take anything the attacker gives us as a given; we have to initialize it to something proper. So we looked at multiple TEE runtimes and multiple TEEs, and we found a lot of vulnerabilities in this ABI layer. And one key insight of this analysis is basically that a lot of these vulnerabilities happen on complex instruction set processors, so on CISC processors, and basically on the Intel SGX TEE. We also looked at some RISC processors, and of course it's not representative, but it's immediately visible that the complex x86 ABI seems to have a much larger attack surface than the simpler RISC designs. So let's take a look at one example of this more complex design: the x86 string instructions, which are controlled by the direction flag. There is a special x86 rep prefix that basically allows you to perform string memory operations. So if you do a memset on a buffer, this will be compiled into a rep string instruction. And the idea here is basically that the buffer is read from left to right and, well, written over by memset. But the direction flag also allows you to go through it from right to left, so backwards. Let's not think about why this was a good idea or why this is needed, but it is definitely possible to just set the direction flag to one and run over this buffer backwards. And what we found out is that the System V ABI actually says that this flag must be clear, or set to forward, on function entry and return, and that compilers expect this to be the case. So let's take a look at what happens when we do this in our enclave. When our trusted application performs this memset on our buffer, on normal entry with the normal direction flag, this just means that we walk this buffer from front to back. You can see here, it just runs correctly from front to back. But now, if the attacker enters the enclave with the direction flag set to one, so set to run backwards, this means that from the start of our buffer, from where the pointer points right now, you can see it actually runs backwards. And that's the problem. That's definitely something we don't want in our trusted applications, because, as you can imagine, it allows you to overwrite keys that are at the memory locations you reach when you go backwards, it allows you to read things out; that's definitely not something that is useful. When we reported this, it actually got a nice CVE assigned, with a base score of high, as you can see here on the next slide. And, well, you may think: OK, that's one instance, and you just have to think of all the flags to sanitize and all the flags to check. But wait, of course, there's always more. As we found out, there's also the floating point unit, which comes with a whole lot of other registers and a whole lot of other things to exploit. I will spare you all the details, but just for this presentation, know that there is an older x87 FPU and the newer SSE extensions that do vector floating point operations.
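Before going into the floating point details, this is roughly what the entry-side sanitization for the issues in this section looks like, sketched in C with GCC/Clang inline assembly. The constants are the documented x86 default values; real runtimes such as the Intel SGX SDK nowadays go further and restore the whole extended state with xrstor, so treat this as an illustration of the idea rather than a complete or sufficient fix.

```c
#include <stdint.h>

/* Reset attacker-controllable ABI state on enclave entry (sketch only). */
static inline void sanitize_abi_state(void)
{
    const uint32_t mxcsr_default = 0x1f80;  /* SSE: round-to-nearest, exceptions masked */
    const uint16_t fcw_default   = 0x037f;  /* x87: extended precision, exceptions masked */

    __asm__ volatile ("cld");                               /* direction flag forward */
    __asm__ volatile ("ldmxcsr %0" :: "m"(mxcsr_default));  /* SSE control/status */
    __asm__ volatile ("fninit");                            /* reset x87 state */
    __asm__ volatile ("fldcw %0"   :: "m"(fcw_default));    /* x87 control word */
}
```

The equivalent scrubbing has to happen on the way out as well, so that enclave register contents don't leak to the untrusted side.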
So there's the FPU control word, and the MXCSR register for these newer instructions. And the x87 FPU is older, but it's still used, for example, for extended precision, like long double variables. So old and new doesn't really apply here, because both are still relevant. And that's kind of the thing with x86 and x87: old, archaic things that you could say are outdated are still relevant, or still used, nowadays. And again, if you look at the System V ABI, we saw that these control bits are callee-saved, so they are preserved across function calls. The idea here, which to some degree holds merit, is that this is some global state that you can set, and it is carried along within one application: one application can set some global state and keep that state across all its usage. But the problem, as you can see, is that our application, our enclave, is basically one application, and we don't want the attacker to have control over the global state within our trusted application. So what happens if FPU settings are preserved across calls? Well, for a normal user, let's say we just do some calculation inside the enclave, like 2.1 times 3.4, which nicely calculates to a 7.14 long double. That's nice. But what happens if the attacker now enters the enclave with some corrupted precision and rounding modes for the FPU? Well, then we actually get a different result. We get distorted results, with lower precision and different rounding modes; it's actually rounding down here whenever the result exceeds the precision. And this is something we don't want. This is a case where the developer expects a certain precision, long double precision, but the attacker can just reduce it to a much lower precision. We reported this, and we actually found this issue in Microsoft Open Enclave as well; that's why it's marked as not exploitable here. But what we found interesting is that the Intel SGX SDK, which was vulnerable, patched this with an xrstor instruction, which completely restores the extended state to a known value, while Open Enclave only restored the specific register that was affected, with the ldmxcsr instruction. Let me just skip over the next few slides here, because I just want to give you the idea that this was not enough. We found out that even if you restore this specific register, there are still the x87 data registers, which the attacker can simply mark as in use before entering the enclave, and with that the attacker can make any floating point calculation result in a different value. And this is silent; it is not programming language specific, it is not developer specific. This is a silent ABI issue where the calculations just turn into NaN, not a number. So we also reported this, and now, thankfully, all enclave runtimes use the full xrstor instruction to fully restore this extended state. It took two CVEs, but now, luckily, they all perform this nice full restore. I don't want to go into the full details of our use cases, or case studies, now, so let me just give you the idea behind them. We looked at these issues and wanted to find out whether they are just an oddity or whether they are a real threat. And we found that we can use overflows as a side channel to deduce secrets. For example, the attacker can use this register to unmask exceptions that are then triggered inside the enclave by some input-dependent multiplication.
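To see the precision effect for yourself, outside an enclave, here is a small user-space C program for x86 with GCC/Clang inline assembly: lowering the x87 precision control silently changes the result of a long double multiplication. The exact digits depend on your platform; the point is only that the very same source line computes a different value under attacker-chosen FPU state.

```c
#include <stdio.h>

static void set_x87_cw(unsigned short cw)
{
    __asm__ volatile ("fldcw %0" :: "m"(cw));   /* load x87 control word */
}

int main(void)
{
    volatile long double a = 2.1L, b = 3.4L;    /* volatile: defeat constant folding */

    printf("default (extended precision): %.18Lf\n", a * b);

    set_x87_cw(0x007f);   /* precision control = single precision, round to nearest */
    printf("degraded (single precision):  %.18Lf\n", a * b);

    set_x87_cw(0x037f);   /* restore the default control word */
    return 0;
}
```

Inside an enclave the victim never executes the fldcw itself; the attacker sets the control word before entry and relies on the runtime not restoring it, which is exactly what the reported issues were about.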
And we found out that these side channels, if you have some input-dependent multiplication in the enclave, can actually be used to perform a binary search on this input space. And we can actually retrieve this multiplication secret in a deterministic number of steps. So even though we just flip a single mask bit, we can actually retrieve the secret in deterministic steps. And just so that you know there's more you can do: we can also do machine learning in the enclave. So if you outsource it nicely, you can run it inside the TEE, inside the cloud. And that's great for machine learning, right? So let's do handwritten digit recognition. And if you look at the model that we consider, we just have two users, where one user pushes some machine learning model and the other user pushes some input. And everything is protected with the enclave, right? So everything is secure. But we actually found out that we can poison these FPU registers and degrade the performance of this machine learning from all digits being predicted correctly down to just 8% of digits being predicted correctly. And actually, all digits were just predicted as the same number. And this basically made this machine learning model useless. There's more we did. So we can also attack Blender, causing slight differences between rendered images. But that's just for you to see that it's a small but tricky and intricate thing that can go wrong very fast on the ABI level once you play around with it. So this is about the CPU state. And now we will talk more about the application programming interface, which I think more of you will be comfortable with. Yeah, we take, thank you Fritz. We take quite a simple example. So let's assume that we actually load a standard Unix binary into such an enclave. And there are frameworks that can do that, such as Graphene. And what I want to illustrate with that example is that it's actually very important to check where a pointer comes from. Because the enclave kind of partitions the memory into untrusted memory and enclave memory, and they live in a shared address space. So the problem here is as follows. Let's assume we have an echo binary that just prints an input. And we give it a string as an argument. And that normally, when everything is fine, points to some string, let's say "hello world", which is located in the untrusted memory. So if everything runs as it should, this enclave will run, will get the pointer to untrusted memory, and will just print that string. But the problem is that the enclave also has access to its own trusted memory. So if you don't check this pointer and the attacker passes a pointer to a secret that might live in enclave memory, what will happen? Well, the enclave will fetch it from there and will just print it. So suddenly you have turned this into a memory disclosure vulnerability. And we can see that in action here for the framework named Graphene that I mentioned. So we have a very simple hello world binary, and we run it with a couple of command line arguments. And now, on the untrusted side, we actually change a memory address to point into enclave memory. And as you can see, normally it should print "test" here, but actually it prints a super secret enclave string that lived inside the memory space of the enclave. So these kinds of vulnerabilities are quite well known from user-to-kernel research and from other instances. And they're called confused deputies.
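The check a shielding runtime has to make before dereferencing such an untrusted pointer can be modeled in a few lines. This is a toy sketch in plain Python with hypothetical enclave addresses, not the actual SDK code (in C you would additionally have to guard the `ptr + length` addition against integer wrap-around):

```python
ENCLAVE_BASE = 0x7f0000000000   # hypothetical enclave load address
ENCLAVE_SIZE = 0x100000         # hypothetical enclave size

def is_outside_enclave(ptr: int, length: int) -> bool:
    """True only if [ptr, ptr+length) does not overlap enclave memory at all."""
    if length == 0:
        return True
    end = ptr + length
    return end <= ENCLAVE_BASE or ptr >= ENCLAVE_BASE + ENCLAVE_SIZE

# The echo example: a pointer into untrusted memory is fine to use,
# a pointer the attacker aims at an enclave secret must be rejected.
print(is_outside_enclave(0x10000000, 32))            # True  -> safe to dereference
print(is_outside_enclave(ENCLAVE_BASE + 0x40, 32))   # False -> confused-deputy attempt
```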
So the deputy kind of has the gun — it can read the enclave memory — and then does something which it is not supposed to do, because it didn't really check where the memory belongs. So I think this vulnerability seems quite trivial to solve. You simply check all the time where pointers come from. But as you will see, it's often not quite that easy. Yes, David, that's quite insightful. We should check all of the pointers. So that's what we did. We checked all of the pointer checks. And we noticed that Intel has a very interesting kind of algorithm to check these things. Of course, the Intel code is high quality. They checked all of the pointers. But you have to do something special for strings. And we're talking here about the C programming language. So strings are null terminated, they end with a null byte, and you can use the function strlen, string length, to compute the length of the string. And let's see how they check whether your string lies completely outside of enclave memory. So the first step is you compute the length of the untrusted string. And then you check whether the string from start to end lies completely outside of the enclave. That sounds legit, right? And otherwise you reject the string. So this works beautifully. Let's see, however, how it behaves when we pass an illegal string. So we are not going to pass the string "hello world" outside of the enclave, but we pass some string "secret one" that lies within the enclave. So the first step will be that the enclave starts computing the length of that string that lies within the enclave. And that sounds already fishy. But then luckily everything seems OK, because it will then detect that this actually should never have been done and that the string lies inside the enclave. So it will reject the ECALL, the call into the enclave. So that's fine. But some of you who know side channels know that this is exciting, right? Because the enclave did some computation it was never supposed to do. And the length of that computation depends on the amount of non-zero bytes within the enclave. So what we have here is a side channel where the enclave will always return false, but the time it takes to return false depends on the amount of zero bytes inside that secret enclave memory blob. So that's what we found. We were excited. And we said, OK, that's a simple timing channel, let's exploit it. So we did that. And you can see a graph here. And it turns out it's not as easy as it seems, right? So I can tell you that the blue one is for a string of length one and the red one is for a string of length two. But there is no way you can see that from that graph. Because x86 processors are so lightning fast that one single increment instruction completely dissolves into the pipeline. You will not see that by measuring execution time. So we need something different. And well, we are smart, so we read papers. And in the literature, one of the very common attacks on SGX is also something that Intel describes here: you can see which memory pages, 4K memory blocks, are being accessed while the enclave executes, because you control the operating system and the paging machinery in there. So that's what we tried to do. And we thought this is a nice side channel. And we were sitting there scratching our heads, looking at that code: a very simple for loop that fits entirely within one page, and a very short string that fits entirely within one page. So just having access to 4K memory regions is not going to help us here, right?
Because both the code and the data fit on a single page. So this is essentially what we call the temporal resolution of the side channel. This is not accurate enough. So we need another trick. And well, here we have been working on quite an exciting framework. It uses interrupts. And it's called SGX-Step. So it's a completely open source framework on GitHub. And what it allows you to do, essentially, is to execute an enclave one step at a time, hence the name. So it allows you to interleave the execution of the enclave with attacker code after every single instruction. And the way we pulled it off is highly technical. We have this Linux kernel driver and a little library operating system running in user space. But that's a bit out of scope. What matters is that we can interrupt an enclave after every single instruction. And then let's see what we can do with that. So what we essentially can do here is to execute that for loop, with all these x86 increment instructions, one at a time. And after every interrupt, we can simply check whether the enclave accessed the string residing at our target memory location. Another way to think about it is that we have that execution of the enclave, and we can break it up into individual steps and then just count the steps. And hence, we have a deterministic, noise-free timing channel. So in other words, we have an oracle that tells you where all the zero bytes are in the enclave. I don't know if that's useful, actually, Dave. So it turns out it is. I mean, some people who are into exploitation already know that it's good to know whether a zero is somewhere in memory or not. And we now do one example. We break AES-NI, which is the hardware acceleration in Intel processors for AES. So, finally, that actually operates only on registers, and Jo just said he can do that on pointers, on memory. There's another trick that comes into play here. So whenever the enclave is interrupted, it will store its current register state somewhere in memory, called the SSA frame. So we can actually interrupt the enclave, make it write its registers to SSA memory, and then we can run the zero-byte oracle on the SSA memory. And what we figure out is where a zero is, or if there's any zero in the AES state. So I don't want to go into the gory details of AES. But what we basically do is we find whenever there's a zero in the state before the last round of AES, and then that zero will go through the S-box, will be XORed with a key byte, and that will give us a ciphertext byte. But we actually know the ciphertext byte. So we can go backwards. We can compute from the zero up to here and from here to this XOR. And that way, we can compute directly one key byte. So we repeat that whole thing 16 times until we have found a zero in every byte position of the state before the last round. And that way, we get the whole final round key. And for those who know AES: if you have one round key, you have the whole key. So you get the original key, you can go backwards. So it sounds complicated, but it's actually a very fast attack when you see it running. So here is SGX-Step doing this attack. And as you can see, within a couple of seconds, and maybe 520 invocations of AES here, we get the full key. So that's actually quite impressive, especially because one of the points of AES-NI is that you don't put anything in memory, but there's this interaction with SGX which kind of allows you to put stuff into memory. So I want to wrap up here. We have found various other attacks.
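The key-recovery arithmetic just described boils down to a single XOR once the oracle reports a zero state byte: in the standard AES last round, each ciphertext byte is SBox[state byte] XOR key byte (ShiftRows only permutes which ciphertext byte pairs with which state byte, which is glossed over here). A minimal sketch in Python, with a made-up ciphertext byte:

```python
SBOX_0 = 0x63   # SBox[0], the first entry of the AES S-box

def last_round_key_byte(ciphertext_byte: int) -> int:
    # If the oracle says the state byte s was 0 before the last round:
    #   c = SBox[0] ^ k   =>   k = c ^ SBox[0]
    return ciphertext_byte ^ SBOX_0

c = 0x9d                      # hypothetical observed ciphertext byte
k = last_round_key_byte(c)
assert (SBOX_0 ^ k) == c      # sanity check of the relation above
print(hex(k))                 # 0xfe -- one byte of the final round key
```

Repeating this for all 16 byte positions yields the final round key, from which the original AES key can be computed by inverting the key schedule.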
So both in research code and in production code, such as the Intel SDK and the Microsoft SDK, and they basically cover the whole range of vulnerabilities that we have often seen already in user-to-kernel research. But there are also some interesting new kinds of vulnerabilities due to some of the aspects we explained. There was also a problem with OCALLs, that is, when the enclave calls into untrusted code, which is used when you want to, for instance, emulate system calls and so on. And if you return some kind of wrong result here, you could again go out of bounds. And these were actually quite widespread. And then finally, we also found some issues with padding, with leakage in the padding. I don't want to go into details. I think we have learned a lesson here that we also know from the real world, and that is that it's important to wash your hands. So it's also important to sanitize enclave state, to check pointers and so on. So that is kind of one of the takeaway messages, really: to build enclaves securely, yes, you need to fix all the hardware issues, but you also need to write safe code. And for enclaves, that means you have to do proper ABI and API sanitization. And that's quite a difficult task, actually, as we've seen, I think, in this presentation. There's quite a large attack surface due to the attack model, especially of Intel SGX, where you can interrupt after every instruction and so on. And I think from a research perspective, there's really a need for a more principled approach than just bug hunting. If you want, maybe we can learn something from the user-to-kernel analogy, which I invoked, I think, a couple of times. So we can learn what an enclave should do from what we know about what a kernel should do. But there are quite important differences too, which need to be taken into account. So, as we said, all our code is open source, so you can find it at the GitHub links below. And you can, of course, also ask questions after you have watched this talk. So thank you very much. Hello. So back again, here are the questions. Good to see you live. We have no questions yet, so you can put up questions in the IRC below if you have questions. And on the other hand, oh, let me close this up. So I'll ask you some questions. How did you come about this topic and how did you meet? Well, that's actually interesting. I think this research has been building up over the years. So I think some of the vulnerabilities from our initial paper I actually already started to see and collect in my master thesis. And we didn't really see the big picture until I met David and his colleagues from Birmingham at an event in London, the RISE conference. And then we started to collaborate on this and to look at it a bit more systematically. So I started with this whole list of vulnerabilities, and then with David we kind of made it into a more systematic analysis. And that was sort of a Pandora's box, I dare to say. From that moment on, these kinds of errors kept being repeated. And then also Fritz, who recently joined our team in Leuven, started working together with us on more of this low-level CPU state. And that's a Pandora's box in itself, I would say. Especially one of the lessons, as we say there, is that x86 is extremely complex, and it turns out that almost all of that complexity, I would say, can potentially be abused by adversaries. So it's more like a fractal in a fractal in a fractal.
You're opening a box and you're getting more and more questions out of it. In a way, I think, yes, it's fair to say this research is not the final answer to this, but it's rather an attempt to give a systematic way of looking at a possibly never-ending attacker–defender race. So there is a question from the internet: are there any other circumstances where AES-NI is writing its registers into memory, or is this exclusive to SGX? Should I repeat? I do not understand the question either. So I think the question is that this AES attack that David presented depends on, of course, having a memory disclosure attack to read the register content. And we pull that off using SGX-Step to forcibly write the register content into memory. So that is definitely SGX specific. However, I would say one of the lessons from, let's say, the past five years of SGX research is that often these things generalize beyond SGX. And at least the general concept — let's say the insight that CPU registers end up in memory one way or another, sooner or later — I think that also applies to operating systems. If you somehow can force an operating system to context switch between two applications, it will also have to dump the register content temporarily in memory. So if you had something similar to what we have here in an operating system kernel, you could potentially mount a similar attack. But maybe David wants to say something about operating systems there as well. No, not really. I think one thing that helps with SGX is that you have very precise control, as you explained, with the interrupts and stuff, because you run outside the enclave. So you can single-step, essentially, the whole enclave, whereas interrupting the operating system repeatedly at exactly the point you want, or some other process or so, tends to be probably harder just by design. But of course, on a context switch, the CPU has to save its register set somewhere, and then it will end up in memory in some situations, probably not as controlled as it is for SGX. So there is the question, what about other CPU architectures other than Intel? Did you test those? So maybe I can go into this. So, well, Intel SGX is, let's say, the largest one, with the largest software base and the most enclave shielding runtimes that we could look at. But there are, of course, some others. So for example, we have this internal TEE that we developed some years ago; it's called Sancus. And of course, for these, there are similar issues. So you always need the software layer to interact, to enter the enclave and to access the enclave. And I think you and David in earlier work also found issues in our TEE. So it's not just Intel and related projects that mess up there, of course. But what we definitely found is that it's easier to think of all edge cases for simpler designs like RISC-V, or simpler RISC designs in general, than for this complex x86 architecture. So right now, there are not that many besides Intel SGX. So they have the advantage and disadvantage of being the first widely deployed one, let's say. But I think as soon as others start to roll out and simpler designs start to become more common, we will see that it's easier to fix all edge cases for simpler designs. OK, so what is a reasonable alternative to TEEs? Or is there any? Do you want to take that, or should I? Well, we can probably both give our perspective. So I think, well, the question to start with, of course, is: do we need an alternative?
Or do we need to find more systematic ways to sanitize the software layers? That's, I think, one part of the answer here: we don't necessarily have to throw away TEEs because we have problems with them. We can also look at how to solve those problems. But apart from that, there is some exciting research that maybe David also wants to say a bit more about, for instance, on capabilities. That's in a way not necessarily an alternative to TEEs. But when you have hardware support for capabilities, like the CHERI project in Cambridge, which essentially associates metadata with a pointer, metadata like permission checks, then you could, at least for a subclass of the issues we talked about, the pointer poisoning attacks, natively mitigate those with hardware support. But it's a very high level idea. Maybe David wants to say something. Yeah, so I think an alternative to TEEs is, whenever you want to partition your system into parts, which is, I think, a good idea — and everybody is now doing that also in how we build online services and so on. So TEEs are one system that we have become quite used to from mobile phones, or maybe even from something like a banking card, which is also a protected environment for a very simple job. But the problem then starts when you throw a lot of functionality into the TEE. As we saw, the trusted code base becomes more and more complex and you get traditional bugs. So yeah, it's really the question whether you need an alternative or a better way of approaching how you partition software. And as you mentioned, there are some other things you can do architecturally, so you can change or extend the way we build architectures, for instance with these capabilities, and then start to isolate components, for instance within one software project — say, in your web server you isolate the TLS stack or something like this. And also thanks to the people noticing the secret password here. Obviously it's only for decoration purposes, to give people something to watch. So, but it's not fundamentally broken, is it? SGX? Yeah, no, SGX, TEEs — I mean, there are many of them. I think you cannot say it's fundamentally broken. But for SGX it has... The question I had was specifically for SGX at that point, because Signal uses it, MobileCoin, a cryptocurrency, uses it, and so on and so forth. Is that fundamentally broken, or would you rather say...? So I guess it depends what you call fundamental. So in the past we have also worked on what I would call full breaches of SGX. But they have been fixed. And it's actually quite a beautiful instance of where research can have short-term industry impact. So you find a vulnerability, then the vendor has to devise a fix that they often do not reveal, and they often only work around the problem. And then later — because we are talking, of course, about hardware roots of trust — you need new processors to really get a fundamental fix for the problem. And until then you have temporary workarounds. So I would say, for instance, for a company like Signal using SGX: it does not give you security by default. You need to think about the software — that's what we focused on in this talk — and you also need to think about all of the hardware microcode patches and/or new processors to take care of all the known vulnerabilities. And of course the question always remains: are there vulnerabilities that we don't know about yet? But that's the case with any secure system, I guess.
But maybe David also wants to say something about some of his latest work there. That's a bit interesting. Yeah, so I think what you said — my answer to this question would be: it depends on your threat model, really. So some people use SGX as a way to remove the trust in the cloud providers. So you say, as in Signal or so, I move all this functionality that is hosted maybe on some cloud provider into an SGX enclave, and then I don't have to trust the cloud provider anymore, because SGX also has some form of protection against physical access. But recently we actually published another attack which shows that if you have hardware access to an SGX processor, you can inject faults into the processor by playing with the undervolting interface. So you really solder a couple of wires onto the main board, onto the bus to the voltage regulator, and then you can do voltage glitching, as some people might know from other embedded contexts. And that way you can flip bits essentially in the enclave and, of course, inject all kinds of evil effects that can then be used further to get keys out or maybe hijack control flow or something. So it depends on your threat model. I still wouldn't say that SGX is completely pointless. It's, I think, better than not having it at all, but you definitely cannot have complete protection against somebody who has physical access to your server. So I have to close this talk. It's a bummer. I would love to ask all the questions that are flowing in here. But one very, very fast answer please: what is that with the password in your background? I explained it. It's of course just a joke. So I'll say it again, because some people seem to have taken it seriously. It was such an empty whiteboard, so I put a password there. Unfortunately, it's not fully visible in the stream. Okay. So I thank you, Jo Van Bulck, Fritz Alder, David Oswald. Thank you for having that nice talk. And now we make the transition to the Herald News Show. Thank you.
|
This talk presents an extensive security analysis of trusted-execution-environment shielding runtimes, covering over two years of continuing research and leading to 7 CVE designations in industry-grade Intel SGX enclave SDKs. For the first time, we develop a systematic way of reasoning about enclave shielding responsibilities, categorized across 11 distinct classes spanning the ABI and API tiers. Our analysis revealed over 40 new interface sanitization vulnerabilities, and we developed innovative techniques to aid practical exploitation through, among others, CPU register poisoning, timer-based single-stepping, rogue CPU exception handlers, and side-channel-based cryptanalysis. We finally analyze tendencies across the landscape and find that developers continue to make the same mistakes, calling for improved vulnerability detection and mitigation techniques. This talk overviews the security and state of practice of today's Trusted Execution Environment (TEE) shielding runtimes from both industry and research. Our systematic analysis uncovered over 40 re-occurring enclave interface sanitization vulnerabilities in 8 major open-source shielding frameworks for Intel SGX, RISC-V, and Sancus TEEs. The resulting vulnerability landscape enables attackers to poison victim programs both through low-level CPU state, including previously overlooked attack vectors through the x86 status flags and floating-point co-processor, as well as through higher-level programming constructs such as untrusted pointer arguments passed into the shared address space. We develop new and improved techniques to practically exploit these vulnerabilities in several attack scenarios that leak full cryptographic keys from the enclave or enable arbitrary remote code reuse. Following extended responsible disclosure embargoes, our findings were assigned 7 designated CVE records and led to numerous security patches in the vulnerable open-source projects, including the Intel SGX-SDK, Microsoft's Open Enclave, Google's Asylo, and the Rust compiler. Our findings highlight that emerging TEE technologies, such as Intel SGX, are _not_ a silver-bullet solution and continue to be misunderstood in both industry and academia. While promising, we explain that TEEs require extra scrutiny from the enclave developer, and we set out to identify common pitfalls and constructive recommendations for best practices for enclave interface sanitization. Throughout the talk, we overview shielding responsibilities and argue that proper enclave hygiene will be instrumental to the success of the emerging Intel SGX ecosystem. Additionally, we point out that several subtle properties of the Intel x86 complex instruction set considerably increase the attack surface for enclave attackers and require the end developer to be aware of their respective shielding runtime or to apply additional sanitizations at the application level itself.
|
10.5446/52059 (DOI)
|
You have the ambitious goal to make your own wafer? Well, you might be in the right talk now. In the following lecture, our presenter, Nudl, will explain how to perform electron beam lithography as a do-it-yourself project. Let me switch over to Nudl, who really goes down to the atomic level. Hello, kids. So hello. Hi there. Hi. Our talk is about how to make your own crappy PMMA-based resist for EBL. So it's not that it already works perfectly, but you know, it's kind of a start. Our primary interest is in on-chip photonics, if you will. So by that, we mean integrating photonic components like waveguides or gratings on a chip-scale level. And we're also interested in making our own MEMS devices. So yeah, I guess that sums up what we're trying to do. Of course, you could also use EBL and masks generated by it for semiconductor stuff, but that's kind of out of our scope, to be honest. But also very fascinating stuff, actually. So we want to do EBL at home. Besides an SEM, which is actually relatively cheap — between 400 and 1500 euros if you lurk long enough on eBay — you need the chemistry. And the thing is, the chemistry the pros are using for lithography in general is price-wise pretty hefty. If you go to those go-to vendors, it easily adds up to much more than the price for a used shitty old SEM itself. So what to do? Right, cook your own soup. And maybe it's good enough for your and our purposes. So what is EBL? EBL stands for electron beam lithography. In the end, it's simply using the rig we all know from scanning electron microscopy for lithography. The scanning beam changes the chemical properties, like the solubility, of a resist. And that's the way you can draw figures into the resist with the beam. What sucks? It's super slow compared with other lithography principles. The electrons charge the resist and may interfere with the substrate, or whatever you want to EBL, in a bad or maybe destructive way. To prevent this, you could try to coat your stuff with a thin, hopefully non-disturbing conductive layer. But sometimes that's not possible, because it could undermine what you're actually trying to accomplish. It's also necessary that your workpiece is somehow vacuum compatible. If not, maybe it's not the right process for it. On the pro side, you have the fact that it needs no mask. Of course, there are other maskless processes like using DLP/DMD-based projectors or laser writing — awesome stuff, to be honest. The other thing is that in principle, you can achieve pretty high resolutions with EBL. Because you're using electrons, you can neglect the de Broglie wavelength completely at above-100-nanometer-ish resolutions. There is a wide variety of resists you can use for EBL. They have different fields of application. We have tried nitrocellulose (NC), but it didn't work, so we switched to one of the most common EBL resists, PMMA. PMMA comes in chains of different lengths or molecular weights. You can cut or cross-link them. As far as we know, PMMA is the best documented resist for EBL. It's easily available and cheap. You can use many different solvents. Some of them aren't really toxic. If you leave modern resists like ZEP aside, PMMA can achieve the highest resolutions and aspect ratios. We use PMMA as a positive resist. That means that the electrons induce a chain scission process, which cuts the PMMA chains into smaller parts, which makes them more easily soluble in the developer. The positive process itself is relatively easy.
You apply the resist via spin coating. Then you have to pre-bake it to remove the remaining solvent from the resist layer. After that, you put it in your EBL system and expose it. Then you remove the exposed resist parts with the developer. We are using MIBK for that. After that, you can do the next process step, like physical vapor deposition or electroplating, to get the final structures you want. The developer composition we are using is IPA and MIBK, 3:1. There are many applications of structured resists. You can use it as a mask, as already explained. But you can also use it as a structural component for chemistry-on-chip applications or in MEMS, or as a structural dielectric for high-frequency and optical components. As the EBL system, we are using an old SEM. We retrofitted it using a Red Pitaya as the signal generator. To design your masks, or geometry in general, you can use every CAD toolchain you want. If you use hardware like the Red Pitaya, you have access to a wide variety of tools and scripts doing the boring stuff like file parsing for you. We should also mention the most common mask file formats. First there's GDSII and the more modern OASIS. Then there is an idea we had during the last days that it could be worth it to try to define yet another file format for that task, simply because those two that we mentioned are both lacking many modern features which you know from modern CAD formats. There are multiple strategies to scan or expose only the parts of the resist you want to be exposed. The most common ones are line-style scanning with a fast beam on the dark parts and a slow beam on the bright parts, and scanning fast between the polygons and slow inside the polygons you want to expose. We are using Red Pitayas because they're readily available and do the job. Using simple Jupyter notebooks makes it easy to script your experiments. Electron optics is pretty similar to normal optics from the user perspective, so you also have distortions and such effects. But you can transform your exposure scan to compensate for that. For finding the so-called camera intrinsics, and also for fiducial detection if you are in need of multiple exposure passes, we recommend OpenCV. Most algorithms you need are readily available. Another annoying effect comes from the fact that the beam itself is not infinitely sharp, and from the scattering of the electrons in the resist and back-scattering from the substrate. That's really the resolution killer if you go for high resolutions. There are different ways to counteract that: one is to increase the electron energy to make the entry region tighter. An additional way is to correct the effect via deconvolution. To generate the necessary deconvolution kernels, you can measure the effect or simulate it. PECBL is one open source tool for doing that. And that's one form of PMMA we used. You can easily buy it on Amazon. Normally it's used to fix fingernails, I guess. DCM is one very popular solvent, but it's also not very healthy. Xylol also works and it's not that bad. Toluol is pretty good but also relatively toxic. Anisole could be the solvent of choice. It's not toxic relative to the other stuff and it works well with PMMA, but you have to heat it up to 70 to 90 degrees Celsius. That's our improvised spin coater. It's a Dremel with an SEM stub holding the substrate piece. After spin coating you have to pre-bake it at approximately 100 degrees Celsius for a couple of minutes to remove the solvent. Okay, that's the test pattern.
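As a rough illustration of the "slow inside the polygon, fast outside" scan idea mentioned above, here is a sketch — numpy, with made-up dwell times and field size, not our actual tooling — of turning a binary mask into a per-pixel dwell-time table that a Red Pitaya-style signal generator could step through while the beam raster-scans the field:

```python
import numpy as np

mask = np.zeros((512, 512), dtype=bool)   # True = pixel belongs to a polygon to expose
mask[100:200, 150:400] = True             # e.g. one rectangle of a test pattern

DWELL_EXPOSE_US = 5.0   # made-up dwell time per exposed pixel, microseconds
DWELL_SKIP_US   = 0.1   # made-up dwell time while sweeping over dark areas

dwell = np.where(mask, DWELL_EXPOSE_US, DWELL_SKIP_US)

# Serpentine scan: reverse every other row so the beam never has to fly back.
dwell[1::2, :] = dwell[1::2, ::-1].copy()

print("estimated exposure time: %.3f s" % (dwell.sum() * 1e-6))
```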
And our first result — not that good. The reference square is 100 micrometers. Another try; the distance between the scale lines is 10 micrometers. One example of accidental crosslinking: most of the resist was removed, but some parts with high dosage are of decreased solubility. That's a better example. Another example of accidental crosslinking we made last night; both squares are 10 micrometers. And thanks to SleepyOwlJoyce for providing us with the wafers. So thank you. Thank you. Any questions? Thanks for the talk. So let's have a look. Do we have questions? No. We don't have any questions here at the moment. Guys, if you want to ask something, if you want to ask Nudl, go ahead, use our communication channels: Twitter, Mastodon and IRC. Otherwise, let me quickly ask you, Nudl, what about the resolution range you can get with this technique, with this do-it-yourself technique? Your question is how far we got, right? So yeah, in the sub-micrometer scale. But it's still work in progress. So yeah, I hope we figure out how to achieve 100 nanometers. So somebody's interested whether the slides of this talk are going to be published? Yeah, I can publish them. Okay. How can they be found? I'll put them on GitHub and link them on our Twitter account. That sounds good. The Twitter account, yeah, is available. So we got it. Thanks. So far, no more questions. Yep. Let's see if something drops in. Is there anything you want to add to this talk here at this moment? No. Not much. Okay. Yeah. So yeah, there's one thing. You mentioned one substance which was not as healthy as it should be or could be. So what's the toxicity of this one component you mentioned? Which one? Toluol or DCM? Both are, yeah — they can give you cancer if you're not careful. Oh, okay. Yeah. So gloves and everything. Okay. Well, yeah. And I would recommend a fume hood. Yeah. Yeah. Yeah. Well, in the kitchen it's possible, but you don't want to mess around with those chemicals in your kitchen, I'd say. Yeah, that's true. Okay. More questions. That's good. So we have some questions coming up. What structures do you actually want to achieve by the end of this project? Is there anything you have in mind, some dedicated goal you want to go for and say, okay, we got it now with this type of process? Optical gratings for 400 nanometer laser light, certain application waveguides and gratings. Oh, okay. That's cool. So we're going to implement kind of external, small external cavities for diode lasers. Yeah. Okay. Okay. Are those lasers also do-it-yourself type of lasers, or do you buy them, or do you have a lab accessible for that type of work? I buy the diode lasers. So normal laser diodes. Okay. Yeah. Fabry-Perot laser diodes. I want to modify them. Yeah. Sure. What's the output power of those laser diodes? In the mode I want to use them in, yeah, 10 milliwatts — not very high power. Well, I think you can already destroy your eye with that. Can you? Maybe. Yeah. I'm not sure. Well, we don't want to experiment with that. We don't want to recommend anything like that. Well, folks, you're out there. You know it anyway. Next question is, what does the wafer cost IRL and how much is an electron beamer? So our electron beam device is a very old electron microscope, a scanning electron microscope. And those are around 500 to 1500 euros on eBay. On eBay? Really? Okay. Yeah. They are pretty old ones. Sure. And yeah, you only have to attach a signal generator like a Red Pitaya or something like that. And you have a crappy EBL brick. Yeah. So how much power does it suck?
So what's the...? Including the pumps, it's two kilowatts, one kilowatt. So that doesn't... I haven't measured it, so I'm not sure. But nothing special needed, so you can actually... Yeah, exactly. More or less. Cool. Well, have you actually tried optical UV lithography before using EBL? No. Why? Just because you're interested in electron beamers? Yeah, that's one thing. Okay. The other thing is that I'm not sure how to achieve the resolution I want to have. And I guess all in all an EBL system is a simpler system. Okay. Micrometer resolutions. I see. All right. Well, there's another question here. It's about frequency or so. How fast can the IC switch? How fast can you actually clock it? What? Well, yeah, somebody's asking how fast you could actually switch — I mean, if you're producing ICs with that technique, how fast can you...? So we don't want to produce semiconductor devices like processors or something like transistors. It's more about MEMS, micromechanical devices, and optical devices. Okay. That's our focus. I see. I see. Yeah. Next question, of course: where can you buy the chemicals? Do you need some special license for that? For example, a chemicals license or so? I mean, some poison-handling license, at least? In Germany, they are available. So you can order them at some online shops like S3 Chemicals, if someone is interested. So normally, if you are a larger research facility or something like that, you are able to order them from Sigma-Aldrich. But they don't talk to mortals. I see. Okay. Now the questions are flowing in. That's pretty nice, actually. Do you package the IC somehow? Not yet. So it's not an IC in the common sense. Yeah. Currently, I'm figuring out how to mix a proper resist. So I'm in the very early stage of doing it. So no IC packaging necessary currently. Okay. Yeah. How long have you actually been working on that project, when we're talking about the timeline now? Yeah, a couple of months. So yeah. Well, it's pretty quick success then. Another question here: how do you bond those parts, actually — what material do you use to pack them up? Something like epoxy? So at the current stage, there is no bonding. But you can use epoxy glue to glue the chip, or whatever, onto the holder. Yeah. Well, there is another question, more like a motivational question. Could you perhaps name some exemplary projects that can be tinkered together which wouldn't be possible without the type of approach you are pursuing here? Yeah, compact optical devices. So I guess it's the only way to make that. Okay. It goes back to your answer before. That's my motivation. So I know that there's different other stuff you can do with it. I'm not into it. Yeah. So you're more into optics? Okay. There is another question that came in. Would it be possible to do those chips with gates and everything, or do you need other, more exotic stuff which isn't available? If you want to build semiconductor chips with transistors, you need diffusion or ion implantation and such stuff. That's much more complicated than only the lithography for the masks. Okay. And well, just my imagination: where did you guys actually set all that up — in your cellar, in the garage, somewhere, in rented space or a hackspace? Where are you actually doing that? I mean, the machine is... Our laboratory is more or less in the basement, if you want. Okay. Is it somehow... Well, how shall I say? Is it... Can it be influenced by motion? Not by motion. By... Disturbed, basically, by vibrations. Yeah. Yes.
If you go for really high resolutions — and we are far away from that kind of super high resolution — you have problems with oscillations induced by some traffic or something like that. Okay. Not traffic. Yeah, exactly. If something outside shakes a little bit. That would be a problem if we want to crack some hypothetical boundary like 10 or so nanometers, I guess; that's not easily possible in a normal room. So... Yeah. In a normal house without... Yeah. Sure. Without proper isolation. So the theoretical limit with this technique would be achievable — 10 nanometers? That's pretty small. Yeah, theoretically it's possible. Okay. With EBL in general. No. Not with our rig, but with a professional one you can get down to five nanometers or less. So... Yeah. I don't know what the actual limit currently is. So... Yeah. Another question popped up. So do you further treat the finished wafer after etching? So something you do there? My plan is to electroplate some structures on it. Electrochemical plating and physical vapor deposition — those are the two things I want to do. All right. In the first place. And maybe a little bit of anisotropic and isotropic etching of the substrate. So, but first I want to have a reliable and repeatable resist working. Yeah. And somebody is interested whether you could actually dope the material somehow to get FETs. Okay. Yeah. That's the semiconductor stuff. So with diffusion or ion implanting, that would be possible, but I don't own the tools for that. So I don't know. I'm more of an optics guy. Okay. Yeah. Back again to the optics. I completely understand. Yeah. All right. Let me see if there's anything popping up question-wise. We had a lot already. I think there was very much feedback at the moment. I don't see any new questions popping up. I thank you again. I apologize for the mega delay we had. No worries, never mind the glitches. Yeah. But we made it. I'm happy about that. Thanks again. And let me quickly announce that in a couple of seconds we're going to switch over to the Herald News Show over there. Oh, okay. Yes. Okay.
|
Photoresists are one of the essential ingredients for chip manufacturing and micro/nano engineering. We will show how we're using them in a DIY Electron Beam Lithography set-up and how you're able to cook your own cheap resists and mix your own developers.

- Resists? What's that?
  - What are their applications
  - How does it work (types (positive/negative), chemistry, proximity effect, dosage etc)
- EBL? What's that?
  - How does it work
  - Pros & cons: comparison between EBL (slow) and photolithography (fast)
  - Which resist can I use for EBL
- DIY cooking of PMMA based resists
  - Comparison of different solvents
  - Composition of different developers
  - Comparison of different developers
- Applications
  - Usage as a mask
  - Usage as a structural dielectric material
  - ...(?)
- The EBL exposure process
  - simple SEM retrofitted with an EBL controller
  - The common file formats (GDSII & OASIS)
  - Scan-Gen: how to generate the proper curves for the exposure
  - Hardware: off-the-shelf embedded modules like the RedPitaya
  - Generation of different filling curve styles, calibration, compensation of the proximity effect, correction of the SEM's intrinsic parameters, dosage!
  - Fiducial detection and alignment for multi-pass/multi-layer processes
- Stuff comes together: walk through the complete process along a simple example
- More Examples
- Thanks & Credits
- Q & A
|
10.5446/52061 (DOI)
|
Hello everyone and welcome to my talk, Fuzzing the Phone in the iPhone. The phone in the iPhone is the component that receives SMS, sends SMS, receives phone calls, makes phone calls, and also manages your internet connection when you are not on Wi-Fi. However, you might now wonder what it is exactly. So I'm talking about CommCenter and fuzzing it via the QMI and ARI interfaces, but this is a bit too technical for most of you. So I will first introduce you to the concept of fuzzing in general, and protocol fuzzing, before I dive into further details. For those of you who have not yet heard about the concept of fuzzing: you send a lot of random messages and then try to test the security of an interface with this. And in this video, you can see how I send SMS with a Frida-based fuzzer at something like 400 fuzz cases per second. The iPhone receives them, catches them, and a couple of them also show up on the smartphone. Let's dive into the motivation and an explanation of the attacker model. So if you look into a modern smartphone, you have two components, if you want to show it in a simple way: a board with all the RF and other hardware, with a lot of chips, and then, on top of this, an operating system and applications. However, it's not as simple as this, because even those chips are so complex that they run their own little real-time operating systems to preprocess data. And this means that you can even get code execution on such a chip. And this is usually much easier than in the operating system itself, because those chips cannot have that many mitigations. However, what do you even do if you have code execution in such a chip? So if you are in a baseband chip, then one escalation strategy from the chip towards the operating system might be to manipulate traffic for the browser. However, I don't think that this is the case, because if you look at the Zerodium price list, the browser exploits are actually much more expensive. So it's probably not done like this, and there must be other ways to escalate from this chip into the operating system. In general, traffic manipulation is something that you can always do in wireless transmission or also on the internet. So if you look at how those systems work these days, you have something like the internet in general that serves websites and so on, and also the core network of your mobile provider. And there are many, many ways to manipulate traffic, either if you are a state-level actor who has something in the core network, or just by serving websites or modifying websites. And then there is the base station subsystem. There might also be dragons, we don't know exactly. And of course, there are over-the-air transmissions. And wireless transmissions are very special, because if there is something just slightly broken in the encryption, for example, then it's also possible to manipulate traffic there if you have a software-defined radio. So all of this could be attacked to manipulate traffic. And I don't think that for this one would craft a baseband exploit. Already in 2014 at the CCC, there have been two talks about the SS7 protocol, which is run in the core network and is actually meant to connect different mobile carriers to each other. And this can also be used to intercept phone calls, for example. And this has also been exploited recently. So even though there have been some mitigations, et cetera, since then, it's still exploited for the same purpose: to spy on people.
So really, really, really, baseband exploits only exist to escalate from the chip into the operating system. But now the question is, what are the strategies? So if it's not via the browser, what else could it be? So the browser, really, I'm sure it is not, because you also need to have some traffic and so on; it doesn't really work instantly — you need to visit a website, replace traffic on the website, and so on. So yeah, there must be something else. So if you are on the chip with remote code execution and want to go into the operating system, there is some interface. And this means that something in those interfaces needs to be exploitable so that you can escalate the privileges from the chip into the system. And also, those interfaces are very interesting from a reverse engineer's perspective. So even if you don't want to attack anything, just understanding how they work is also a goal of this work. So for example, if you have a baseband debug profile, you can just download this onto your iPhone and then open your iDevice syslog. You can already see a lot of management messages that are exchanged between the chip and the iPhone. And if you have a jailbreak and Frida, you can even inject packets or modify packets to change the behavior of your modem. But if you want to start to work on such a thing, the question is: how do we even start? Where do you start? Well, fuzzing is actually a method that can be used to understand such an interface. So initially, if you identified an interface, just to check if it is the correct interface — so whether it really changes behavior if you flip some bytes — but also how powerful this interface is, what the features are, and what breaks instantly. And if things break, you can also check whether the whole interface has been designed with security in mind. Now let's start with an introduction to wireless protocol fuzzing. This will also be a short rant, because the current tooling for fuzzing is usually not made to fuzz a protocol. So let's start with a very simple fuzzing target, one that is just an image parser. So you browse your smartphone for unicorn pictures, or PNGs or JPEGs, and then you send them to the image parser. And in the image parser, you might be able to observe which functions are executed, in the form of basic blocks. And with this instrumentation, the image parser can even report which basic blocks were executed, and you can just start the image parser again and again with different images and get this basic block coverage back. In the next step, you can then combine existing images or flip bits in these images and send them to the image parser, and again observe the coverage. Most of the time, it won't generate any new coverage, so you just say you are not looking into this image in particular. But sometimes you might get new coverage, like here, and then you add this image to your corpus. So over time, you will increase your corpus and increase your coverage. Another method can be used if you know exactly how an image format looks. So you might know the JPEG specification, and because of this, you could just generate images that are more or less specification compliant — they look more artificial, like this. So you just generate images and send them to the image parser. And at some point, you might observe a crash. So that again depends on your harnessing: maybe you can observe basic blocks, maybe you can just observe crashes. And then you know at which image you had a crash.
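The mutation-plus-coverage loop just described fits in a few lines. Here is a bare-bones sketch in Python; run_target() is a placeholder for whatever harness reports basic-block coverage (for example a Frida hook), not a real API:

```python
import random

def mutate(data: bytes) -> bytes:
    """Flip a single random bit in a copy of the input."""
    buf = bytearray(data)
    if buf:
        buf[random.randrange(len(buf))] ^= 1 << random.randrange(8)
    return bytes(buf)

def fuzz(corpus, run_target, iterations=100_000):
    """corpus: list of seed inputs; run_target(input) -> set of covered basic blocks."""
    seen = set()
    for sample in corpus:                       # baseline coverage of the seed corpus
        seen |= run_target(sample)
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        new_blocks = run_target(candidate) - seen
        if new_blocks:                          # new basic blocks -> keep this input
            corpus.append(candidate)
            seen |= new_blocks
    return corpus
```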
You might even be able to combine these two approaches, depending on what you know about your input and how you can harness your target. Now it looks a bit different for a protocol. So in a protocol, you can have a very complex state. Let's say you are in an active phone call, or just something like receiving an SMS. You can actually force the iPhone to receive SMS if you have a second iPhone and send SMS. And then, during the fuzzing, you can replace some bits and bytes like this, and then you have a modification. So this is a very simple approach, and it preserves the state. So no matter how complex the thing is that you're currently doing, it's very simple to flip a bit here and there in an active interaction. But it's also a bit annoying, because you need to have these active phone calls, et cetera. So something that's more efficient is injection. So you would observe certain messages and then just send them again. And then you don't even need this second phone to make calls, et cetera. You can just send a lot, a lot, a lot of data. And this is the effect when your iPhone keeps going "ding" or something because of all the notifications and all the data that is sent. But the issue here is that this does not preserve state. So there might be actions where the iPhone requests something that is then answered. So the iPhone might request, for example, a date, and only then would the chip reply with a date, and only then would the iPhone accept the date. But it's still very interesting to do this, because even though you cannot reach certain states, you can do this without a SIM card and you can do this very, very fast. So just to summarize the issues here: if you fuzz a wireless protocol, you can have very significant state differences, and just injecting packets cannot reach all states. The fact that you cannot reach all states also shows in very simple stuff like a trace replay. So a trace is something that you record. Let's say I have an active phone call; I record all the packets, and I can also observe the coverage — with Frida, you can observe coverage on an iPhone while the phone call is active. And then, in a second step, you would do some injection. But the only thing that you can inject are the packets sent from the baseband to the smartphone, not the opposite direction. And this usually results in much less coverage. So you are missing a lot of things due to missing state. And even worse, if you do the same thing again, you might be in a different state and you might observe different coverage. So you do the exact same thing, but you get different coverage. So even replaying recorded messages results in less or inconsistent coverage. Anyway, let's take a look at an injection example. So in this video, you can see how I'm in the "Unicorn" network on an iPhone 8, which obviously has 5G, but also does a lot of fuzzing. And in the fuzzing, what is interesting is that you might get a lot of states in combinations that are not usually possible, like you have a lost network connection while you have to confirm a PIN, or you have a network connection during this, et cetera. So to summarize my rant: some states cannot be reached solely by injecting packets. So even if we have a very good corpus and do very good mutations, we might just miss 80% of the code, but we can just fuzz anyway. We just need to keep in mind that some stuff is not fuzzable this way.
We looked into a lot of wireless protocols at SEEMOO in the past, so it's worth considering which tooling we already had available for fuzzing protocols. The most advanced tooling that we have is Frankenstein, and it was built by Jan. So what Jan did is he emulated a firmware and attached it to a virtual modem and also a Linux host. For this, he first looked into the firmware — we had some partial symbols for it and also some information about registers. Then Frankenstein takes a snapshot, including some of those registers of the modem. And with this, you can build a virtual modem and fuzz input as if it would come over the air. Then Frankenstein also emulates the whole firmware, including thread switches, so it gets into very complex states, and it's even attached to a Linux host. So it also fuzzes a bit of Linux while actually fuzzing the firmware itself. Now the issue with this is that baseband firmware is usually ten times the size of Bluetooth firmware, or even more, and we don't have any symbols for it. So it's a lot of work to customize this. And even if one would do all those steps and put all the work into this, it's only, so to say, code execution in the baseband. It's not yet a privilege escalation into the operating system. The next interesting tooling was built by Stefan, and what Stefan did is he built a fuzzer based on DTrace and AFL. DTrace is a tool that can provide function-level coverage in the macOS kernel and user space. With some modifications, you can even get basic block coverage in the user space, which is required for AFL to work. So in the end, you have AFL or AFL++ as a fuzzer on any program on macOS. It's even slightly faster than Frida, at least the version that he used. And he gets a couple of thousand fuzz cases per second, even on a very old iMac. So yeah, in our lab we just had an old iMac from 2012 for this, and it works on that. But the issue is that Wi-Fi and Bluetooth, which he fuzzed, are very complex protocols, so he couldn't find any new bugs with AFL. And also, in the kernel space, you only get this function-level coverage. Still, despite not finding any bugs in Wi-Fi or Bluetooth, he got a CVE, because DTrace also has bugs. So at least some finding. But on iOS, this is not supported out of the box. So it might be possible to get DTrace working with some tweaks, but it's a lot of work. So probably it's easier to just use Frida in the iOS user space. Also during this — while Stefan was building all this very advanced tooling — Wang Yu found issues in the macOS Bluetooth and Wi-Fi drivers. So he was very, very successful in comparison to us. That's really a pity. And I think what he did is much better state modeling of how the messages interact and what is important to reach certain functions. So what is still left? So usually, fuzzing the baseband means that you need to modify firmware or also emulate firmware. You need to implement very complex specifications on a software-defined radio if you want to fuzz over the air or build proofs of concept. And for everything that's somewhat proprietary, you need to do protocol reverse engineering. So you can spend a lot of time and money just to do very, very basic research. Or, well, you can also use Frida. So you can fuzz with Frida. And all you need to do for this is write a few lines of code in JavaScript. So I kid you not, the option is Frida.
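To give an idea of how few lines that is, here is a hedged sketch of driving Frida from Python: it attaches to CommCenter over USB and flips one byte in a hooked parsing function. The symbol name and argument layout below are placeholders — the real QMI/ARI handlers have to be reverse engineered first:

```python
import frida

JS = """
// 'qmi_message_decode' is a hypothetical symbol, not a documented export.
var handler = Module.findExportByName(null, 'qmi_message_decode');
if (handler !== null) {
    Interceptor.attach(handler, {
        onEnter: function (args) {
            // Assumed layout: args[0] = buffer pointer, args[1] = length.
            var len = args[1].toInt32();
            if (len > 0) {
                var off = Math.floor(Math.random() * len);
                var p = args[0].add(off);
                p.writeU8(p.readU8() ^ 0xff);   // flip one byte before parsing
            }
        }
    });
}
"""

session = frida.get_usb_device().attach("CommCenter")  # needs a jailbroken device
script = session.create_script(JS)
script.load()
input("fuzzing... press enter to stop\n")
```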
Dennis was the first in our team who has advised us as a thesis student who built a Frida-based fuzzer and it's called Toothpicker. It's based on Frida and Radamsa. So what it does is, well, it hooks into these connections or into the protocols of the Bluetooth daemon, you could also think of this upper part here as a one block. So the protocols are implemented in the Bluetooth daemon, but we want to fuzz certain protocol handlers. And to increase the coverage, he creates a virtual connection. So virtual connection holds a connection and pretends to the Bluetooth daemon that there would be an active connection to a device. And of course, the chip would then say, I don't know anything about this connection. So there are also some abstractions in here so that the connection is not terminated. So that's a very simple tool, but it really found a lot of bugs and issues. And even there were some issues in the protocols themselves that also apply to macOS. So it's not just iOS bugs, but also protocol bugs in macOS that Dennis found. And this really got me thinking because two picker runs with only 20 fuzz cases per second. So it's really, really slow. And we were still able to find Bluetooth vulnerabilities at this speed. So why is this? So first of all, if you try to fuzz Bluetooth over the air, then the over the air connections are terminated after something like five invalid packets. So over the air fuzzing is really, really inefficient and with Frida you can actually patch this function so it's gone. Then the virtual connections are very important factor. So they are really, really important for having coverage. It's still a lot of coverage that we miss during your play and fuzzing. But yeah, so it's really an advantage compared to the other fuzzing approaches where you just inject packets. And in addition, there is an issue here because if you have a virtual connection, it might be that this virtual connection triggers behavior that you cannot reproduce over the air. So that means that everything that you find, you need also to confirm that it works over the air. At least inconsistent coverage is also fixed in Toothpicker because Toothpicker replace all packets five times in a row. But the issue here is that it also means that if you have a sequence of packets that is like generating a certain bug, so you need multiple packets, this is nothing that the mutator is aware of and also nothing that's locked properly in Toothpicker. And because of this, I got a bit anxious like, so yeah, maybe we missed a lot of things. So once I got the intuition that we are actually missing certain state information, I had the idea to replace bytes in active connections. And this is one project that you can see on a keyboard. So I'm just replacing bytes on keyboard input and see what happens. And I let this run for a couple of weeks, also for different protocols and so on to see if there are further bugs or not that we didn't find previously. So here you can see the same for airports with SCO and then they produce some crack sounds for the replaced bytes. It's even worse for ACL, so actual music because then you can hear very noisy chirps. I let this further run for multiple weeks and it didn't find any bugs that Toothpicker hadn't discovered before. So I think the reason for this is that I mainly passed in active connections like the one with the audio or the keyboard, but I only passed a few active pairings because this requires me to actually perform those pairings by hand. So nothing really interesting. 
The only bad thing that I could produce with this but not worth a CVE is that the sound quality of my airports is now really, really bad. Well, okay, and also the Broadcom chips on iOS don't check the UI length, but that's not that bad. So I mean, if you consider that they removed the right RAM recently, then you might now still be able to write into the RAM via UI, buffer overflows, but yeah, nothing too interesting. So after all of this, I asked myself what is still left for fuzzing if we cannot find a new Bluetooth or Wi-Fi box? Well, the iPhone baseband or actually the iPhone basebands because there are two. The first variant of iPhone baseband that you can get are the Qualcomm chips and they are in the US devices. They use the Qualcomm MSM interface and this interface comes with some documentation and there are even open source implementations for it. So it's something that's probably easy to understand and easy to pass. On the other hand, in almost all devices that I had on my table were Intel chips. Intel has been recently bought by Apple at least the part that does the baseband chips and these are the chips in the European devices. That's the reason why almost all my devices had Intel chips and they use a special protocol. It's called Apple Remote Invocation and if you search for this on the internet, I even checked it like just today, there are no Google hits at all. So it really hasn't been researched before, at least not publicly. It's completely undocumented and it's a very custom interface. So it's not even used for Android. It's really an interface just for Apple. The component that we are going to fuzz in the following is Comcenter. So Comcenter is the equivalent of, for example, the Bluetooth or Wi-Fi Demon, but for Telephony. It's sandboxed as the user wireless, but it comes with a lot of XPC interfaces and this is something that we will also see later in the fuzzing results. The next part is that there are two flavors of libraries. So depending on if you have a Qualcomm or an Intel chip, different libraries will be used before certain actions or data actually is then processed by the Comcenter itself. So we have different code paths here. But all of this runs in user space and this means that both libraries can be hooked with Frida and can be fuzz with Frida. So that's very interesting. There's still a lot of stuff that goes on in the kernel, so what you can see here is that Qmai and Ari have some management information that is sent to Comcenter, but they don't contain the raw network or audio data. So they don't contain your phone call. They don't contain your website that you are opening. And the next issue is that Qmai and Ari are not directly sent over the air, but what is sent over the air are normal baseband interactions and these generate Qmai and Ari messages. So there's still some section in between, but of course there are now two ways. Either you have interaction that you can do over the air that is causing Ari and Qmai messages directly that are something that causes an issue in the upper layers or you might have this full exploit chain requirement that you first need to exploit the chip over the air and then from the chip break the interface into the Comcenter. Now Qmai, the code has a lot of assertions. So it's really asserting everything about the protocol, the length, the TLV format and so on. And if anything goes wrong, it really terminates Comcenter. So if you just send one invalid packet, Comcenter is terminated. 
This doesn't matter a lot, because if your protocol is stable and you usually don't send any invalid packets, then an invalid packet means that an attack is probably ongoing. So it's valid to terminate CommCenter. And furthermore, it doesn't matter a lot to the user. The worst thing that happens when CommCenter crashes, for example while you have an active phone call, is that the phone call gets lost or your LTE connection is reestablished. So you don't really notice it; it just feels like your internet connection breaks for a short moment. In contrast, there is the ARI protocol, and this is a parser that works very, very differently. Whatever it gets, it just parses, and it doesn't terminate CommCenter. So you can send many, many fancy things and it just continues, continues, continues, because the developers were probably very happy once they got their special protocol for Apple working and then never touched it again. But what does it look like? It has a very basic format, also with some TLVs. The first thing that I noticed when I fuzzed it is that the device log always complained about the sequence number being wrong. It just said: I expected the follow-up sequence number so-and-so. So I started to fix this. If you open it in IDA, you can see that the expected range is between 0 and 7FF hexadecimal, so you know at least the range. And then it gets weird: the sequence number is spread over three different bytes in single bits, shifted around and so on, and it's not even continuous. Very weird code. Probably they just added those sequence numbers to handle some race conditions or out-of-order packets, I really don't know. Something weird is going on there. But I wrote the code, I fixed the sequence number, and then during the replay of packets I noticed: well, it doesn't even matter. It doesn't matter if your sequence number is valid or invalid, parsing continues, and even worse, even packets with the wrong sequence number are parsed. Probably because otherwise there would be too many issues, because the protocol implementation is too buggy. And there are also a couple of other things. For example, if you send the first four magic bytes wrong, or a wrong length or something, then the packet is potentially ignored. But the parsing continues, and CommCenter is not terminated like in QMI. Since it's a proprietary protocol, there's currently no tooling available, but Tobias is working on a Wireshark dissector, and once he finishes his thesis, it will also be publicly released. So you need to wait a while, but then you will have a tool for this. Anyway, let's also talk about fuzzing this. I would not recommend fuzzing this, because you might brick your device or at least get it into a weird state. So just don't do this on your productive iPhone. I mean, obviously I know what I'm doing. So yeah, just fuzzing packets, right? But I'm not so sure about what exactly I'm doing. The only direction that I fuzz is from the baseband to the iPhone, not the opposite direction, so I hopefully prevent anything weird on the chip. But the iPhone might still answer with something invalid, and this might confuse the baseband or cause other crashes. And so I actually had to call for help: I bricked my iPhone. I mean, just one of my research devices, but still. It booted into PongoOS, but no longer into iOS, and it didn't tell me any debug message that was useful.
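As an aside on the framing just described: a fuzzer that wants its packets to be parsed at all has to repair the magic bytes and, optionally, the sequence number after mutation. The sketch below is purely illustrative; the magic value and especially the bit positions of the scattered sequence-number field are placeholders that would have to be replaced with the layout recovered from your own reverse engineering of the ARI header.

```python
ARI_MAGIC = bytes.fromhex("dec07eab")   # placeholder, take the real four magic bytes from captured traces
SEQ_MASK = 0x7FF                        # the valid range observed in the binary is 0..0x7FF

def fix_header(pkt: bytes, seq: int) -> bytes:
    """Repair magic and sequence number of a mutated packet (bit layout is made up)."""
    if len(pkt) < 8:
        return pkt
    buf = bytearray(pkt)
    buf[0:4] = ARI_MAGIC
    seq &= SEQ_MASK
    # The real field is scattered over three bytes in single bits; these shifts only
    # demonstrate how such a scattered field could be patched, they are not the real layout.
    buf[4] = (buf[4] & 0x0F) | ((seq & 0x00F) << 4)
    buf[5] = (buf[5] & 0xF0) | ((seq >> 4) & 0x0F)
    buf[6] = (buf[6] & 0xF8) | ((seq >> 8) & 0x07)
    return bytes(buf)
```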
Well, it turns out at least on the Qualcomm chips and that's where this happens. It just boots after a couple of hours again, but before it's just entering a boot loop. And on the Intel iPhones, I also almost break the 918, but luckily it didn't completely break. So the issue there is if you enable the baseband debug profile, then it writes a lot of stuff to the ISTP files. So that's some debug format of Intel. And every few minutes, it's just create something like 500 megabytes of data, at least on the iPhone 8. On the newer iPhones, this debug format is a bit shorter, so it doesn't create as much data, but still a lot. And if you don't delete this regularly, then of course your disk will be full and an iPhone behaves quite strange if it has a full disk. So you can still interact with the user interface, but you can no longer delete photos because deleting a photo, it seems it just needs some file interaction. Also you can no longer log in with SSH, which is also an issue because it somehow seems to create a file when logging in, so you can no longer delete any files. And I was just rebooting the iPhone after trying a couple of things, and luckily it came back and deleted some files and I was able to log in and remove the baseband logs, but be careful when doing this. And of course, all the iPhones are very confused from the fudging, so they really lose everything about their identity and location and they want to be activated again. So here you can see a smartphone that lost its location and really wants to be activated, activated, activated. During SMS fuzzing you might even get LESH messages and if you click on the head menu on the direct theme, they are displayed black on gray, so probably nobody ever tested it. Also great if you have a locked iPhone, you can still display SIM menus and SIM messages on top of the lock. Okay, so I guess I have to revise my first instruction, so pass this, really, really fast this. It's a lot of fun, maybe just not on your primary device, but you will enjoy fuzzing these interfaces. But first of all, you obviously need to build a fuzzer, so how do you build a fuzzer? The first fuzzer that I used was the one that I also used for Bluetooth that just uses the existing byte stream of the protocol and then flips single bits and bytes. So it has this high state awareness, but it also means that like some kind of monkey I was just calling myself, writing SMS to myself, enabling flight mode, everything that you could just imagine and it's a very boring task. But it also found very fancy bugs that I couldn't reprove with the other fuzzers yet, because it can reach states that just injection of packets cannot reach. So at least it was quite successful. And well, I fast-whiped this for something like three days and it already found bugs. That's very different to the Bluetooth fuzzer, so there seem to be more bugs in Comcenter. And so I just wrote to Apple, yeah, hi there, I wrote this really, really ugly 10 lines of code fuzzer and see what it found, awesome, awesome, awesome and crash logs are attached. And obviously this is simple to reproduce because I only fast for three days, got most of these crashes multiple times. So here you go and draw my fuzzer. And this was probably quite stupid because it's not that simple. So it's really not easy to reproduce the crashes. Most of all, well, of course, the script is so generic that it runs on all iPhones with an Intel chip. So no matter if I take an iPhone 7 or an iPhone 11, it will just work. 
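That "really ugly 10 lines of code" style of fuzzer boils down to something like the following sketch: hook the function that hands incoming baseband messages to CommCenter and occasionally flip a byte in place before parsing continues, while you generate traffic by hand. The library name, the handler offset and the argument layout (pointer, length) are assumptions standing in for your own reverse-engineering results, so treat this as an illustration of the approach rather than the original script.

```python
import frida

JS = r"""
// Placeholders: library and offset of the inbound message handler differ per iOS version.
var handler = Module.getBaseAddress("libARIServer.dylib").add(0x12345);

Interceptor.attach(handler, {
    onEnter: function (args) {
        var buf = args[0];                 // assumed: pointer to the raw message
        var len = args[1].toInt32();       // assumed: message length
        if (len < 8 || Math.random() < 0.5) return;            // leave some packets intact
        var off = 4 + Math.floor(Math.random() * (len - 4));   // skip the assumed 4-byte magic
        var bit = 1 << Math.floor(Math.random() * 8);
        buf.add(off).writeU8(buf.add(off).readU8() ^ bit);     // flip one bit in place
    }
});
"""

session = frida.get_usb_device().attach("CommCenter")
script = session.create_script(JS)
script.load()
input("Mutating live baseband traffic in place, press enter to stop\n")
```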
But the crash logs that you get are very different depending on if you fast on pre-A12, so iPhone 7 and 8 or on the later versions like the iPhone 11 and SE2. So you cannot reproduce the same crash logs that easy. And also it depends a lot on the SIM, so even on a passive iPhone, if you don't do any active phone calls and so on, you would get different results. So I started my parsing actually with a Singaporean SIM card without any data contract or phone contract on top of it. And it already found a couple of things. But yeah, it might just behave very different on just a slightly different configuration. Anyway, let's listen to a null pointer that it found. And this null pointer has been fixed in iOS 14.2 and it's in the audio controller. So you can hear some loop going on there. What you can see here is me calling the Deutsche Telekom and so on, so they have this variable in text. Guten Tag und herzlich willkommen beim Kundenservice der Telekom. And then I call again and have a crash. And now let's listen to the crash. First what is that? In fact, I also recorded another one. So this one is with Aldi Talk. And now let's listen to a special offer by Aldi Talk in 3, 2, 1. Since these first parsing results were very promising, I decided to use the latest toothpicker version and extend it for parsing Arri. And I called it ice pickery because the little chips are also called ice. So I just cloned Dennis' latest toothpicker alpha, which is very, very unstable, but this one actually runs on the iPhone locally without any interaction with MacOS or Linux. So it doesn't need to exchange any payload via USB. And also it's using AFL++, which is a much faster mutated and radamsa. So from a speed consideration, this is a much better design. However, AFL++ didn't turn out to be the best fuzzer for a protocol. So most of the time it actually spent trying to brute-pause the first magic bytes, the first four bytes, because it tries to shorten inputs. It's also not aware of something like a packet order. So it was just brute-posing those first four bytes. And well, the next issue is that for some reason, if the first four bytes are invalid, the Arri parser slows down a lot. So I was suddenly down to something like less than 10 fast cases per second. And also there is no awareness of the ice picker in this case of the Arri host state. So Arri sometimes shuts down this interface if it thinks that something is very invalid. And the parser would just continue. So I looked into the ID by syslog after the parser couldn't find any new coverage for more than six hours. And I was wondering, what is the issue here? Is the implementation wrong or is it the fuzzer? And it really looks like the fuzzer is producing inputs that are not good for protocol fuzzing. Of course, this is stuff that you can optimize. So AFL++ can do a lot here. So you can tell it a bit how the protocol looks like and also get it to not brute-force the first four magic bytes. But for this, I would have to recompile the whole thing. And it was something that compiled on Dennis machine, but it didn't compile on my machine because I had my Xcode beta in a weird state. And well, of course, some of you might now say, yeah, just download and install a new Xcode. But this takes so long that actually writing the next fuzzer seemed to be easier. Still, this variant of ice picker was interesting to me because it was the first time when I saw that the fuzzer in the situation works, including coverage. 
And also my replay works across my multiple iPhone versions. So a corpus collected on an iPhone SE 2 was replayable on an iPhone 7. It was not useless in that sense, but I just decided not to use this configuration. So I wrote a very simple fuzzer again, and I didn't do the porting of everything to run locally on iOS. I kept the design a bit simpler, or at least easier to code, and had my fuzzer running on Linux, using only Frida on iOS. It cannot reproduce all the states and crashes that I observed with my very first fuzzer, but most crashes could be reproduced. I didn't do any coverage. I didn't do any smart mutations, just very stupid mutations, and basically a very blind injection. But this was super fast. So instead of the 20 fuzz cases per second, I already had something like 400 fuzz cases per second on an iPhone 7, which was about the same speed or even faster than the AFL++ variant. And I can at least correct the length field, sequence number and so on before injecting the payload. Since it doesn't do that great mutations, I at least need to collect a good corpus with many SIMs and many calls. And I'm also logging the packet order with this, so it is at least aware of the packet sequence in the sense that I can reproduce the sequence later on. I had this fuzzer running on a couple of iPhones in parallel for multiple weeks, and it found a lot of interesting crashes. So that's my go-to fuzzer. I still wanted to confirm that not collecting coverage wasn't an issue. So I also cloned the publicly released ToothPicker, which definitely finds new coverage. It's using the Radamsa mutator, which is very, very slow, but it does a bit smarter mutations, at least in terms of protocol fuzzing. It's still only aware of single packets, and it's only using the same packet five times in a row to confirm coverage, et cetera. Another issue is that it cannot catch a lot of the CommCenter crashes. It happens quite often that CommCenter crashes, and if you cannot catch the crash with Frida and everything crashes, then you need to start the fuzzer again. But you also need to delete the files in the corpus that led to the crash, because otherwise you would just run into the same crash very fast. So it needs a lot of babysitting. I also had it running for a couple of weeks, but sadly it didn't find any new crashes. So at least I can be sure that fuzzing much slower but with coverage is not an improvement. Still, the mutations it creates are quite useful, as you can see in the following. You can even see this phone number scrolling here and so on, so it generated a very long phone number correctly into some TLV structure here. That's quite interesting to see, and it is something that you could not reach by just flipping bits and bytes. There's one big shortcoming that all of these fuzzers have, including the initial ToothPicker, which is that they don't have any kind of memory sanitization. The framework that you would usually use in user space on iOS is MallocStackLogging. I even got this running for CommCenter, so it's a bit of command line juggling, but in the end you can enable MallocStackLogging also for CommCenter. The issue here is that it increases the memory usage a lot, and even if you configure CommCenter to have a higher memory allowance, the usage is so high that it's just immediately killed by the out-of-memory killer. So this doesn't work. Then there is also libgmalloc (Guard Malloc). It doesn't exist for iOS, it just exists in Xcode.
I got one of the Xcode libraries running on one of my iPhones. I have no idea if this is an expected configuration or not. At least I could execute smaller programs. And then when you use this on CommCenter, it just crashes with a libgmalloc error on parsing some of the configuration files very, very early when starting CommCenter. So all of this didn't work, and this also means that the fuzzer cannot find certain bug types, or crashes much later when encountering bugs. So all of the fuzzers that I created are not perfect, but at least they found a lot of different crashes. Let's look into this. The first obvious number that you see here is the 42, so I stopped fuzzing after 42 crashes. At least crashes that I think are individual crashes and that are not caused by Frida, so I tried to filter out Frida crashes. And this corresponds to the total amount of crashes, but only some of them are replayable by either one or multiple packets. And for the replayable crashes, I can also check whether they were fixed in recent iOS versions, so the most recent iOS 14.3, or not. Then I also marked two colors here, because there are the Intel libraries, but there are also the Qualcomm libraries. And for the Qualcomm libraries, I didn't spend as much time fuzzing because I have fewer Qualcomm phones, but also all the asserts in the code prevent a lot of issues from being reached. So the libraries themselves have fewer issues, and also within CommCenter less of the code that has improper state handling is reached. The location daemon is also marked with a big gray box here, because the location daemon, similar to CommCenter, uses some of the raw packet inputs and parses them, so it has special parsers for Qualcomm and Intel. And it's also an interesting target because of this. Other than this, I got really a lot, a lot, a lot of different daemons crashing, some of them even with replayable behavior. So for example, there is the BioDest radio manager daemon that you can just crash via one Intel packet, but this has been fixed. And then there is one interesting crash that I actually got via Qualcomm and Intel libraries, in the mobile internet sharing daemon; this has also been fixed. And some of the crashes only happened via Qualcomm, but I'm not sure if that's a Qualcomm-specific thing or just randomness of the fuzzer. So the mobile internet sharing daemon has an issue where it accesses memory at configuration strings, so there are different strings at this memory address. I found this quite early, but I was not aware of the fact that so many other daemons are actually crashing when I fuzz CommCenter, so I didn't look into this in the very beginning. And when I reported it to Apple, they said, yeah, yeah, we already know about this and we fixed it in a beta prior to the report. So sadly, nothing that I got a CVE for. Another interesting crash is in the cell monitor, but only in the Intel library. The cell monitor is something that is running passively in the background all the time and it parses, for example, GSM and UMTS cell information. I already found this on the Singaporean SIM without any active data plan in my very first round of fuzzing and reported it back then to Apple. I don't know if it's triggerable over the air or not, so I guess it's something that you first need chip code execution for. And it has been fixed in iOS 14.2. And I wrote a lot of emails with Apple because I thought that they didn't fix it.
And the reason for this is that both the GSM cell info and the UMTS cell info function, when they parse data, they have two different bugs. So I still got crashes in the same functions. And I thought, okay, same function, still a crash. The bug is not fixed. But actually it's very high quality code and it's just multiple bugs per function. And there is even one more issue in the cell monitor, even though I think the remaining bugs are very simple crashes. So nothing that could be exploitable at all, but still hints to the great code quality. And the same story is that there are even more bugs to be fixed. So most of them are probably just stability improvements, but some of them are still interesting. So let's see how this goes. So since I told that it's a very simple fuzzer, some of you might have already started coding those 10 lines of code for fuzzing while I continue talking and scrapped their old iPhones that they are willing to lose if something goes wrong. So how can we actually build a fuzzer that is performant and replicates some of the bugs that I found just within a day? Let's take a look. When you do freedom fuzzing, a lot of the stuff that you do is limited by the processing power of the iPhone. So your iPhone will get very, very, very hot and it might even drain more battery than it can get via the USB port. So it might even discharge by fuzzing and performance is really key. So you need to identify bottlenecks. So yeah, I said toothpicker or ice picker. The initial version is just 20 fast cases per second. And you can tune this to something like 20,000 fast cases per second. So I already told that I tuned it to something like 400 or 500 fast cases per second, but why the 20,000? So initially a student of mine did some fuzzing in a very different part of it and said on my iPhone 6S, it's running with 20,000 fast cases per second. I was like, no way, no way. But actually you can do this. So this depends a lot on the freedom design. The first variant, how most freedom scripts are written is that you have some Python script that runs on Linux or Mac OS and it has a couple of functions that you can see here. So first of all, it has this on message callback. So this on message callback is something that we need later and we just register it to our free descript. The free descript I'm going to show you in a second and you load the script and the script can then even call functions on your iPhone. For this, you load a second script on your iPhone. So this is JavaScript injected into the iOS target process and it can, for example, use the send function to send something back to the on message function and it can export functions via IPC. So you can then call them. All this happens via JSON and so it needs serialization and deserialization, which means you cannot send hex data or binary data directly. So you have a hex string that you encode into JSON, which is then parsed as binary data. And also it's all via USB. So you also have to speed limitation by USB. And of course, if you use the free to see bindings locally on the iOS smartphone, it is a bit faster, but it's still not perfect. So the more you can prevent from this JSON part and USB part, the better. The actual pattern looks a bit like this. So you are in the lib-iris server. So that's the slowest library from the diagram before. And then you define this inbound message callback function, which has two arguments, which are the payload and the length. So this looks a bit cryptic, but that's basically it. 
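As a sketch of that pattern: the JavaScript below runs inside CommCenter, wraps the inbound message callback as a NativeFunction and exposes an inject RPC export, while the Python part drives it over USB. The library name (rendered here as libARIServer.dylib), the offset and the argument types are assumptions standing in for whatever reverse engineering yields, so this illustrates the plumbing rather than the original fuzzer.

```python
import frida

JS = r"""
// Placeholder offset of the ARI inbound message callback (payload pointer, length).
var handler = Module.getBaseAddress("libARIServer.dylib").add(0x12345);
var inboundMsgCB = new NativeFunction(handler, 'void', ['pointer', 'uint']);

rpc.exports = {
    inject: function (hex) {
        var bytes = [];
        for (var i = 0; i < hex.length; i += 2)
            bytes.push(parseInt(hex.substr(i, 2), 16));
        var buf = Memory.alloc(bytes.length);
        buf.writeByteArray(bytes);
        inboundMsgCB(buf, bytes.length);   // looks to CommCenter like a message from the chip
        send({ injected: bytes.length });  // travels back as JSON over USB
    }
};
"""

def on_message(message, data):
    if message["type"] == "send":
        print(message["payload"])

session = frida.get_usb_device().attach("CommCenter")
script = session.create_script(JS)
script.on("message", on_message)
script.load()
script.exports.inject("deadbeef" * 4)      # one JSON + USB round trip per fuzz case
```

Every call to script.exports.inject pays for one JSON-serialized USB round trip; moving the mutation loop into the JavaScript side and only sending occasional summaries back removes that per-case overhead, which is exactly what the speed measurements discussed next quantify.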
And then you can, but you don't have to add this interceptor here because you might want to fix your sequence number or add basic block coverage to your father, et cetera. So this is also done there. And then you can just call this inbound message callback of Ari and send Ari payloads. So this already can be very different. So if you now call this wire RPC, export via a Python script on your laptop, you can reach something like 500 fast cases per second if you inject SMS, which are quite processing intensive payload. Or if you just do the same thing, and if you just run this inbound message callback in a loop locally with JavaScript without any external Python script, then you would get 22,000 fast cases per second on the very same device. So this is the speed difference that the JSON serialization, the serialization and the USB in between make. So I did a few more measurements. And sadly on the iPhone 8, there is a bug that prevents me from collecting coverage. But what you can see is so the first part here is if you have just a bit flipper in a loop that calls the target function, you can get 17,000 fast cases per second on an iPhone 7. As soon as you start collecting basic block coverage, not processing it, just collecting, you drop to 250 fast cases per second. So you need to ask yourself if your puzzle gets really that much better from collecting coverage. And another thing is that's this line above. So if you just print the packet that you first or injected and print this via Python to your laptop, you also have a huge slowdown, which is not as large as the coverage slowdown. But still you can see every print and every sending of a message in between a Python script and JavaScript takes a lot of time. Now if you have this remote SMS injection that I had before, then you drop to 400 fast cases per second. There's a blind injection without any coverage. If you collect coverage but don't process coverage, then you are down to 100 fast cases per second. So for the initial toothpick design, this would be the optimum. But because the radams and mutators very slow and because you also need to process the coverage information, et cetera, that's down to 20 fast cases per second. So this is the comparison here and now you can imagine why collecting coverage probably isn't always useful and why also having your laptop calculating better mutation because it's easier to write a mutator there than directly in JavaScript is not always the best idea. So let's watch one last demo video. What you can see here is when you try to delete SMS after all of the fuzzing, it really doesn't break neither by the settings nor via the SMS app. So you really need to reset your iPhone after fuzzing it for too long. No other chance than this to delete the messages. With this, we are already at the end of this talk. But of course, there will be a Q&A session and if you missed the Q&A session, you can also ask me on Twitter or write me an email. Thanks for watching. We'll see you in the next video.
|
How secure is the interface between baseband chips and iOS? While this interface should protect against escalations from the baseband into operating system components, its implementation is full of bugs. Fuzzing this interface is not only relevant to security, but also results in various funny effects, since the iPhone loses information about its identity and location, and eventually ends up in a state with a few thousand unread SMS that can no longer be deleted. The baseband chip is the phone within a smartphone. While users might disable Bluetooth or not join untrusted Wi-Fis to increase security, mobile data and phone calls are always on. Some baseband features remain enabled even without an active data pass or SIM card installed. Thus, baseband chips are a popular target for Remote Code Execution (RCE) attacks. Launched over-the-air, they do not leave any trace on intermediate servers. With recently published tools, emulating and fuzzing baseband firmware has become accessible and gained a lot of attention from security researchers. Yet, a critical part of an exploit chain remains escalating from the baseband chip into the operating system. An attacker with code execution in the baseband chip could modify network traffic and escalate with a Web browser exploit. However, wireless communication is already susceptible to manipulation without a baseband exploit due to missing user data integrity protection, as shown by the IMP4GT attacks on LTE. Moreover, network traffic manipulation still requires the attacker to wait until their target creates traffic in the context of a Web browser. More stealthy attacks can be achieved via baseband activity that does not require any user interaction, such as retrieving information about nearby base stations. On an iPhone, the interface forwarding this information comes in two variants for Intel and Qualcomm baseband chips. Both interfaces interact with CommCenter, which is the phone component of the iPhone that parses and forwards phone calls, SMS, and more. This talk demonstrates how to fuzz these interfaces as well as various challenges and solutions in fuzzing them. The baseband interface is such an integral part of iOS that fuzzing results are not limited to finding issues within CommCenter but also led to the discovery of bugs in various daemons connecting to it. Some of these issues might also be triggered by a rogue base station, though fuzzing the baseband interface locally results in higher performance and more control. Fuzzing the baseband interface demonstrates fuzzing in a way that is easy to understand even for those who are not familiar with security research. The iPhone under test receives SMS so fast that it will not finish playing the Dimm notification sound before the next SMS arrives, and, thus, this actually sounds like D-d-d-di-di-d-d-di-d-di-d-di-d-dimm! The sound effect when the audio controller hangs up is also quite fun. Exposure notifications are no longer available due to the location change, since the iPhone uses baseband information to determine its region. And due to all the confusion about its identity, it regularly asks for activation. Besides security analysis, understanding the baseband interface means that it can also be controlled on jailbroken iPhones. Moreover, passive observations are possible on non-jailbroken devices with a baseband debug profile. This opens up these devices for new wireless research opportunities.
|
10.5446/52065 (DOI)
|
A European regulation claimed to guarantee net neutrality and the right for users to choose their own routers. But is the battle for net neutrality over yet? No, it is not. In this next talk, Lukas Lasotta, who works for the FSFE, will explain how these new national implementations of the regulation will put these rights at risk. I am looking forward to his overview over recent successes for router freedom in the EU and what challenges are coming in the future. Lukas, we are really happy to have you. Okay, thank you very much. Hello, Victor, and hello, everybody. I am very pleased to be here today, and I would like to thank the organizers very much for the opportunity to talk about a very interesting topic, not just a legal board topic, but certainly something that concerns all of us. So thanks, Victor, for the nice introduction. Today, my talk will be about net neutrality in Europe and the continuation of the struggle, continuation of the debates that we are having right now. My name is Lukas Lasotta. I am deputy legal coordinator of the Free Software Foundation Europe, and also I am a fellow researcher at the Hubot University, where I am working also in the field of IT law. Feel free to reach out. There is my link about me. So let's get started. Today, I want to talk about router freedom, an activity that we conduct in the Free Software Foundation Europe for already several years. This talk has gained more traction and become very important in the last years. We will talk about what is router freedom, what is open net internet and device neutrality, and of course, why router freedom today is in danger in Europe, and what you can do to help. So how you can join our advocate, so how you can join us in our activity. So everybody in Europe, every citizen has his or her right, guaranteed to use their own equipment to connect to the internet. First, I would like to say that, well, internet, I'm sorry, internet access is a human right. So we do everything today in the internet, and this year has shown that not only our normal interactions, but our not our personal interactions, but also our work interactions, everything that we do are connected to the internet. So we must guarantee or empower users to have access to the internet through their own means, through their own equipment. So guaranteeing that they're right to use their own equipment in order to connect to the internet is a must. And since 2015, this right is guaranteed by European regulation. So through the net neutrality regulation, although the official name is called open internet regulation, guarantees end users, consumer citizens, for the far freedoms of net neutrality, the freedom of content, freedom of application, they want to use freedom of services and freedom of devices. Well, we use different devices to connect to the internet. When we are at home, we use home home routers. When we are in the street, we use our own smartphone for an example. And although it's quite intuitive that nobody today tells that the ISP owns your own telephone and you can use your own telephone, your own smartphone to connect to the internet, it's not like that when it comes to the router, the home router that you use. So in fact, we have seen that users are, although it is guaranteed by law, users have a lot of problems to use their own router to connect to the internet. And of course, if you have your own router, if you have the free choice of your own equipment, not only is compliant with the law, but there is a lot of benefits. 
And here I would just like to say some of them. If we protect router freedom, so the type of equipment that we use to connect to the internet, of course, then we are protecting freedom of choice. We are protecting privacy, that protection of the users, competition in compatibility in the equipment market, and of course, security because users can update their own software to make their equipment more secure. Yes, but therefore, the legislation protects that, but as we're going to see, it's not so straightforward to comprehend. But net neutrality and this for freedoms is just the beginning of a broader debate that is going on right now in Europe. Net neutrality guarantees that every user can access any online content or service using the device of their choice. The focus of net neutrality debate is concerned with the various network management practice that internet service providers, the network operators, should be allowed to pursue being the central gatekeepers between consumers and content providers. There is a difference between however, between open internet and net neutrality. The European net neutrality regulation from 2015, we're going to talk about that in the minute, and trying user rights to access and distribute information and content online. But it applies only to the internet service providers, which are only one link to the internet access chain. The ability to access the internet and provide content relies on a much larger chain in which other stakeholders also play an important role. Well, open internet is a broader concept. It involves regulation against discriminatory practices involving software and hardware which could impact end user's rights of choice. For example, operating system neutrality divides the manufacturers with select an operating system and a small number of popular apps that cannot be uninstalled. The same is true for app stores in the mobile market. Search neutrality, search engineers like Google or Bing may have no neutral conduct by ranking search results that relate to its on or affiliated services high in the organic and paid search results. And of course, browser, web browser neutrality. Because web browsers can also be a vehicle that allows vertical integrated companies in favor of their own services at the expense of consumers freedom of choice. Well, so we think that the debate of net neutrality is much broader when we consider disorder kind of neutrality in this other elements in the chain. And in this picture, we can see that the router stands in the end of the network between the end users domain and the public network ISP domain. And that's exactly the point where router freedom is in danger. And why router freedom a concept, a very clear concept in law is being danger right now. Well, we're going to see that there are vagal rules on the European level. National authorities are responsible for monitoring the activities of the internet service providers. They're doing the bed monitoring and the network operators, they are imposing barriers on users. So let's see the first one. So as I said, the net neutrality regulation from 2015, they provide users the right of choice to choose and to use the routers and equipment they want to connect to the internet. But in 2020, there's a new set of rules, technical rules regarding the network termination point, a definition that determines if the router and the modem should pertain to the user or to internet service providers. 
And Berek has put some criteria to determine who should have the property over the router, the user or the ISP. And the most controversial topic in that is that they say if there is a technological necessity, if the ISP can determine a technological necessity, the router can go directly to the infrastructure of the ISP. And right now, the EU member states, they will implement these rules. The net neutrality regulation doesn't need to be implemented because of regulation, but the guidelines on NTP will have to be implemented on EU member states. And here lies the problem because it will be easier for ISP to prove technological necessity and they can keep our routers in their own infrastructure. So besides that, these national regulatory authorities who, okay, so right now, the national regulatory authorities, they are responsible for monitoring the activity of ISP they are doing very bad and they are not imposing any type of fines or penalties on those who do not comply with the regulation, allowing users to use their own routers. And of course, ISP, they are imposing different kind of barriers for consumers to use their own routers. And we at FSFE, we divided these barriers in two types, the software barriers and the hard barriers. Of course, the hard ones are much serious because for an example, customers are forbidden to use another router by contract. ISP do not provide users the login data to the public network. They use non-standard plugs or proprietary protocols. And of course, they don't offer technical support for internet access. But there are other types of barriers, software barriers, for an example, when they don't inform the users that they have the right to use their own routers or they manipulate users through customer support in favor of their own routers. And of course, there are other types of barriers that ISP are very creative in order to impose their own equipment on users. So what you can do to help, as we said, there is very clear rules defending our rights. But right now, we are suffering. A lot of people are suffering with their network operators, the internet service providers, because they cannot use their routers. And we want to change this scenario. We want to demonstrate that in fact, our rights should be respected. And we can do that because there are very clear rules. So what you can do to help? Three steps. Contact your ISP. Ask if you can use your router. And send me an email. Tell me the results. Lucas.lasota.fse.org. And let me know what are the kind of problems you are having. And we are working there to raise awareness in Europe around router freedom and fighting for our rights. So I think we are done. My presentation is quite short. And I would be very happy to hear if there is some questions. Thank you very much. Thanks. And what people can do, I really like the strong call for action. Ask your ISP. So far, there have been no questions from the internet. But I thought that maybe when I was thinking about what you just told us, I was really interested if you could tell us, so imagine a legal situation where the router freedom in the U is guaranteed. What beautiful nice things would happen for the user. What is the vision we are working towards here? What are the really good things that would happen for users if we won this fight? Yes, absolutely. Again, I wanted to show the benefits for that. Because first, freedom of choice. Users will be completely free to choose the equipment they want. Today, for an ISP, it is very easy to impose their own router. 
And if we are obliged to use their router, we have problems with security because we don't know which kind of software, if it is prepared to have software running in their equipment. We don't know if the software has some bugs, and therefore we don't know which kind of software is running on the equipment. So our privacy and that protection is all compromised. And of course, when we have the possibility to choose our own equipment, it is very nice for the market because we can then provide free competition and compatibility. So we are fighting this kind of fight for router freedom. So users can enjoy these kind of benefits. Okay, thank you. There just a question came in. So the question is asking about the possible examples of these technological measures. I think the rest is missing. Do you already understand what the question means? So I think there's technical measures the ISPs are implementing or the requirements. And I think the question is asking for examples for that. Do you have any maybe? Yes, for an example. Well, a very, very true example because happened to me and to other stuff is in the FSFE. The ISP, they don't want to provide use the login data to the public network. So we tell them, look, I have my router here and I need the login data to access your network. So I can have access to the internet here at my home. And they say, I'm sorry, the login data is provided only inside the our own equipment. So we cannot provide you that you must use our own equipment. So this is one of types of barriers, one type of technical measures that ISP impose on users. And we talk like here, but it's happening all over Europe. Okay, thanks a lot. Thanks a lot for the talk. Thanks a lot for answering the questions. As always, we have the after talk discussion board where you can you guys can discuss something with Lucas and ask him additional questions. It's at all as always on discussion.rca3.ou.social. And there's nothing more to say it for me. Thanks for listening. Thanks, Lucas, for being here today. And the program will continue shortly. Thanks a lot. And thank you again, the organizers for this beautiful, very nice opportunity to be together. Thanks a lot. A pleasure.
|
Since 2013 the Free Software Foundation Europe has been fighting for users' right to choose and use their own routers and modems. After the successful fight for net neutrality in Europe in 2015, users in Europe have the right to freely choose routers and modems to access the Internet. Router Freedom is the standard, empowering users to control their devices, with clear advantages for security, privacy and the ecological footprint. Now, challenges in the national implementation of the European rules will put this right at risk. In parallel, new demands are being raised against commercial practices involving software that is sold with hardware by default, for instance operating systems on laptops, browsers and app stores. The talk will provide an overview of the recent successes for router freedom and the legal challenges against net neutrality in Europe.
|
10.5446/52082 (DOI)
|
All right. CWA, three simple letters. But what stands behind them is not simple at all. For various reasons, the Corona One app has been one of the most talked about digital projects of the year. And it's rather simplistic facade. There are many considerations that went into the app's design to protect its users and their data. While they might not be visible for most users, these goals had a direct influence on the software architecture. For instance, the risk calculation. Hidote to talk about some of these backend elements is one of the solution architects of the Corona One app, Thomas Klingweil. And I'm probably not the only one here at IC3, RC3, who's an active user. And I'm pretty curious to hear more about what's going on behind the scenes of the app. So without further ado, let's give a warm virtual welcome to Thomas Klingweil. Thomas, the stream is yours. Hello everybody. I'm Thomas Klingweil. And today in the session, I would like to talk about the German Corona One app and give you a little tour behind the scenes of the app development, the underlying technologies and which things are invisible to the end user, but still very important for the app itself. First, I would like to give you a short introduction to the app, the underlying architecture and the use technologies. For example, the exposure notification framework. Then I would like to have a look on the communication between the app and the backend and looking at which possible privacy threats could be found there and how we mitigated them, of course. And then I would like to dive a little bit into the risk calculation of the app to show you what it actually means if there's a red or a green screen visible to the end user. First of all, we can ask ourselves the question, what is the Corona One app actually? So here it is. This is the German Corona One app. You can download it from the app stores. And once you have on board it onto the app, you will see the following. Up here, it shows you that the exposure logging is active, which means this is the currently active app. Then you have this green card. Green means it's low risk because there have been no exposures so far. The logging has to be permanently active and it has just updated this afternoon, so everything is all right. Let's say you've just been tested at a doctor's, then you could click this button here and you get to this screen where you're able to retrieve your test result digitally. To do this, you can scan a QR code, which is on the form you will see from your doctor, and then you will get an update as soon as the test result is available. Of course, you can also get more information about the active exposure logging. When you click the button up here, then you get to this screen and there you can learn more about the transnational exposure logging because the German Corona One app is not alone. It is connected to other Corona apps of other countries within Europe, so users from other countries can meet and they will be informed mutually about possible encounters. So just to be sure, I would like to dive into the terminology of the exposure notification framework so you know what I'm talking about during this session. It all starts with a temporary exposure key, which is generated on the phone and which is valid for 24 hours. From this temporary exposure key, several things are derived. First, for example, there's the rolling proximity identifier key and the associated encrypted metadata key. 
This part down here, we can skip for the moment being and look at the generation of the rolling proximity identifiers. Those rolling proximity identifiers are only valid for 10 minutes each because they are regularly exchanged once the Bluetooth MAC address change takes place. So the rolling proximity identifier is basically the Bluetooth payload your phone uses when the exposure notification framework is active and broadcasting. When I say broadcasting, I mean every 250 milliseconds your phone sends out its own rolling proximity identifiers so other phones around which are scanning for signal in the air basically can catch them and store them locally. So let's look at the receiving side. This is what we see down here and now as I already mentioned, we've got those Bluetooth low energy beacon mechanics sending out those rolling proximity identifiers and they are received down here. This is all a very simplified schematic just to give you an impression of what's going on there. So now we've got those rolling proximity identifiers stored on the receiving phone and now somehow this other phone needs to find out that there has been a match. This happens by transforming those temporary exposure keys into diagnosis keys which is just renaming but as soon as someone has tested positive and a temporary exposure key is linked to a positive diagnosis, it is called diagnosis key and they are uploaded to a server and I'm drastically simplifying here. So they receive the other phone, here they are downloaded, all those diagnosis keys are extracted again and as you can see the same functions are applied, again HKDF, then AES and we get a lot of rolling proximity identifiers from matching here, down here and those are the ones we have stored and now we can match them and find out which of those rolling proximity identifiers we have seen so far and of course the receiving phone can also make sure that the rolling proximity identifiers belonging to a single diagnosis key which means they belong to one single other phone, are connected to each other so we can also track exposures which have lasted longer than 10 minutes. So for example if you are having a meeting of 90 minutes this would allow the exposure notification framework to get together those up to 9 rolling proximity identifiers and transform them into a single encounter which you then get enriched with those associated encrypted metadata which is basically just the transmit power as a summary down here. So now that we know which data are being transferred from phone to phone we can have a look at the actual architecture of the app itself. This grey box here is the mobile phone and down here is the German Corona one app, it's a dashed line which means there's more documentation available online so I can only invite you to go to the GitHub repository have a look at our code and of course our documentation so there are more diagrams available and as you can see the app itself does not store a lot of data so those boxes here are storages so it only stores something called a registration token and the contact journal entries for our most recent version which means that's all the app stores itself. What you can see here is that it's connected to the operating system API SDK for the exposure notifications so that's the exposure notification framework to which we interface which takes care of all the key collecting, broadcasting and key matching as well. 
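To make the derivation described above concrete (temporary exposure key to rolling proximity identifier key via HKDF, then one AES-128 block per 10-minute interval, and the same computation repeated on downloaded diagnosis keys for matching), here is a minimal Python sketch following the published Exposure Notification cryptography specification; the helper names are mine, and the matching loop is drastically simplified compared to what the framework really does.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def rpi_key(tek: bytes) -> bytes:
    """RPIK = HKDF-SHA256(TEK, salt=None, info='EN-RPIK'), 16 bytes."""
    return HKDF(algorithm=hashes.SHA256(), length=16, salt=None, info=b"EN-RPIK").derive(tek)

def rolling_proximity_identifier(rpik: bytes, interval: int) -> bytes:
    """One RPI per 10-minute interval: AES-128(RPIK, 'EN-RPI' || padding || interval)."""
    padded = b"EN-RPI" + bytes(6) + interval.to_bytes(4, "little")
    enc = Cipher(algorithms.AES(rpik), modes.ECB()).encryptor()
    return enc.update(padded) + enc.finalize()

def matches(diagnosis_key: bytes, key_interval: int, observed_rpis: set) -> set:
    """Re-derive the 144 RPIs of a downloaded diagnosis key and intersect with what we saw."""
    rpik = rpi_key(diagnosis_key)
    derived = {rolling_proximity_identifier(rpik, key_interval + i) for i in range(144)}
    return derived & observed_rpis
```

The interval argument is the ENIntervalNumber, i.e. Unix time divided by 600 seconds, and one key covers 144 such intervals, which corresponds to the 24-hour validity mentioned above.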
Then there's a protocol buffer library which we need for the data transfer and we use the operating system cryptography libraries or basically the SDK so we don't need to include external libraries for that. What you can see here is the OS API SDK for push messages but this is not remote push messaging but only locally so the app triggers local notifications and to the user it appears as if the notifications, the push message came in remotely but actually it only uses remote local messages. But what would the app be without the actual backend infrastructure? So you can see here that's the Corona 1 app server that's the actual backend for managing all the keys and you see the upload path here, it's aggregated there, then provided through the content delivery network and downloaded by the app here. But we've got more, we've got the verification server which has the job of verifying a positive test result and how does it do that? There's basically two ways. It can either get the information that a positive test is true through a so called teletun which is the most basic way because people call up the hotline, get one of those teletun, enter it into the app and then they are able to upload the diagnosis keys or if people use the fully digital way they get their test result through the app and that's why we have the test result server up here which can be queried by the verification server so users can get their test result through the infrastructure. But that's not all because as I've promised earlier we've also got the connection to other European countries so down here is the European Federation Gateway service which gives us the possibility to A, upload our own national keys to this European Federation Gateway service so other countries can download them and distribute them to their users but we can also request foreign keys and it gets even better. We can be informed if new foreign keys are available for download through a callback mechanism which is just here on the right side. So once the app is communicating with the backend what would actually happen if someone is listening? So we've got our data flow here and let's have a look at it. So in step one we are actually scanning the QR code with a camera of the phone and extracted from the QR code will be a GOD which is then fed into the Corona One app. You can see here it is never stored within the app that's very important because we wanted to make sure that as few information as possible needs to be stored within the app and also that it's not possible to connect information from different sources for example to trace back diagnosis key to a GUID to allow a personification. It was very important that this step is not possible so we had to take care that no data is stored together and data cannot be connected again. So in step one we get this GUID and this is then hashed on the phone being sent to the verification server which in step three generates a so called registration token and stores it together so it stores the hashed GUID and the hashed registration token making sure that a GUID can only be used once and returns the unhashed registration token to the app here. 
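A toy model of this step shows why nothing sensitive needs to be stored: the verification server only keeps the hashed GUID (to enforce one-time use) and the hash of the registration token it hands out. The class and method names below are mine and this is only an illustration of the design, not the real implementation.

```python
import hashlib
import uuid

def sha256(value: str) -> str:
    return hashlib.sha256(value.encode()).hexdigest()

class VerificationStore:
    """Toy in-memory model of the registration-token issuance step (illustrative only)."""

    def __init__(self):
        self.used_guid_hashes = set()
        self.token_hash_to_guid_hash = {}

    def issue_registration_token(self, hashed_guid: str) -> str:
        if hashed_guid in self.used_guid_hashes:
            raise ValueError("this GUID has already been used")   # one QR code, one token
        token = str(uuid.uuid4())
        self.used_guid_hashes.add(hashed_guid)
        self.token_hash_to_guid_hash[sha256(token)] = hashed_guid
        return token        # returned in plain text, stored server-side only as a hash

# App side: the GUID from the QR code is hashed before it ever leaves the phone.
store = VerificationStore()
registration_token = store.issue_registration_token(sha256("guid-from-qr-code"))
```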
Now the app can store the registration token and use it in step five for polling for test results. But the test results are not available directly on the verification server, because we do not store them here; instead the verification server connects to the test result server by using the hashed GUID, which it can get from the hashed registration token here, and then it can ask the test result server. The test result server might have a data set connecting the hashed GUID to a test result, and this check needs to be done because the test result server might also have no information for this hashed GUID, which only means that no test result has been received yet. This is what happens here in step A: the lab information system, the LIS, can supply the test result server with a package of hashed GUID and test result, so it's stored there, and if it's already available in the test result server it is returned to the verification server here in step seven and accordingly in step eight to the app. You might have noted the test result is also neither cached nor stored here in the verification server, which means that if the user then decides to upload the keys, a TAN is required to pass on to the backend for verification of the positive test. A similar flow needs to be followed: in step nine, again the registration token is passed to the TAN endpoint. The verification server once more needs to check with the test result server that there is actually a positive test result, which comes back here in step 11, and a TAN is generated in step 12. You can see the TAN is not stored in plain text but as a hash, while the plain text is returned to the app, which can then bundle it with the diagnosis keys extracted from the exposure notification framework and upload it to the Corona-Warn-App server, or more specifically the submission service. But this also needs to verify that it's authentic, so it takes it in step 15 to the verification server, to the verify endpoint, where the TAN is validated, and validation means it is marked as used, so the same TAN cannot be used twice. Then the response is given to the backend here, which can then, if it's positive, which means if it's an authentic TAN, store the diagnosis keys in its own storage. And as you can see, only the diagnosis keys are stored here, nothing else, so there's no correlation possible between diagnosis keys, registration token or even GUID, because it's completely separate. But still, what could be found out about users if someone were to observe the network traffic going on there? An important assumption at the beginning: the content of all the messages is secure, because only secure connections are being used, and only the size of the transfer is observable. So from a network-sniffing perspective we can observe that a connection is created, and we can observe how many bytes are being transferred back and forth, but we cannot learn about the content of the messages. So here we are: we've got the first communication between app and server in step 2, because we can see, okay, if someone is requesting something from the registration token endpoint, this person has been tested, maybe on that specific day. Then there's the next communication going on in step 5, because this means that the person has been tested, which we might know from step 2 already, but this person has still not received the test result, so it might still be positive or negative. If we can observe that the request to the TAN endpoint takes place in step 9, then we know the person has been tested positive.
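Condensed into code, the polling, TAN and submission steps just walked through might look roughly like the sketch below. All endpoint names, field names and URLs are illustrative guesses rather than the real API contract, and the real key-upload payload is a protocol buffer rather than raw bytes.

```python
# Condensed sketch of steps 5-16 described above. Endpoints, fields and
# payload format are illustrative; the real submission payload is protobuf.
import requests

VERIFICATION_SERVER = "https://verification.example"   # placeholder
SUBMISSION_SERVICE = "https://submission.example"       # placeholder

def poll_test_result(registration_token):
    # Step 5: the app polls with its registration token; the verification
    # server asks the test result server and never stores the result itself.
    resp = requests.post(f"{VERIFICATION_SERVER}/testresult",
                         json={"registrationToken": registration_token})
    return resp.json().get("testResult")   # e.g. pending / negative / positive

def fetch_tan(registration_token):
    # Steps 9-13: only works if a positive result can be confirmed; the TAN
    # is stored server-side only as a hash and is single-use.
    resp = requests.post(f"{VERIFICATION_SERVER}/tan",
                         json={"registrationToken": registration_token})
    return resp.json()["tan"]

def submit_diagnosis_keys(tan, diagnosis_keys):
    # Steps 14-16: the submission service has the TAN checked on the verify
    # endpoint and stores only the diagnosis keys if the TAN is authentic.
    requests.post(f"{SUBMISSION_SERVICE}/diagnosis-keys",
                  headers={"TAN": tan},
                  data=b"".join(diagnosis_keys))   # real payload: protobuf
```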
So, okay, we're using HTTPS, so we cannot actually learn which endpoint is being queried, but there might be specific sizes to those individual requests which might allow us to learn about the direction the request is going in, just as a thought. And then of course we've also got the submission service in step 14, where users upload their diagnosis keys and their TAN, and this is really without any possibility for discussion: if an app contacts the Corona-Warn-App server and builds up a connection there, this must mean that the user has been tested positive and is submitting diagnosis keys. Apart from that, once the user submits diagnosis keys and the app talks to the Corona-Warn-App backend, it could also be possible to relate those keys to an origin IP address, for example. Or could there be a way around that? So what we need to do in this scenario, and what we did, is to establish plausible deniability, which basically means we generate so much noise with the connections we build up that it's not possible to identify the individuals who actually use those connections to query the test results, to receive the test result if it's positive, to retrieve a TAN or to upload the keys. So generating noise is the key. What the app actually does is simulate the backend traffic by sending fake or dummy requests according to a so-called playbook, from which the app takes which requests to do, how long to wait, how often to repeat those requests and so on. And it's also interesting that those requests might either be triggered by a real event or they might be triggered by just some random trigger. So scanning a QR code or entering a teleTAN also triggers this flow, a little bit differently, but it still triggers it, because if you get your registration token with your test results and the retrieval of your test results stops at some point, this must mean, okay, there has been a test result, negative or positive, and if it's then observable that you communicate with the submission service, this would mean that it has been positive. So what the app actually does is: even if it is negative, it continues sending out dummy requests to the verification server, and it might also, based on random decisions within the app, retrieve a fake TAN and do a fake upload of diagnosis keys. So in the end you're not able to distinguish between an app actually uploading real data and an app just doing playbook stuff and creating noise, so users really uploading their diagnosis keys cannot be picked out from all the noise. And to make sure that our backend is not just swamped with all those fake and dummy requests, there's a special header field which informs the backend to actually ignore those requests. But if you would just ignore them and not send a response, which could be implemented on the client, then it would be observable again that it's just a fake request. So what we do is let the backend skip all the interaction with the underlying database infrastructure, not modify any data and so on, but there will be a delay in the response, and the response will look exactly the same as if it were a response to a real request. Also the data in both directions, from the client to the server and from the server to the client, gets some padding so it's always the same size no matter what information is contained in these data packages. So observing the data packages or their size does not help in finding out what's actually going on.
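The playbook idea just described could be sketched roughly like this. The header name, padding size, probabilities and timing below are all invented for illustration; the real playbook, header field and delays are defined in the app and backend configuration.

```python
# Sketch of the plausible-deniability playbook: dummy requests on random
# schedules, padded to a fixed size and marked with a header the backend
# recognizes. Header name, sizes, timings and probabilities are made up.
import os
import random
import time
import requests

VERIFICATION_SERVER = "https://verification.example"   # placeholder
SUBMISSION_SERVICE = "https://submission.example"       # placeholder
FAKE_HEADER = {"cwa-fake": "1"}   # hypothetical marker for dummy requests
PADDED_SIZE = 250                 # hypothetical fixed payload size

def pad(payload: bytes) -> bytes:
    # Real and dummy requests are padded to the same size, so the observable
    # transfer volume carries no information about the content.
    return payload + os.urandom(max(0, PADDED_SIZE - len(payload)))

def playbook_loop():
    while True:
        time.sleep(random.uniform(1, 4) * 3600)    # random wait between runs
        # Dummy test-result poll; the backend skips database work for it
        # but answers after a realistic delay with a same-sized response.
        requests.post(f"{VERIFICATION_SERVER}/testresult",
                      headers=FAKE_HEADER, data=pad(b"{}"))
        if random.random() < 0.1:
            # Occasionally simulate the full "positive" path as well:
            # fake TAN retrieval followed by a fake key upload.
            requests.post(f"{VERIFICATION_SERVER}/tan",
                          headers=FAKE_HEADER, data=pad(b"{}"))
            requests.post(f"{SUBMISSION_SERVICE}/diagnosis-keys",
                          headers=FAKE_HEADER, data=pad(b"{}"))
```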
Now you could say, okay, if there's so much additional traffic because there are fake requests being sent out and fake uploads being done and so on, this must cost a lot of data traffic for the users, and that's a good point. It is all zero-rated with the German mobile operators, which means it's not charged to the end customers but is simply paid for. Now there's still that thing with the extraction of information from the metadata while uploading the diagnosis keys. This metadata might be the source IP address, it might be the user agent being used, so then you could distinguish Android from iOS and possibly also find out about the OS version. To prevent that, we've introduced an intermediate server which removes the metadata from the requests and just forwards the plain content of the packages to the backend service, so the backend service, the submission service, is not able to tell where this package came from. Now for the risk calculation we can have a look at which information is available here. We've got the information about encounters, which are calculated at the device receiving the rolling proximity identifiers, as mentioned earlier, and that information comes to us in 30-minute exposure windows. I mentioned earlier that all the rolling proximity identifiers belonging to a single diagnosis key, so a single UTC day basically, can be related to each other, but what the exposure notification framework then does is split up those encounters into 30-minute windows: the first scan instance where another device has been identified starts the exposure window, and then it's filled up until 30 minutes are full, and if there are more encounters with the same diagnosis key, a new window is started, and so on. A single exposure window only contains a single device, so it's a one-to-one mapping, and within that window we can find the number of scan instances, so scans take place every 3 to 5 minutes, and within those scan instances there are also multiple scans, and we get the minimum and the average attenuation per instance. The attenuation is actually the reported transmit power of the device minus the signal strength when receiving the signal, so it basically tells us how much signal strength got lost on the way. If we talk about a low attenuation, this means the other device has been very close; if the attenuation is higher, it means the other device is farther away. And the other way around: through the diagnosis keys which have been uploaded to a server, processed on the backend, provided on the CDN and which came to us through that way, we can also get information about the infectiousness of the user, which is encoded in something we call the transmission risk level, which tells us how big the risk of infection from that person on that specific day has been. The transmission risk level is based on the symptom status of a person, and the symptom status means: is the person symptomatic or asymptomatic, does the person want to tell about the symptoms or maybe do they not want to tell about the symptoms. In addition to that, if there have been symptoms, it can also be clarified whether the symptom onset was on a specific day, whether it has been a range of multiple days when the symptoms started, or people could also say, I'm not sure when the symptoms started, but there have definitely been symptoms.
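Before moving on to the symptom charts, here is a small sketch of the attenuation definition and the per-window bucketing just described (attenuation equals reported transmit power minus received signal strength, with the 55, 63 and 73 dB thresholds named in this talk). The data layout and the inclusive or exclusive choice at each threshold are assumptions for illustration.

```python
# Attenuation and per-window bucketing as described in the talk.
# Thresholds (55/63/73 dB) come from the talk; comparison operators and the
# data layout are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class ScanInstance:
    tx_power_dbm: int   # transmit power reported by the sending device
    rssi_dbm: int       # signal strength measured by the receiving device
    seconds: int        # duration this scan instance stands for

def attenuation(scan: ScanInstance) -> int:
    # Low attenuation = little signal lost on the way = the other device was close.
    return scan.tx_power_dbm - scan.rssi_dbm

def bucket_seconds(window: list) -> dict:
    buckets = {"close": 0, "medium": 0, "prefilter_only": 0, "ignored": 0}
    for s in window:
        a = attenuation(s)
        if a < 55:
            buckets["close"] += s.seconds           # counted in full later
        elif a < 63:
            buckets["medium"] += s.seconds          # counted half later
        elif a < 73:
            buckets["prefilter_only"] += s.seconds  # counts only toward the 10-minute prefilter
        else:
            buckets["ignored"] += s.seconds
    return buckets
```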
So this is the first case: people can specify when the symptoms started. We can say that's the symptom onset down here, and around that date of the onset of symptoms the risk of infection is basically evenly spread; red means high risk, blue means low risk. When you move that symptom onset around, the infectiousness moves around with it, and there's basically a matrix from which this information is derived; again, you can find all that in the code. There's also the possibility to say, okay, the symptoms started somewhere within the last seven days, that's the case up here, you see it's spread a little bit differently. Users could also specify it started somewhere from one to two weeks ago, you can see that here in the second chart, and the third chart is the case for when the symptoms started more than two weeks ago. Now here's the case that users specify that they just received a positive test result, so they're definitely corona positive, but they have never had symptoms, which might mean they are asymptomatic or presymptomatic, and again you see around the submission there is an increased risk, but all the time before here only has a low transmission risk level assigned. If users want to specify that they can't remember when the symptoms started but they definitely had symptoms, then it's all spread a little bit differently, and equally if users do not want to share the information whether they had symptoms at all. So now we've got this big risk calculation chart here, and I would like to walk you quickly through it. On the left we've got the configuration which is being fed into the exposure notification framework by Apple/Google, because there are also some mappings which the framework needs from us. There is some internal configuration, because we have decided to do a lot of the risk calculation within the app instead of doing it in the framework, mainly because we have decided we want an 8-level transmission risk level instead of the only 3 levels, so low, standard and high, which Apple and Google provide to us.
For the sake of having those 8 levels, we actually sacrifice the parameters of infectiousness, which is derived from the parameter days since onset of symptoms, and the report type, which is always confirmed test here in Europe. So we've got those 3 bits, actually, which we can now use as the transmission risk level, which is encoded on the server in those 2 fields added to the keys, put on the content delivery network, downloaded by the app and then passed through the calculation here. So it comes in here, it is assembled from those 2 parameters, report type and infectiousness, and now it goes along: first we need to look whether the sum of the durations below 73 decibels, that's our first threshold, has been less than 10 minutes. If it has been less than 10 minutes, we just drop the whole exposure window; if it has been 10 minutes or more, we might use it, depending on whether the transmission risk level is larger than or equal to 3, and then we use it. Now we actually calculate the relevant time: times between 63 and 55 decibels are only counted half, because that's a medium distance, and times below 55 decibels, that's up here, are counted in full. Those are added up and then we've got the weighted exposure time. Now we've got the transmission risk level, which leads us to a normalization factor, basically, and this is multiplied with the weighted exposure time. What we get here is a normalized exposure time per exposure window, and those times for each window are added up for the whole day, and then there's the threshold of 15 minutes, which decides whether the day had a high risk of infection or a low risk. So now that you all know how to do those calculations, we can walk through them for three examples. The first example is here: it's a transmission risk level of 7, and you can see those dots are all pretty close; our magic thresholds are here at 73, that's for whether it's counted at all, then at 63, that's this line, and at 55. So we see, okay, there's been a lot of close contact going on and some medium-range contact as well. So let's do the pre-filtering, even though we already see it: has it been at least 10 minutes below 73 decibels? Yes, definitely, because each of those dots represents three minutes; for this example calculation I just assumed the scan windows are three minutes apart. Is it at least transmission risk level 3? Yes, it's even 7. So now we do the calculation: it has been 18 minutes at a low attenuation, so at a close proximity, that's 18 minutes, and 9 minutes, those are those three dots here, at a medium attenuation, so a little bit farther apart, and they count as 4.5 minutes. Adding it up gets us to 22.5 minutes, multiplied by the factor of 1.4, giving us 31.5 minutes, which means a red status already with a single window. Now in this example we can see it's always pretty far away and there's been one close encounter here, transmission risk level 8. Pre-filtering: has it been at least 10 minutes below 73 decibels? Nope, okay, then we already drop it. Now there's the third one, transmission risk level 8 again; it has been a little bit away but there's also been some close contact. So we do the pre-filtering: has it been at least 10 minutes below 73? Now we already have to look closely; so yes, this one is below 73, this one as well, okay, so we've got 4 dots below 73 decibels, which gives us 12 minutes. Is it at least transmission risk level 3? Okay, that's easy, yes. And now we can do the calculation: it has been 6 minutes at the low attenuation, those two dots here, okay, they count in full, and 0 minutes at the medium attenuation, you see this part here is empty.
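The per-window calculation just walked through can be condensed into a short sketch. The two normalization factors mentioned in the talk (1.4 for transmission risk level 7 and 1.6 for level 8) are the only ones given here, so the mapping below is deliberately incomplete and illustrative; the full table lives in the app's configuration. The worked examples in the comments reproduce the numbers from the talk.

```python
# Sketch of the per-window risk calculation described above. The TRL-to-factor
# mapping only contains the two values mentioned in the talk; the complete
# mapping is part of the app configuration.
TRL_FACTOR = {7: 1.4, 8: 1.6}   # incomplete, illustrative

def normalized_window_minutes(minutes_below_73, minutes_below_55,
                              minutes_55_to_63, trl):
    # Pre-filtering: drop the window if fewer than 10 minutes were spent
    # below 73 dB, or if the transmission risk level is below 3.
    if minutes_below_73 < 10 or trl < 3:
        return 0.0
    weighted = minutes_below_55 + 0.5 * minutes_55_to_63
    return weighted * TRL_FACTOR.get(trl, 1.0)

def day_is_high_risk(per_window_minutes):
    # Windows of one day are summed and compared to the 15-minute threshold.
    return sum(per_window_minutes) >= 15

# Worked examples from the talk:
# Example 1: 27+ min below 73 dB, 18 min close + 9 min medium, TRL 7
#   -> (18 + 4.5) * 1.4 = 31.5 normalized minutes: red with a single window.
# Example 3: 12 min below 73 dB, 6 min close, 0 min medium, TRL 8
#   -> 6 * 1.6 = 9.6 normalized minutes: still green unless it happens twice that day.
```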
And the transmission risk level of 8 gives us a factor of 1.6; if we now multiply these 6 minutes by 1.6, we get 9.6 minutes. So if this has been the only encounter for that day, it's still green, but if for example you had two encounters of this kind, with the same person or with different people, then it would already turn into red, because then it's close to 20 minutes, which is above the 15-minute threshold. Now I would like to thank you for listening to my session, and I'm available for Q&A shortly. Okay, so thank you Thomas, this was a pre-recorded talk and the discussion was very lively in the IRC during the talk, and I'm glad that Thomas will be here for the Q&A. Maybe to start with the first question by Emhaar in the IRC on security and replay attacks: Italy and the Netherlands published the TEKs in some cases so early that they are still valid, we learned that yesterday in the time machine presentation. How is this handled in the European cooperation, and can you make them adhere to the security requirements? This is the first question for you, Thomas. Okay, so thank you for this question. The way we handle keys coming in from other European countries, which is through the European Federation Gateway Service, is that they are handled as if they were national keys, which means they are put under a kind of embargo until two hours after the end of their validity, to make sure that replay attacks are not possible. All right, I hope that answers the question. And then there was another one on international interoperability: is it EU only, or is there also cooperation between the EU and, for example, Switzerland? So far we've got the cooperation with other EU countries, that's the apps from the European Union which already interoperate, and regarding the integration of non-EU countries, that's basically a political decision which has to be made from that side as well, so that's nothing I as an architect can drive or control; so far it's only EU countries. All right, and then we have some comments and also questions on community interaction and the implementation of new features, which seems a little slow for some. There was for example a proposal for functionality called Crowd Notifier, for events and restaurants, to check in by scanning a QR code. Can you tell us a bit more about this, or are you aware of this? So I've seen that there are proposals online and that there's also a lively discussion on those issues, but what you need to keep in mind is that we have the task of developing this app for the Federal Ministry of Health, and they are basically the ones requesting features, and then there's some scoping going on. So I'm personally sorry to say that, again, I'm the architect, so I can't decide which feature is going to be implemented; as soon as the decision has been made that we need a new feature, so after we've been given the task, then I come in and prepare the architecture for that. So I'm not aware of the current state of those developments, to be honest, because that's out of my personal scope. All right, I mean it's often the case, I suppose, with projects of this size. Yeah, but overall people seem to like the fact that everything is available on GitHub, but some people are really dedicated and seem to be a bit disappointed that interaction with the community on GitHub seems a bit slow, because some issues are not answered as people would hope.
Do you know about some ideas on adding dedicated community managers to the GitHub community around the app? The people we speak with, that was one note in the IRC, seem to be changing every month, so are you aware of this kind of position of community management? So there are people definitely working on the community management; there's also a lot of feedback and comments coming in from the community, and I'm definitely aware that there are people working on that. For example, I get asked by them to jump in on certain questions where clarification is needed from an architecture point of view, and if you look at GitHub there are also some issues I've been answering; that's because our community team has asked me to jump in there. But the feedback that people are not fully satisfied with the way the community is handled is something I would definitely take back to our team internally and let them know about it. Yeah, that's great to know, actually, so people have some answers on that. Maybe one last very concrete question by Duffman in the IRC: is the inability of the app to show the time of day of exposure a limitation of the framework, or is it an implementation choice, and what would be the privacy implications of introducing such a feature? Actually a big question, but maybe you can cut it short. Yeah, okay. So the only information the exposure notification framework by Google and Apple can give us is the date of the exposure, and the date always relates to UTC there, so we never get the time of the actual exposure back, and when moving to the exposure windows we also do not get the time of the exposure window back. And the implications, if you were able to tell the exact time of the encounter, would be that people are often aware of where they've been at a certain time: let's say at 11.15 you were meeting with a friend and you get the notification that at 11.15 you had that exact encounter, then it would be easy to tell whom you've met who's been infected, and that's something not desired, that you can trace it back to a certain person. So personification would basically then be the issue. All right, and I hope we have time for this last question. Tish asks in the IRC: have you considered training a machine learning method to classify the risk levels instead of the rule-based method used? So, classifying the risk levels through machine learning is something I'm not aware of yet. The thing is, it's all based on a cooperation with the Fraunhofer Institute, where they have basically re-enacted certain situations, did some measurements, and that's what has been transferred into the risk model, so all those thresholds are derived from practical tests. So no ML at the moment. All right, so I suppose this was our last question, and again, Thomas, a warm round of virtual applause to you. Thank you again, Thomas, for giving this talk, for being part of this first remote chaos experience and for giving us some insight into the backend of the Corona-Warn-App. Thank you. I was happy to do so. I'm glad you're having me here. Thank you.
|
The German Corona-Warn-App was published on June 16, 2020 and has been downloaded more than 23 million times since then. Data privacy and security have been and are of utmost importance in this project – even when they are invisible to most users. In this session, Thomas Klingbeil, Solution Architect of the Corona-Warn-App, will shed some light on aspects such as plausible deniability and the risk calculation and show their influence on the overall architecture. When looking at a mobile app, many people forget about the backend. However, especially when designing this component of the overall system it is very important that it is not possible to learn about users' behaviours and the situations they are in by observing the data traffic. For the Corona-Warn-App this specifically applies to the test results and the sharing of diagnosis keys in case of a positive diagnosis. To protect users (i.e. to create plausible deniability), the Corona-Warn-App uses a playbook, which simulates realistic-looking communication between mobile app and backend, even if there is no need for communication at that point in time. In this session, Thomas Klingbeil will shed light on those and other mostly invisible aspects of the app (e.g. the risk calculation).
|
10.5446/52083 (DOI)
|
All right. So our next talk is called Hacking Diversity, where we basically try to treat a really awkward question about the spaces that we move in here, which is that we really have these ideas about inclusion and diversity, but in the end, most of the people that come just look like me. And in open source, most people look like me. And this is extremely strange, right? Because we have all of these ideas about diversity and everything. And today we try to answer the question why this happens and maybe what we can do about it. Our speaker for this is Professor Christina Dunbar-Hester. She's a professor at the University of Southern California, I think. And today she is showing essentially a condensed version of a book that she just wrote called Hacking Diversity. And I'm really looking forward to this talk because I also have asked myself these questions and I don't know the answers. So I'm looking forward to this. Please, Christina. Thank you so much. Thank you for the introduction and thank you for the invitation and thank you for all of your labor to get this remote experience off the ground. So I'm really happy and excited to be here, whatever that is. And I will get into the talk. Let's see here. Okay. Let me know. You should be able to see slides now. If that didn't work, let me know. Okay. Thanks. So, not a best practices talk so much as a first principles and how did we get here talk. My examples are mostly from the U.S. but they are part of a broader Euro-American milieu. And so to get started, I think I want to put up this quote from the Free Software Foundation from 2012. And the goal of this talk is really to give some context. And I think at the almost very end of 2020, it's safe to say that this is a fairly mainstream and uncontroversial topic, but it wasn't always this way. So the quote says, the Free Software movement needs diverse participation to achieve its goals. If we want to make proprietary software extinct, we need to make everyone on the planet engaged with free software. To get there, we need people of all genders, races, sexual orientations, and abilities leading the way. And as I said, I think this is a very recognizable sort of discourse now, but it hasn't always been, and I'm going to sort of unpack this for a little while. The outline of the talk, this is a pretty bare bones outline, but there's going to be a lot of sort of history and context, and then a little bit about the value and goal of diversity and how it relates to profits and markets, and also the goal of diversity and how it relates to other values, particularly justice. And I want to note that there are a couple of content warning slides on here, one for people who have been involved in promulgating genocide, and another for a person who has been ejected from hacking for abusive behavior. And so there will be a warning preceding each of those. Okay, so first, talking about genocide. In the 19th century in the United States, and even into the 20th century, there was an idea of a sort of frontiersman, a brawny man who you can see here; this is a folk hero, but he was important enough to still be represented on television in the 1960s. And the sort of consistent thing here is, I'm going to actually go, this is the genocide one, to these sort of consistent representations. You can see these folks are wearing, well, they're men and they're being manly, and they're wearing animal hide with the implication that they maybe shot the deer themselves.
They carry a gun, they're in naturalistic settings, they're sort of rough and ready for anything. I'm drawing here on historian Susan Douglas, who argues that around the turn of the 20th century, society started to change. And so even though there was still this mythos of this brawny frontiersman, what society actually needed was a reconfigured masculinity that didn't sort of have this rough physical brawny masculinity. And so masculinity itself, she says, was reconfigured to what she calls technical masculinity. And so the masculinity was sort of refashioned to be about mastery over machines, and particularly these sort of new cutting-edge electronic machines, which in this case was radio. So radio experimentation in the very early 20th century, first wireless telegraphy, and then later wireless sound transmission, which became broadcasting, she argues was a way to sort of refit masculinity for the way the society was changing. It was more urban, there was more specialized division of labor, needing people to work in professional white-collar fields with technology as opposed to going out and settling the west. And so here we see technical masculinity entrainment, basically a father with his very young son teaching him this is a way to be in the world. And Douglas argues that this started with ham radio in the early 20th century, but perhaps unsurprisingly, it continued to sort of persist over time. And so my next few slides are showing the same technical masculinity, which is about curiosity, solving a problem, expressing your will with technology, only with different sorts of technical artifacts. And so here this is the model railroad made very famous in Steven Levy's book about hackers. We also have, and this is probably from about the 1950s and into the 1960s, phone phreaking, and here this is also a 2600 magazine from the 80s, sort of continuing to mythologize phone phreaking. Going into the 70s, we see this with computers, the Homebrew Computer Club, a really important hobbyist formation for both the history of Silicon Valley and the history of hacking and free software: people who were sort of building and tinkering and experimenting. And so what we're starting to see here is, even as the tech shifts, the technical masculinity stays consistent. And this is probably the early 80s. And this slide is just an ad for a microcomputer, but we can see, you know, not only the representation of masculinity at the center, we also see femininity in relation to the technology, which is to say it's just a sort of, you know, ancillary handmaiden for the sort of male agent here. And as I said, this is all certainly a really cheesy ad, but I think, I hope, it underscores the sort of consistent promulgation of this relationship with technology. And so what I want to suggest is that tech here, over the 20th and into the 21st century, is not just reflecting a legacy of division, of which gender prescriptions and gender roles are part, but it has actually actively been involved in enforcing this. And so we've got basically a white patriarchal Christian native-born supremacy and a global system of racial capitalism. And so I've shown you who's sort of at the top of this hierarchy. We've got colonized subjects, immigrants, women, rural and lower class people, indigenous people coming out on the bottom. And both builders and consumers of tech are implicated in this tension. Okay, so going back even further in the history to sort of where some of this comes from.
I'm not sure how many of you thought that in a discussion of hacking, you'd be looking at a 19th century American oil painting. But here we are. This is called American Progress. So again, think mythologized American progress from the late 19th century. And as you can probably see, there's a real sort of light-to-dark element to the painting. And we have this maiden who's really not so much a person, but more like a god. This is sort of Greek iconography, sort of up above everyone, up above man. And we do see technology in the painting. We see the railroads and some ships on the sort of right hand side, which is the east. And so you can tell that she's sort of presiding over everybody settling the west. And again, they're bringing the light, which is civilization. The maiden herself is actually carrying a book which symbolizes knowledge, and what may not be obvious, but she's actually got telegraph wire strung around her arm. And then you can see the telegraph kind of behind her. And so this, you know, control over technology is, you know, part of how white settler, you know, newly arrived Americans are maintaining, or sort of promoting and maintaining, dominance over their new continent. And you can actually see that. So we've got these, you know, white settler folks in the center of the painting; all the way in the dark are indigenous people. And there's also actually a bear. So there's sort of, again, a biblical hierarchy of, you know, man over the beast. And you can tell that the indigenous subjects and the bear are probably either going to get run out of the frame or sort of forced to become civilized. So this is very deep in how American sort of hierarchy and notions of dominance get promoted and, you know, sort of renewed over time. And this is interesting because this ideology is so strong that it's actually succeeded in basically erasing some of the historical record. Like for instance, we know that there were women, highly skilled women operators, in World War II operating the first electronic computers. This is ENIAC in the U.S. But they were sort of written out of the record of computing once it became popular and, you know, moved out of a top secret military project. The women's roles were basically effaced and, you know, credit for dominance over and sort of control over the new technology publicly went to men. And again, so we see this sort of sorting happening in all these different ways, even in defiance of the actual historical record. Another instance which may be kind of surprising is, this is a really wonderful article by Lisa Nakamura that I'm drawing from here. This is Fairchild Semiconductor. They were based in Silicon Valley. They used to make microchips and associated equipment in Silicon Valley, but they had an intermediate period, before outsourcing that stuff to Asia, where they opened operations on the Navajo Reservation in the American Southwest. And so there are really interesting ways in which race and gender basically become resources for valuing the labor of some kinds of people more and other kinds of people less. So this reservation was attractive because regular American labor laws didn't obtain. And also, managers in their minds thought, oh, there's this history of Navajo weaving and sort of fine fabrication work.
And so there's a sort of stereotype that non-white people, particularly women, particularly, in this day and age, Asian women, have, quote, nimble fingers and are going to be really good and diligent at something like electronics assembly that we need to be really sort of diligently done. And so what we've got here is the sort of overlay of Navajo weaving and microchip. So Nakamura calls this insourcing, sort of outsourcing before outsourcing. And now those laboratories and factories have mostly moved to Asia, but this was the sort of period of experimentation with trying to alienate the labor from the sort of managerial home. And so now we would think, like, you know, your Apple products, assembled in China, designed in Cupertino or whatever, that kind of thing. This is a sort of early moment of that. And so again, I want to sort of underscore that race and gender are a resource for global capitalism to assign worth to some people's bodies and work and not to others. Another way that this works, I don't know how much people in the US will remember this let alone outside of the US, but this is a student, a high school student who is a Sudanese American, I believe, who was, you know, a geek, and he was enthusiastic about doing DIY electronics assembly at home, where he built a clock. And he brought it to school and the school called the police. And so here we can see that, you know, whiteness has been a resource for avoiding criminalization for certain kinds of sort of hacky activities. I'm certainly not saying that no white people have been criminalized for hacking, because that's not true, but certain activities get more of a pass based on who's participating in them. And I also want to point out that this legacy of division and this system of social sorting is flexible. In, you know, 2015, it could easily be turned to Islamophobic purposes, which is what happened here. And so what I want to point out is that there's this sort of, like, you know, history of division and really sort of policing who's in bounds and who's out of bounds for the most celebrated category of technological agent. But I also want to sort of introduce the idea that this is not inconsistent in a way with diversity as a market value. Capitalism is actually happy to affirm difference if it can help sell something, even though here we also see the sort of, you know, cultural and even legal system being brought to bear to punish certain forms of difference. Okay, so at this point, this kind of statement is really ubiquitous. This is from 2012, from a tech post, a TechCrunch post. In my mind, the women in tech discussion should really be framed as: having different people with different experiences and different outlooks helps you build a better product. So this is a pretty different framing of difference than the one I just showed you. But the point is capitalism is actually able to sort of reconcile these contradictions in a way. And you can also see this is my name tag from a Google-sponsored event I attended for work for this book. And you know, they're not only saying, you know, we need women to help us build a better product, they're also reflecting back this sort of symbol of femininity, the pink Venus sign, which of course turns a lot of people off, but it's, you know, if you're thinking about marketing, it's a way to symbolize this inclusion, right? Now I'm going to put up the only horribly academic slide I have for the whole talk.
This is a quote from Herman Gray, who says: abstract notions of rights and freedom and their expansion to new subjects elide the social salience of race and gender as a basis of inequality, even as it culturally recognizes and celebrates differences. So here we can see, you know, the market is happy to recognize and celebrate difference, to sort of take up, you know, women in tech or whatever, while sort of papering over and doing nothing to unseat the sort of core, which is that race and gender are a basis of inequality. So you can sort of have this lip service, abstract expansion of, you know, new identities. But what is sort of always intact is, even if you're sort of bringing one group over and saying, oh, you know, you're part of the dominant group now in some way, the system of sorting is remaining intact. And in a less abstract way, like in the US this summer, there were huge, you know, Black Lives Matter protests and uprisings. And pretty quickly, all these companies started saying, oh, yes, Black Lives Matter, we support this, you know; Amazon was really prominent among them. And yet, Amazon doesn't stop to question whether or not it's exploiting a racialized workforce during COVID with, you know, warehouse work and delivery work. These are some of the lowest paid workers. They are not getting health insurance. They're not getting consistent, you know, hazard pay or protection. And they're dying at disproportionate rates. But, you know, Amazon's very happy to say Black Lives Matter as part of the PR. Similarly, they're still, you know, basically building surveillance equipment. But there's, you know, there's no inconsistency between this sort of recognition and celebration of difference while working to continue to cement that difference and exploit that difference. So all this is to say is that diversity is, in my opinion, a rather toothless value to sort of attach the work and the sort of meaning to, for, you know, what's at stake with working with tech and with inclusion too. Diversity can sort of bring our attention to these patterns of social difference. But if it ends there, it can actually kind of draw us in the wrong directions, without the tools we might need to, you know, actually make some of the more, you know, justice-affirming points that I think are why people are drawn to these topics in the first place. Okay, so after this digression, I'm getting more into how this relates to hacking and free software. So I've established this sort of legacy of division. And I want to sort of underscore that the hacking and free software milieu has had this commitment to freedom and openness that's definitely been at the core pretty consistently. But historically, this has really had to do with, you know, the freedom and openness has been about controlling technology, some free speech, of course; it's definitely about the individual's exercise of freedom, without necessarily a lot of thought about, you know, who the individual is who's maximally empowered to, you know, be free. Or it's been about individuals in collectives, but ones that are relatively small and relatively homogenous. And so what I want to suggest is this sat within the bigger context of tech and division, but without really acknowledging this, because the, you know, freedom of the individual was presented as a sort of universal value, even though in practice, it really, really wasn't. And I think around 15 to 20 years ago, that really started to change.
When I started working on this project, there was already a good deal of agitation, forming some of these groups in free software and related projects to especially draw attention to the sort of disparities around women. And PyStar was initially for, sort of, women, and it was trans-inclusive. And, you know, I think pretty quickly, it started as for women, but then it became often non-binary- and trans-inclusive. So not a sort of essentialist version of women. Something happened in 2006 that really caused this topic to really spring to the fore in a lot of these communities: there was an EU policy report that came out. So the research was from, you know, 2004, 2005, and it showed that the rate of participation by women in FLOSS was less than 2%. And that was significantly less even than academic and proprietary computer science. And so that I think really shocked people who would maybe have sort of intuitively known, oh, yeah, this isn't very representative. But that number really galvanized a lot of conversations and got people started talking and organizing basically in new ways. And so I'm now going to show just a handful of sort of what this report caused, which is, again, a bunch of conversations. This is from the Hackers on Planet Earth (HOPE) conference in New York in 2006. And it may not totally be clear what's going on here. But some folks had responded to this statistic on the one hand, and to this quote from this United States Senator, who had said something like sort of gibberish when he was supposed to be considering net neutrality and internet regulation. And he said something like, the internet is a series of tubes, it's not a truck that you dump something on. And everybody was making fun of him for not even understanding network computing at all. But these activists sort of put these together in a sort of mashup. And they were selling t-shirts actually that said "the internet, a series of tubes". And as you can see, that's a sort of textbook representation of, like, a female reproductive system. And so they just sort of brought this to the conference and they were, you know, trying to force a conversation about it, because they estimated, and this is not an official count, that the ratio of women to men at HOPE was maybe like one to 40. And so they just wanted to force a conversation about this. This is an artifact from a little bit later, in 2014, but the rise of explicitly dedicated feminist hacker spaces. And this is from the US. And it's just a flyer for a zine making workshop, which is again a pretty mundane thing, but just the sort of difference between the 2006 sort of flag planting and something a bit later where there's actually a separate space here. And crucially, you know, zine making isn't necessarily in bounds with traditional hacking, but it is closer to sort of strands of feminist consciousness raising and riot grrrl. And so there's a sort of intermingling of these different kinds of threads of DIY basically. This is another artifact from someone in Philadelphia who was an artist and a designer and was trying to find a way from the stuff that she knew how to do with, you know, craft and sewing and find a way into electronics and soft circuits and, you know, doing new things. And so she, kind of for her own exploration, knitted a scarf using Ethernet cables. And for her, this was a kind of speculative object that was meant to help her find her way into electronics, but also to kind of start conversations about, you know, why haven't these things gone together.
Also seeing gatherings like this one, the more sort of explicitly radicalized feminist hacking convergence, and I don't know if everybody can read all the text, but it says trans futuristic cyborgs, anti-racist, anti-sexist, gynepunk, DIY, DIT, so taking DIY from a sort of heroic individualist mode to doing it with others, making it more self-consciously collective and less, you know, individualist and self-reliant. It also says gender hacking, anti-capitalism, libre culture, technologies, biohacking. So again, a sort of spectrum of politics and interventions around hacking and feminist hacking. And I'm going to dwell for a moment on feminist servers. I haven't had too much text on slides, but this one does. So these are artifacts that were, on the one hand, basically like an independently maintained server run primarily by women-identifying folks, or non-masculine-identifying folks, running free software. But there's also a sort of list of networking principles that gets out of that more kind of literal, artifactual mode into a more sort of speculative and aspirational sort of politics of what it means to be doing this. And so the first couple, it's actually a very long list and I only have a handful up here. The first couple I think are very consonant with kind of mainstream hacking: wants networks to be mutable and read-write accessible; radically questions the conditions for serving and service; experiments with changing client-server relations. Those again seem kind of axiomatic for mainstream hacking. But then the feminist server starts to go in some other directions: is autonomous in the sense that she decides her own dependencies. I think this one's really interesting and important. It's again, it's getting away from this kind of heroic individualistic or almost sort of libertarian sense of autonomy. It's just that the autonomy is about deciding where you're dependent and being sort of transparent and open about that. It's not about bootstrapping or being individually self-sufficient. Does not strive for seamlessness: division of labor, the not so fun stuff, is made by people, and that's a feminist issue. That one I think is really important. A lot of hacking that goes on in, say, a global North context is about the artifacts and the practices in that moment. But here this is, if it's not clear, drawing attention to where did that come from? It shouldn't be a seamless experience where you're not thinking about the prehistory, the supply chain of this artifact, which actually started with mining and fabrication and assembly and shipping, and will also have a post-use life, which might be recycling, might be very hazardous saving of precious metals by people without good labor protections, or might not. But instead of having this all be invisible, it's sort of drawing it forward. Treats technology as part of a social reality: this is a big one, but it's really just sort of opening up the space to acknowledge that legacy of division that I was talking about earlier. And then: takes the risk of exposing her insecurity. I like this one so much. It's really evocative on a few levels. But at the most basic level, what I want to point out is that it's very different than, again, a sort of thread or a strand of hacking that's about owning hard or mastery or something. Instead it's being sort of present with oneself and with others and disclosing insecurities, which could be network insecurities or personal ones. So it's taking what it means to be engaging in hacking in all these new and sort of mutated directions.
One more example from the sort of feminist hacking that I want to just tell you about for a second was this exercise. I was at a feminist hacking convergence in Montreal in 2016 and people did an exercise in understanding public key cryptography as a dance, where instead of learning about this theory, people actually tried to embody it. So placing your body in the relationship with tech, and often some of these things happen in kind of explicitly separate spaces, but going through the principles of cryptography in spontaneously choreographed dance and then performing it all together. Okay, so these are some of the, again, ways, the mutant strains of feminist hacking. I don't want to suggest that this has been just a very linear and conflict-free progression. And so I do want to dwell for a moment on just a single instance of conflict, which probably the details will be unfamiliar, but there might be a sort of wider recognition, I think. So this is from a hackerspace in Philadelphia in 2011, and a handful of members of the space proposed holding an event to hack sex toys. And they thought it was a pretty uncontroversial suggestion, you know, the same as, you know, having an Arduino night or, you know, I'm making stuff up, but, you know, they sort of put it out there as this like, well, let's do this on this Saturday. And they were really surprised when a bunch of other members of the space were very opposed to it. And this is in the book; it's a design for a DIY flogger made from a bicycle tube. And this was on the proposed sort of flyer for the event. And so what happened was they were really surprised that other people in the space were sort of like, no, no, we don't want to have this here. We don't think it's appropriate. And so here's a quote from one of the people who was opposing the event. And he says, a lot of the hackers here at the space are the Make Magazine Instructables type, not the Julian Assange HOPE Conference attending type, or even the kind that cares much about a global movement of hacker spaces. I'm not sure what category dildo hacking falls in. For a lot of people DIY has to do with the sort of father-son nostalgia. End quote. So this is really interesting. Because we've got this acknowledgement of hacking being a variety of things, right? And maybe again, for some people in European contexts where hacker spaces are often more political, maybe this Make Magazine sort of home project personal fabrication will be a little bit unrecognizable or even disappointing, but it is part of hacking and making in the US. And then of course, there's the information wants to be free, HOPE Conference, you know, lock picking, all these kinds of things, hacking that he acknowledges. But he says, I'm not sure what dildo hacking is, maybe suggesting it's not even hacking at all. And then he says, for a lot of people, DIY has to do with this father-son nostalgia, which I hope might make you think of the picture I had up at the very beginning of the father and son with the radio apparatus. And so it's really interesting that this sort of proposal, which these people didn't think of as being controversial, turned into this, you know, pretty full-on argument about what even hacking is, in a sort of essential way. And so here's a reply from one of the people who had proposed the workshop. And she says: So my concern here is that it's a hacker space. Initiative shouldn't be punished, particularly initiative that shakes up old patterns.
Our space is really stratifying into hardware tinkering as the core interest and white males as the demographic. I'm really frustrated. End quote. And so this, again, I assume that this is fairly recognizable to folks, right? That's sort of, if the core of what hacking is, is taking it upon yourself to take artifacts and practices that you already know how to do in a new direction, like, that's what hacking is, according to a lot of people. And so she's really surprised and really dismayed and really, I think, felt very hurt and rejected that this was flaring as controversy. And was really surprised that people were sort of raising the prospect that dildo hacking was this interruption of a nostalgic father-son tech practice that was somehow offensive. Certainly it seems like part of the problem might have been the introduction of sexuality, and maybe questions about whose sexuality, sexuality that didn't seem to center straight men. What happened was this didn't get resolved. The people who had proposed the workshop, who included women, men and non-binary people, actually left. They decamped to a new space that was forming with more kind of feminist hacking principles and that welcomed them there. And the first space stayed how they were and didn't have to keep having conflicts and grapple with this kind of controversy anymore, because those people left; they weren't kicked out, but they decided to leave. And so, you know, I know these conflicts have been very painful and alienating for people who have experienced them, even though maybe the content of this one seems almost funny or something in hindsight. But what I want to propose is that part of why this has been so difficult for people in these spaces is that people are actually wrestling with this whole legacy of division that I laid out in the first part of the talk. So it may feel like you're just having an argument with your fellow group members who are a lot like you, but then you're breaking down along some kind of line that you both can't cross over to with the other one. But there's a sort of really deep sedimentary layer of who has been anointed the sort of power of agency over tech, and for whom that has been sort of a taken-for-granted assumption, and who's had to sort of assert their presence or their right to be there in different ways. And so when there are these conflicts and flashpoints, all of that stuff is there. And that's actually really hard to solve anywhere. But it's very, very hard to solve in elective, voluntaristic associations, I think, also. So that's not to say impossible, but, like, there's a reason these conflicts are difficult. Okay, so returning to diversity. And this is the same quote, I won't read it again. But the sort of idea that women in tech are there to bring forward different experiences and build a better product. Diversity, the idea of diversity, is maybe necessary to start these conversations. But I don't think it's sufficient for the purposes here. It's too easily sitting alongside market values, which I think are not what people in hacker spaces are primarily most interested in. And that's not really why they're there. And it's also very easily steered away from the important political work that I think people in hacking communities often want to do. It can sort of mutate into this, you know, contradictory thing where you've got sort of market values on the one hand and something that isn't what you set out to do on the other hand.
And I'm going to illustrate that with this somewhat more provocative example. This is a meme I stole from the internet. But the point here is that you can make these diversity-affirming slogans. You know, here we've got Black Lives Matter and Yes We Can and LGBT sort of flags or slogans on a bomber, right? You can make these diversity-affirming slogans fit within a system that is fundamentally violent, carceral, militarized. It doesn't necessarily challenge the system itself to bring forward individuals' identities as members of marginalized groups. In fact, capitalism is actually quite happy to resolve what might seem like a contradiction here by commodifying identity, selling it as a brand, without resolving the fundamental tensions that we know are here, that have to do with social power and dominance and exploitation. So coming back to the free software quote from the beginning. As I said, this sort of hit consensus, but I'm actually going to argue it's not really going far enough. Diverse participation and making proprietary software extinct are fine. But I think they actually do not fully capture what's at stake in these, again, very tough conversations that have been happening in hacking and free software groups. And so, you know, we might think of this as, again, a point of entry, but we might want to take it a bit farther. And this is as far as I'll go with prescriptions or how-to. So, specifically in local, voluntaristic communities, you know, that are either your hackerspace in the city you live in or the project that's distributed but that you work on: articulate values and politics. Diversity is a good one, but I'm going to say it's necessary and not sufficient. And some of the things that I talk about in the book include, like, other forms of political beliefs, like decolonization or attention to militarism, that can actually sort of force you to have sometimes harder conversations, but ones that can clarify values and goals. Obviously, I don't need to tell, you know, hacking groups, but keep theorizing and keep experimenting. That is a way, you know, whether it's crypto dancing or not, it's a way to sort of, like, walk yourself through what you're trying to sort of build and iterate. And within spaces, I think at this point this is fairly uncontroversial, but I do chronicle in the book how people got here: making and enforcing rules, having conversations, sometimes one on one, right, not a sort of public conflagration or flame war, but, you know, if people feel safe, you know, respect each other enough to actually talk through what is the sort of point of contention or difference and see if you can understand one another. The other thing I want to point out, though, is that there's a whole lot of stuff going on here that is much, much bigger than the spaces and communities that you're in. And so it is kind of a mistake, and no one's fault, that you can't solve all of this in the groups that you're in. And so there also have to be much bigger society-wide goals that we, you know, all have our eyes on, because if we solve some of this stuff, then lo and behold, quote, diversity in tech would be a lot easier and probably less fraught and contentious. But things like demilitarization, by-change justice, basic social equity, workplace fairness, public reconciliation, I'm giving U.S. examples here, reparations, land back. And obviously the one that's coming for all of us, climate, is going to be, you know, the biggest problem.
It already is the biggest problem in terms of, you know, racial and economic and environmental justice worldwide. So in conclusion, my little take-home slogan is that there's no hack or tech audit for justice, but there are these different levels and, you know, you can work on one and work on another, but you can't solve the really big stuff in the sort of tech domain. And that's not a shortcoming and it's not for lack of trying. That is all. I'm very happy to quit talking so much and move to Q&A. Thank you so much for your attention. Thanks. All right. Thank you. Thank you. All right, everyone. Questions on Twitter, Mastodon, or the rC3 IRC. We will wait for a little bit, and I'll ask a question in the meantime. So this research for this book, when did you actually do it, like, time-wise? Yeah. It started, actually, we were talking before we had an audience a little bit about radio, and my earlier project was about people building radio stations, and, to try to be brief, they had a very emancipatory set of ideas about what it meant to teach people how to build electronics or solder a transmitter board or something, but they kept running into some of these patterns of exclusion that I mentioned. And so it was actually through them that I heard about these conversations that were starting to happen in hacking and open source communities where people were trying to directly, head on, confront some of this stuff. So I think I heard about it in around the 2006 era, started working on it, and maybe about 2010 to about 2015 is the period that I was actively going to conferences and meetups and spaces and interviewing people. So it's a sort of snapshot. Yeah, that's the sort of span. Thanks. All right. That's very interesting, because I kept thinking about whether you had encountered this sort of rise of the alt-right or something like this, because I feel like in the last couple of years these discussions have just become so much more radicalized, and not from the left but from the right, like where you can basically no longer talk about this without just all hell breaking loose, right? I think that's a really interesting point. And I think you're right. This does, I mean, I was finishing the book during the Trump era over here, and I know you've got your own counterparts in Europe. But this is all very much within a kind of Obama liberal neoliberal framing. And actually something I wrote about, I think, in the intro of the book is that the Obama White House had women and people of color in STEM as part of a kind of national security and nationalist agenda, basically, on their page, and the Trump administration took it down. So I think, and also in the book there's a discussion of a channel for Polish Python users where they were like fretting about how to ban Nazis from the channel, and whether Nazis were just people showing up and throwing swastikas all over the IRC channel, whether that was trolling or whether it was real Nazis. And yes, I think the sort of stakes of some of this have gotten a lot more stark. And so in certain ways, the sort of which-side-are-you-on questions are easier. But the sort of depth of what's at stake and what's being defended is maybe harder. So yeah, the political context, or the sort of temporal context, really is part of this. Yeah. All right, now we turn to the IRC. Have you looked into the women in FLOSS as perhaps being ones with predominantly engineers as mothers slash fathers? Sorry, could you repeat? Women...?
I think the question is whether you have sort of noticed a pattern that women that get into these spaces have, through their parents, encountered engineering, I think, as a familiar context. Sure. Yes, I have not personally done research on that, but other, you know, sort of historical and more sociological research shows that people who are exposed, you know, at a young age, that that's part of the differential. And even in households where, say, a computer came home early on, we're talking about a slightly older generation, a computer came home early on because, you know, parents brought it into the house. You know, boys were more likely to sort of claim it as theirs or take time on it or start playing with it, even a couple or a few years earlier than girls. And so, yeah, I haven't looked at those sorts of life narratives directly, but other people have and I draw on that. And that's also something I am hearing now, you know, from people who are adults and are thinking about these problems and how they want to not have their own kids encounter the same problems or sort of legacy of division. You definitely hear people saying, you know, I want this to get solved so my daughter doesn't have a hard time. But that's a little outside of what I've looked at, but it feeds in. Yeah. All right. All right. This is a slightly longer question. I'll try to do my best. I've witnessed a lot of white feminism in FOSS, that's free and open source software, right? And in FOSS diversity, equity and inclusion, DEI, spaces, is intersectionality sufficiently recognized as an issue in FOSS feminism, or is it actually worse off due to the low number of women in FOSS, around 2%? Great. Yeah. So first I will flag that the numbers in FOSS have started to change. There's later research that shows that they are up some. The question about white feminism is a really good one. And I do write in the book about people sort of grappling with that. And so the sort of trajectory was: the first category of exclusion that people started to notice was women. And I think I discuss how women opened up pretty quickly to being non-essentialist and, again, inclusive of trans and non-binary sorts of identities. But I think that race, and what I sometimes talk about as sort of global positioning, global North hackers in Europe and North America, it is harder, I think, for them to sort of deal as head on with race, and I mean, these are fundamentally questions of racial capitalism. And so being positioned within fairly well-advantaged global North communities, it is harder to confront some of those issues. I think there's a consciousness of it, but what I observed was a lot greater awareness and sort of development of potential solutions for being inclusive of women than of a sort of really broadly intersectional notion of women, including people in global South positions and in racialized categories in the global North. And again, I think there's probably been a sort of shift in attention to that, some of which post-dates the period in the book. But I also think that it's uniquely hard, I think, to solve in volunteeristic groups, because the forces, at least in the US, and I would speculate in Europe as well, like the forces that cause inequality and segregation, and the tech industry is a really good place to see these contradictions. Like what's going on now with Google and the firing of Dr.
Timnit Gebru. Places where there's a sort of capitalistic incentive are not going to be able to solve these problems of inequality, because the profit motive is always going to be there to build surveillance tech, to assist countries that want to build prisons. Again, this is what's coming with climate stuff. And so saying, oh, you need to hire more Black women or something is like running smack into these contradictions. And this is part of why I say this really can't be solved within tech. And these are very big thorny issues. Another thing, the final thing I'll point out, this is sort of rambling, is that for a voluntary group it's going to be easier to make fairly small interventions. And so I think that that's, I actually have somebody talking about this, like if we make the space more inclusive to anybody and say, you know, bad behavior isn't welcome here, you know, that can hit a note where it might cause there to be a sort of more inclusive community that would be welcoming to a bunch of different kinds of folks, but it's not necessarily realistic, in a volunteeristic group that's small, to tailor a response to all the sorts of forms of exclusion that different kinds of people have experienced. And so, again, I think of this as kind of a question of scale. But I really do think that the sort of ways that volunteeristic groups, i.e. not the market, not workplaces, articulate, you know, what they think the problems are and how they can sort of begin to talk about solutions are really important, precisely because they're not hamstrung by the same contradictions that for-profit spaces are. That was long, it's a really great question. I do take it up some; the people I was writing about, I think, were starting to take it up some; it's probably more full-throated now, and it's very complicated. Yes, all of these things. Yeah. All right, we have an interesting question. Would you advise people to try to change communities from within or just start new structures with more intersectional spaces? I don't have a great answer to that. I think it is kind of the pressing question of the day, I think, in a lot of spaces. And I see good answers on both sides, and I think it depends, perhaps. I do see a virtue in some space being set aside, but how that, you know, separate space chooses to interface with a sort of wider space is going to vary. And I don't think it's necessarily a binary, like you're either totally outside or you're within, having a big discussion about how to be maximally inclusive. I think those things are always kind of dialogically happening. But I've seen people argue both sides of it and I've seen, I think, compelling answers on both sides of it. But yeah, it is kind of the place where the idea that we're sort of all taking up this project together can start to, you know, break down. And some people think you're really losing a lot if people go off and stop, you know, working together as some sort of unified group. And so, yeah, I don't have a great answer to that. I do write about it in the book. And I would say it depends on what the goals are. I think having some separate space is probably important in any event. Yeah, it seems like these kinds of hacker spaces have at least the advantage of being able to accommodate subgroups, right? So you can have these certain events or these certain working groups that can focus on these issues.
For example, I think our host today, the xHain hackerspace in Berlin, they started this talk series, Gespräche unter Bäumen, which just means "talks under trees". They have an LED tree in their hackerspace. And it just sort of naturally happened that it would have only women as speakers. And it was just this lovely natural evolution of just having much more interesting topics and not just, you know, the traditional male hacker kind of topics. So I think it's really cool when you have this ability to have these initiatives inside existing spaces somehow. But that's just a remark from my side. Oh, someone had a question. The title of the book is just Hacking Diversity, right? I think we mentioned it at the beginning. Yeah, I think that's the whole title, yeah, if you look for Hacking Diversity, you'll find it. My name, Princeton University Press. Yeah. Nice. Oh, and I'll be shameless and say it's on very deep sale right now. If you were to buy it from Princeton directly, there's a discount code and it's on my Twitter. It's, I think it's HDVS. Anyway, it's 40% off through like February. Nice. Yeah. All right. It's very affordable. Can you comment on how structures like GitHub that predominantly value code submissions and other highly formalized tasks over community building and less technical contributions play into this nexus? Yes, absolutely. I mean, historically, the focus on the artifact, what gets produced, the code, even hardware, has taken on this sort of exalted symbolic meaning, and it has definitely contributed to both the denigration and the invisibility of people who weren't doing that kind of work and who might be doing community building or even, you know, things like documentation or translation, right, with these being global practices, so that the sort of authors of the code are getting this sort of, you know, priesthood status and everyone else is sort of lower. I think, again, awareness of that is starting to change, but it's definitely contributed to, again, the historical sense that there was underrepresentation of some kinds of folks. And I think there are ways you can, I mean, it sort of starts with raising awareness of this. But again, that sort of signal, the celebration of the technologist, is coming in from all these other places in the culture. And so, deprogramming that or something, as it were, is tough but not impossible. And again, I see that, actually, at least here, as part of a sort of bigger culture war. And, you know, the idea that that sort of tech is the, you know, godly apparatus and everything else is, you know, humanities and squishy, soft stuff we don't need that's going to, you know, fall away. Yeah, it doesn't have to be as big of a topic as that. But that's, again, it's all kind of in there. I don't know if that answered the question. But yes, that's there. And I think that's something where the first step in addressing it can be acknowledging it and building forms of collaboration that are not just sort of like nominally non-hierarchical, but specifically raising visibility and sort of credit-giving for other kinds of contributions. So do you feel, as someone that is actually a science and technology scholar, that this field is finally getting recognized as something that exists and is real? Because I always have this impression that people just assume this doesn't exist, or no one thinks about this except them, and there's an entire academic field about it. Do you think this is changing, or is it just the same as always? I don't know.
I mean, I think that there's a lot of visibility on the one hand. And even, you know, something in the US, and who knows what'll be happening after COVID, but, you know, public school systems were having their budgets cut after the financial crisis in 2008. And one of the things that was being proposed was, you know, moving a hacker space into a high school and sort of having that, you know, come forward and do things that institutions had maybe once been doing. I think that that, again, I'll keep coming back to the tension between what I think are some of the most interesting, the volunteeristic and politicized sort of goals for these kinds of activities, versus what the market wants them to do; those are sort of in tension. And there was a moment where I was interviewing someone, maybe in, I want to say 2012, and I was asking him questions about free software. And he was very kind, but he said something like, why are you asking me about free software? Like, that's dead, you know, open source won, sort of. And I'm not the only person who's written about that at all. But I think the sort of idea that there's something here that can't just be, you know, coopted by a market, like, that's the hard part. And I mean, I think there continues to be a lot of attention to, you know, hackathons and coding boot camps and these kinds of things. But I don't know, I guess I'm sort of too inside and outside at the same time to have a good answer. I think that there's a well-established body of, like, scholarly recognition of these activities. People look at me less weird talking about this than about a book about radio in the 21st century. But I think the sort of, you know, really sustained work to sort of disarticulate or disentangle some of this from industry, where it's getting the sort of most not just attention but the sort of most celebration, and the ways that that can kind of distort, I think, some of the other intentions, that is always going to be tough. All right, wonderful. I think we're out of time. So thank you very much. Everyone buy the book and have a good night. Bye bye. Good day. Thank you so much. Thank you. Thank you. Thank you. Thank you.
|
A firsthand look at efforts to improve diversity in software and hackerspace communities. Hacking, as a mode of technical and cultural production, is commonly celebrated for its extraordinary freedoms of creation and circulation. Yet surprisingly few women participate in it: rates of involvement by technologically skilled women are drastically lower in hacking communities than in industry and academia. Hacking Diversity investigates the activists engaged in free and open-source software to understand why, despite their efforts, they fail to achieve the diversity that their ideals support. Christina Dunbar-Hester shows that within this well-meaning volunteer world, beyond the sway of human resource departments and equal opportunity legislation, members of underrepresented groups face unique challenges. She brings together more than five years of firsthand research: attending software conferences and training events, working on message boards and listservs, and frequenting North American hackerspaces. She explores who participates in voluntaristic technology cultures, to what ends, and with what consequences. Digging deep into the fundamental assumptions underpinning STEM-oriented societies, Dunbar-Hester demonstrates that while the preferred solutions of tech enthusiasts—their “hacks” of projects and cultures—can ameliorate some of the “bugs” within their own communities, these methods come up short for issues of unequal social and economic power. Distributing “diversity” in technical production is not equal to generating justice. Hacking Diversity reframes questions of diversity advocacy to consider what interventions might appropriately broaden inclusion and participation in the hacking world and beyond.
|
10.5446/52087 (DOI)
|
Alright, fellow creatures, to be honest, I never thought that I would be introducing a talk on measuring radioactivity, like, ever in my life. But then again, considering the world's current state at large, it might be not such a bad idea to be prepared for these things, right? And gladly our next speaker, Oliver Keller, is an expert in detecting radioactive stuff. Oliver is a physicist and works at one of the most prominent nerd-happy places, CERN, since 2013, and is also doing a PhD project about novel instruments and experiments on natural radioactivity at the University of Geneva. And to add even more rC3 pizzazz, Oliver is active in the open science community and passionate about everything open source. All that sounds really cool to me, so without further ado, let's give a warm virtual welcome to Oliver and let's hear what he has to say about measuring radioactivity using low-cost silicon sensors. Oliver, the stream is yours. Thanks, that was a very nice introduction. I'm really happy to have this chance to present here. I've been a member for quite some years and this is my first CCC talk, so I'm quite excited. Yeah, you can follow me on Twitter, or I'm also on Mastodon, not so active, and most of my stuff is on GitHub. Okay, so what will we talk about in this talk? I'll give you a short overview also about radioactivity, because it's a topic with many different details, and then we will look at the detector in more detail and how it works in terms of the physics behind it and the electronics. And then finally, we'll look at things that can be measured, how the measurement actually works, what are interesting objects to check, and how this relates to silicon detectors being used at CERN. So the project is on GitHub, called DIY Particle Detector. It's an electronic design which is open hardware. There's a wiki with lots of further details for building and for troubleshooting. There is a little web browser tool I will show later briefly, and there are scripts to record and nicely plot the measurements. Those scripts are BSD licensed and written in Python. There are two variants of this detector. One is called the electron detector, the other one the alpha spectrometer. They use the same circuit board, but one is using four diodes, the other one a single photodiode. There's a small difference between them, but in general it's pretty similar; the electron detector is much easier to build and much easier to get started with. Then you have complete part lists, and even a complete kit can be bought on kitspace.org, which is an open hardware community repository, and I really recommend you to check it out. It's a great community platform and everyone can register their own GitHub project quite easily. This is a particle detector in a tin box. You can use the famous Altoids tin box or something for Swiss chocolate, for example. You can see it's a rather small board, about the size of a 9 volt block battery, and then you need in addition about 20 resistors and capacitors, these silicon diodes, plus an operational amplifier, which is this little black chip here on the right side. You can see it's all old-school large components. This is on purpose, so it's easy to solder for complete electronics beginners. This picture, by the way, is already from one user of this project who posted their own build on Twitter. Natural radioactivity. I would say it's a story of many misconceptions. Let's imagine we are this little stick figure here on the ground. Below us we have uranium and thorium.
We also have potassium-40 in the ground, and potassium-40 is pretty specific and peculiar. It actually makes all of us a little bit radioactive. Every human has about 4,000 to 5,000 radioactive decays every second because of the natural potassium, and natural potassium comes with a radioactive isotope which is just everywhere. It's in bananas, but it's also in us, because we need it for our body chemistry. It's really important. Some of those decays are even producing antimatter. How cool is that? What would we be measuring on the ground? Well, there could be some gamma rays or electrons. Those are from beta decays. Or, from the uranium, there is one radionuclide appearing in the decay chain which is called radon. Radon is actually a gas. From the ground, the radon can diffuse upwards and travel with the air and spread around. It's a bit like a vehicle for radioactivity from the ground to spread to other places. That radon would decay with alpha particles, producing electrons in beta decays and also gamma radiation further down in the decay chain. Just to recapitulate, I've said it already twice: alpha particles are actually helium nuclei. It's just two protons and two neutrons, and the electrons are missing. In a beta decay, basically one neutron is transformed into a proton and an electron. There's also an electron anti-neutrino generated, but this is super hard to measure. We're not measuring those; mostly we'll be measuring electrons from beta decays. That's why you see all these little e's indicating beta decays. If we went to the hospital here on the left side, we would probably find some X-rays from checking our bones or something like this. Or even gamma rays or alpha particles being used in treatments, or, very modern, even proton beams are sometimes generated for medical applications. Here on the right side, if you go close to a nuclear power plant, we probably measure nothing unless there's a problem. In that case, most likely we would find some gamma radiation, but only if there's a problem. And then, this is actually not the whole story; this is terrestrial radiation. But we also have radiation coming from above, showering down on us every minute, and there's actually nothing we can do against it. So protons are accelerated in the universe, basically the biggest particle accelerator nature has. Once they hit our atmosphere, they break apart into less energetic particles, and it's many of them. In the first stage, there are lots of pions generated and also neutrons. But neutrons are really hard to measure, so I'll ignore them for most of the talk. Then those pions can decay into gamma rays and then trigger a whole chain of positron and electron production, which again creates gamma rays and so forth. This goes actually the whole way down to the earth; we will have a little bit of that at sea level. The other better-known part of atmospheric radiation is actually muons. Some pions decay into muons, which are kind of heavy electrons. Also neutrinos, but neutrinos are again very hard to measure, so I'll ignore them for most of this talk. If you look here on the right side at this altitude scale, you'll see an airplane would be basically traveling where most of the atmospheric radiation is produced, and this is why, if you go on such an airplane, you have actually several times more radiation in there than here on earth. Of course, on the ground it also depends where you are; there are different amounts of uranium and thorium in the ground.
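As an aside, the figure of 4,000 to 5,000 decays per second quoted above for the potassium-40 in a human body is easy to reproduce with a back-of-the-envelope calculation. This is only a rough sketch with assumed typical values (about 140 g of potassium in an adult body; the isotopic abundance and half-life of K-40 are standard nuclear data), not something taken from the talk's own scripts:

```python
# Back-of-the-envelope check of the K-40 activity of a human body.
# Assumed inputs: ~140 g of potassium in a typical adult (assumption),
# 0.0117 % natural K-40 abundance, K-40 half-life of 1.25e9 years.
import math

N_A = 6.022e23           # Avogadro's number, atoms per mole
M_K = 39.1               # molar mass of natural potassium, g/mol
m_K = 140.0              # grams of potassium in a typical adult body (assumption)
abundance_K40 = 1.17e-4  # natural isotopic abundance of K-40
half_life_s = 1.25e9 * 3.156e7  # K-40 half-life converted to seconds

atoms_K40 = m_K / M_K * N_A * abundance_K40
decay_constant = math.log(2) / half_life_s
activity_bq = atoms_K40 * decay_constant  # decays per second (becquerel)

print(f"K-40 atoms in the body: {atoms_K40:.2e}")
print(f"Activity: {activity_bq:.0f} Bq")  # roughly 4400 decays per second
```

That lands at roughly 4,400 becquerel, right in the quoted range.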
This is just naturally there, but it depends on the geology of course. I've talked quite a bit about radiation, and I'm saying I want to use silicon to detect it. What radiation exactly? Maybe let's take a step back and think about what we know, maybe from school. We have this rainbow for visible light. This is in terms of wavelength: we have 800 to 400 nanometers, spanning from the infrared and red area over green to blue and into the violet. Below those are, let's say, bigger wavelengths: millimeter waves, meter waves and even kilometers; that would be radio waves, radio frequencies for our digital communication systems, Wi-Fi, mobile devices and so forth. But I want to look actually more towards the right, because that's what we are measuring with these detectors. It's shorter wavelength, which actually means higher energy. On the right side, we would be having ultraviolet radiation, which is kind of at the border of what we can measure. These 800 to 400 nanometers translate into 1.5 to 3 electron volts, which is a unit that particle physicists really prefer, because it is basically the energy of an electron after it has been accelerated by one volt, and it makes it much easier to work in nuclear or particle physics because all the energies are always related to an electron. This formula here is just a reminder that the wavelength can always be converted into energy, and it's inversely proportional. So wavelength increases to the left and energy to the right. And if you increase the energy more from the visible range, so let's say thousands of electron volts, then we arrive here: millions, mega electron volts, even giga electron volts. And there is now a pretty important distinction between those two areas. And that is: the right one is ionizing radiation and the left one is non-ionizing radiation. UV is a little bit in the middle of that, so some parts of the UV spectrum can be ionizing. It also depends a lot on the material that the radiation is interacting with. For the detectors I'm talking about today, and alpha, beta and gamma radiation, this is all ionizing. So some examples: lowest energy on the lower end of the spectrum would be X-rays, then electrons, gammas from radioactive radionuclides that I already talked about on the previous slide, alpha particles. And then muons from the atmosphere would be more in the giga electron volt range. And so forth. So for these higher energies, of course, you need something like the LHC to accelerate particles to really high energies, and then you can even access the tera-electron-volt regime. Okay, silicon diodes. What kind of silicon diodes? I'm using, in this project, low-cost silicon PIN diodes. One is called BPW34. It's manufactured by Vishay or Osram and costs about 50 cents. So that's what I mean by low-cost. There's another one called BPX61 from Osram. It's quite a bit more expensive. This is the lower one here on the right. It has a metal case, which is the main reason why it's more expensive. But it's quite interesting because that one we can use for the alpha detector. If you look closely, there is a glass window on top. But we can remove that. We have a sensitive area: this chip is roughly 7 square millimeters large. And it has a thickness, a sensitive thickness, of about 50 micrometers, which is not a lot. It's basically half the width of a human hair. And in total, it's a really small sensitive volume, but it's enough to measure something. And just as a reminder: how much gamma or X-ray radiation would we detect with this?
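A small aside on the unit conversion used above: the 400 to 800 nanometers of visible light and the quoted 1.5 to 3 electron volts are related by E = hc/lambda. A minimal sketch of that conversion; the hard X-ray value at the end is just an extra illustration, not a number from the talk:

```python
# Convert photon wavelength to energy: E = h*c / lambda.
# Using h*c ~= 1239.84 eV*nm, visible light (400-800 nm) maps to ~3.1-1.55 eV,
# matching the numbers quoted in the talk.
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in electron volts for a given wavelength in nanometers."""
    return HC_EV_NM / wavelength_nm

for wl in (800, 400, 0.1):  # red light, violet light, a hard X-ray
    print(f"{wl} nm -> {photon_energy_ev(wl):.3g} eV")
```

So, back to the question of how much gamma or X-ray radiation such a thin chip actually catches.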
Not a lot, because this highly energetic photon radiation just doesn't interact very well in any kind of matter. And because the sensitive area is so thin, it would basically pass right through it and most of the time not interact and not make a signal. OK, what's really important: since we don't want to measure light, we have to shield light away. So we need to block all of the light. That means the easiest way to do that is to put it in a metal case. There it's electromagnetically shielded and completely protected from light as well. Electromagnetic radiation or radio waves can also influence these detectors, because they are super sensitive. So it should be a complete Faraday cage, a complete metal structure around it. There's lots of hints and tips on how to achieve that on the wiki on the GitHub of this project. OK, let's think about one of those PIN diodes. Normally, there is one part in the silicon which is n-doped, negatively doped, and there's another part, usually, which is positively doped. And then you arrive at a simple so-called PN junction, which is a regular semiconductor diode. Now, PIN diodes add another layer, a so-called intrinsic layer, here shown with the I. And that actually is the main advantage and why this kind of detector works quite well: it has a relatively large sensitive thickness. So think about, let's say, a photon from an X-ray or a gamma decay, or an electron, hitting the sensor. By the way, this is a cross-section view from the side. But OK, that doesn't really matter. Let's say they come here from the top into the diode, and we're looking at it from the side. Then we have, actually, ionization, because this is ionizing radiation. So we get free charges in the form of electron-hole pairs. So the electrons would be here, the blue ball, and the red circle would be the holes. And depending on the kind of radiation, how this ionization takes place is quite different. But the result is: if you get a signal, it means there was ionization. Now if just this happened, we could not measure anything. Those charges would quickly recombine, and on the outside of the diode there would be hardly any signal. But what we can do is apply, actually, a voltage from the outside. So here, we just put a battery. So we have a positive voltage here, a couple of volts. And then what happens is that the electrons will be attracted by the positive voltage, and the holes will travel to the negative potential. And we end up with a little net current, or a small bunch of charges, that can be measured across the diode as a tiny, tiny current peak. The sensitive volume is actually proportional to the voltage. So the more voltage we apply, the bigger is our volume and the more we can actually measure. With certain limits, of course, because the structure of the PIN diode has a maximum thickness, just according to how it is manufactured. And these properties can be estimated with CV measurements. So here you see an example of a couple of diodes, a few of the same type, the two that I've mentioned. They are different versions: one has a transparent plastic case, one has a black plastic case. It doesn't really matter. You see basically in all the cases more or less the same curve, and as you increase the voltage, the capacitance goes down. This is great and basically shows us that those silicon chips are very similar, if not exactly the same chip. The differences are easily explained by manufacturing variances.
And then, because this actually, if you think about it, looks a bit like a parallel plate capacitor, you can actually treat it as one. And if you know the capacitance and the size, the area, you can actually calculate the distance between these two plates, or basically the width or the thickness of the diode. And then we arrive at about 50 micrometers if you apply something like 8 or 10 volts. OK. Now we have a tiny charge current, and we need to amplify it. So we have here a couple of diodes; I'm explaining now the electron detector because it's easier. We have four diodes at the input. And this is the symbol for an operational amplifier. There are two of those in the circuit. The first stage is really the special one. So if you have a particle striking the diode, we get a little charge current hitting the amplifier. And then we have here this important feedback circuit. So the output is fed back into the input, which in this case makes a negative amplification. And the amplification is defined actually by this capacitance here. The resistor has a secondary role. The small capacitance is what makes the output voltage here large: the smaller the capacitance, the larger the output, and it's inverted. Then in the next amplifier step, we just increase the voltage again to a level that is useful for using it later. But all of the signal quality that has been achieved in the first stage will stay like that. So signal to noise is defined by the first stage. The second one is just there to better adapt it to the input of the measurement device that's connected. So here this is a classical inverting amplifier where just these two resistors define the amplification factor. It's very simple; it's just a factor of 100 in this case. And so if you think again about the charge pulse: this circuit here is sensitive starting from about 1000 liberated charges in those diodes as a result of ionization, and we get something like 320 microvolts at this first output. And this is a spike that quickly decreases; basically these capacitors are charged and quickly discharged through this resistor. And this is what we see here. And then that is amplified again by a factor of 100 and we arrive at something like at least 32 millivolts, which is conveniently a voltage that is compatible with most microphone or headset inputs of computers or mobile phones. So a regular headset here has these four connectors, and the last ring actually connects the microphone. The others are ground and left and right for the earbuds. Okay. How do we record those pulses? This is an example of 1000 pulses overlaid, measured on an oscilloscope here. So it's a bit more accurate; you see the pulses a bit better. This is kind of like the persistence mode of an oscilloscope. And the size of the pulse is proportional to the energy that was absorbed. And the circuit is made in such a way that the width of the pulse is big enough that the regular sampling frequency of a sound card can actually catch it and measure it. Yeah. This is potassium salt, potassium chloride; this is called 'LoSalt' in the UK, and there are also German variants. You can also just buy it in the pharmacy or in certain organic food stores as a replacement salt. On the right side is an example from this small columbite stone, which has traces of uranium on it. And this is measured with the alpha spectrometer. And you see those pulses are quite a bit bigger: here we have 50 microseconds, and here we have more like one millisecond of pulse width. Now, for recording, there's a piece of software that runs in the browser.
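Before getting to that browser software, here is the parallel-plate thickness estimate from above as a quick numerical sketch. The numbers are assumptions for illustration: the chip area of roughly 7 square millimeters comes from the talk, the relative permittivity of silicon is taken as about 11.7, and a capacitance of about 15 picofarads at 8 to 10 volts bias is assumed as a plausible read-off from the CV curves shown:

```python
# Estimate the depletion (sensitive) thickness of the PIN photodiode by treating
# it as a parallel-plate capacitor: C = eps0 * eps_r * A / d  ->  d = eps0 * eps_r * A / C.
EPS0 = 8.854e-12     # vacuum permittivity, F/m
EPS_R_SI = 11.7      # relative permittivity of silicon
AREA = 7e-6          # sensitive chip area, m^2 (about 7 mm^2, from the talk)
C_MEASURED = 15e-12  # assumed capacitance read off the CV curve at ~8-10 V bias, F

thickness_m = EPS0 * EPS_R_SI * AREA / C_MEASURED
print(f"Estimated sensitive thickness: {thickness_m * 1e6:.0f} um")  # about 48 um
```

That gives roughly 48 micrometers, in line with the 50 micrometers quoted above. Now, on to the browser recording tool.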
This is something I wrote using the Web Audio API, and it works on most browsers. Best is Chrome; on iOS, of course, you have to use Safari. Once you plug in the detector, it records the pulses from the input at 48 or 44.1 kilohertz. Here's an example with the alpha spectrometer circuit: you get these nice large pulses. In the case of the electron detector, the pulse is much shorter, and you see the noise much more amplified. This red line is kind of the minimum level that a pulse needs to exceed in order to trigger, like the trigger level of an oscilloscope. You can set that with those buttons in the browser. You need to find a good value. Of course, if you change your input volume settings, for example, this will change. So you have to remember with which settings it works well. And this pulse, for example, is even oscillating here. So for the electron detector, it's basically nice for counting particles. For the alpha detector, it's really the case that the size of the pulse can be nicely evaluated, and we can actually do energy measurements. And these energy measurements can also be called spectrometry. So if you look closer at these many pulses that have been recorded, we find that there is really much more intensity at certain pulse sizes, which means many more pulses of the same size were detected. We can relate them to radium and radon if we use a reference alpha source. And I have done this: I've measured the whole circuit with reference sources and provide the calibration on GitHub. And you can reuse the GitHub calibration if you use exactly the same sound settings that I have used for recording. And for example, these two very weak lines here are from two very distinctive polonium isotopes from the radium decay chain. The top part here, which is really dark, corresponds basically, in the histogram view, to this side on the left, which is electrons. Most of these electrons will actually enter the chip and leave it again without being completely absorbed by it. But alpha particles interact so strongly that they are completely absorbed within the 50 micrometers of sensitive volume of these diodes. And here, it's a bit difficult to see the peaks, but at the far end of the high energy spectrum you see two really clear peaks. And those can only stem from polonium, actually. We know the sample contains uranium, and then this can only be polonium, which is the isotope that produces the most energetic natural alpha particles. As I said, if you use the same settings as me, you can reuse the calibration. So the best is if you actually use the same sound card, because there, if you put it to 100% input sensitivity, you will get exactly the same result as in my calibration. And this sound card is pretty cheap, but also pretty good. It costs just $2, has a pretty good range and resolves quite well at 16 bits. You could do that with an Arduino as well, but it's actually a bit hard to do a really well-defined 16-bit measurement; even at 48 kilohertz, it's not so easy. And this keeps it cheap and kind of straightforward. And you can have just some Python scripts on the computer to read it out. And this is as a reminder: in order to measure alpha particles, we have to remove the glass here on top of the diode. I'm doing it just by cutting into the metal frame, and then the glass breaks away easily. That's not a problem. There's more on that on the wiki. Now we can kind of compare alpha and gamma spectrometry. Here's an example. This is a uranium-glazed ceramic.
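As another aside, the pulse counting and pulse-height analysis described above are easy to prototype offline once a recording exists. The following is only a minimal sketch of the idea (threshold trigger plus pulse-height histogram), not the project's own analysis scripts from GitHub; the file name is a placeholder for a mono recording of the detector output, and the threshold and dead-time values are arbitrary and need tuning:

```python
# Minimal offline pulse analysis of a recorded detector signal:
# threshold trigger, pulse counting, and a crude pulse-height histogram.
import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read("recording.wav")   # placeholder: e.g. 48000 Hz, 16-bit mono
samples = np.abs(samples.astype(np.float64))    # take magnitude; pulse polarity depends on wiring

THRESHOLD = 2000     # trigger level in ADC counts, like the red line in the browser tool (tune this)
DEAD_SAMPLES = 100   # skip this many samples after a trigger so one pulse is counted once

pulse_heights = []
i = 0
while i < len(samples):
    if samples[i] > THRESHOLD:
        # take the maximum within the dead-time window as the pulse height
        window = samples[i:i + DEAD_SAMPLES]
        pulse_heights.append(window.max())
        i += DEAD_SAMPLES
    else:
        i += 1

print(f"{len(pulse_heights)} pulses in {len(samples) / rate:.1f} s of recording")
counts, bin_edges = np.histogram(pulse_heights, bins=100)  # pulse-height spectrum
```

The pulse height stands in for deposited energy, which is what the calibrated spectra in the talk show. Back to the uranium-glazed ceramic.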
The red part is uranium oxide that was used to create this nice red color in the 50s, 60s, 70s. And in the spectrum, we have two very distinctive peaks, and nothing in the high energy regime. Only this low energy range has a signal. And this corresponds actually to uranium-238 and 234, because they actually used purified uranium. So all of the high energy progeny or daughters of uranium, they're not present here, because that was purified uranium. And this measurement doesn't even need vacuum. I put it just like this in a regular box. Of course, if you had vacuum, you would improve these peaks by a lot. So this widening here to the left, basically that this peak almost runs into the other one, that is due to the air at regular pressure, which already interacts a lot with the particles and absorbs a lot of energy before the particles hit the sensor. So in terms of pros and cons, I would say the small sensor is quite interesting here for alpha spectrometry, because it's enough to have a small sensor, so it's cheap, and you can measure very precisely on specific spots. On the other hand, of course, the conditions of the object influence the measurement a lot. So for example, if there's some additional paint on top, the alpha particles might not make it through. But in most of these kinds of samples, alpha radiation actually makes it through the top transparent paint layer. In terms of gamma spectrometry, you would usually have these huge and really expensive sensors. And then the advantage, of course, is that you can measure regardless of your object. You don't really need to prepare the object a lot. You might want some lead shielding around it. That's again expensive, but at least you can do it; you can improve the measurement like that. And it's basically costly because the sensor is quite expensive, whereas in this setup, for 15 to 30 euros, you have everything you need, while there you're looking at several hundred to several thousand euros. OK, now measuring. I have to be a bit quicker now, I notice. I talked already about the potassium salt. There's also fertilizer based on potassium, baking powder. Uranium glass is quite nice; you can find that easily on flea markets. Often also old radium watches. Here's another example, a uranium-glazed kitchen tile in this case. This was actually in a kitchen. So the chances are that you at home find actually some of those things in the cupboards of your parents or your grandparents. This is an example of thoriated glass, which has this distinctive brownish color, which actually comes from the radiation. And a nice little experiment that I can really recommend you to look up is called the radioactive balloon experiment. Here you charge a balloon electrostatically and then it will catch the radon decay products from the air. And it's really great: you basically get a radioactive balloon after it was just left for 15 minutes in a normal room. Now, as a last kind of context for all of this, to end this presentation, I want to quickly remind you how important silicon detectors are for places like CERN. This is a cross-section of the ATLAS detector. And here you have basically the area where the collisions happen in the ATLAS detector. So this is just a fraction of a meter, and you have today 50 to 100 head-on collisions of two protons happening every 25 nanoseconds. Not right now, but soon; the machines will be started again next year. And you can also, by the way, build a similar project, which has a slightly different name.
It's called Build Your Own Particle Detector. This is ATLAS made out of Lego, and on this website you find a nice plan, or ideas, for how to build it from Lego, to better visualize the size and interact more with particle physics. In the case of the CMS detector, this is the second biggest detector at CERN. Here you see nicely that in the middle, at the core of the collision, you have many, many pixel and microstrip detectors which are made of silicon. And these are actually 16 square meters of silicon pixel detectors and 200 square meters of microstrip detectors, also made of silicon. So without that silicon technology, modern detectors basically wouldn't work, because this fine segmentation is really required to distinguish all of these newly created particles that result from the collision. So to summarize: the project website is on GitHub, there's really this big wiki which you should have a look at, and there's a gallery of pictures from users. There's some simulation software that I used as well. I didn't develop it, but I wrote up how to use it, because the spectra can sometimes be difficult to interpret. And there's a new discussions forum on GitHub; I would really appreciate it if some of you had some discussions there. And most of the things I showed today are actually written up in detail in a scientific article, which is open access, of course. And I want to highlight two related citizen science projects. On the one hand, there's Safecast, which is about a large, nice, sensitive Geiger-Müller-based detector that has a GPS, and people upload their measurements there. This is quite nice. And OpenGeiger is another website, mostly German content, but some of it is also in English. That one also uses diode detectors and shows many nice places. He calls it Geiger caching: places around the world where you can measure something, some old mines, things like this. And if you want updates, I would propose to follow me on Twitter. I'm right now writing up two other articles with more ideas for measurements and some of the things you have seen today. Thanks a lot. Well, thanks a lot, Oliver. I hope everyone can hear me now again. Yes, thanks for mentioning the citizen science projects as well. It's really cool, I think. We do have a few minutes for the Q&A and also a lot of questions coming up in our IRC instance. So the first question was: can you talk a bit more about the SNR of the system? Did you pick particular resistor values and op-amps to optimize for noise? Was it a problem? Yes, so noise is the big issue here. Maybe the amplifier; it's one I found that is around four euros. I'm trying to find the slide. Yeah, we have to look it up on GitHub, the amplifier type. But this is the most important one. And then actually the resistors here, the resistors in the first stage, sorry, the capacitors, are the second important thing. They should be really small, but I'm limited here to hand-solderable capacitors. Basically I chose the smallest one that was still available, let's say, and what is available is basically a 10 picofarad capacitor. You put two of them one after another, in series, and you halve the capacitance, so you get five. And this, by the way, is also a 10 picofarad capacitor. So I kind of try to keep the same component values as much as possible. And here at the output, for example, this is to adjust the output signal for a microphone input. In the alpha spectrometer, I changed the values quite a bit to make a large pulse.
But yeah, it's basically playing with the time constants of this network and this network. All right, I hope that answers the question for that person. Yeah, and people can get in contact with you right after the talk, maybe, as well. So there's another question: have you considered using an I2S codec with a Raspberry Pi? A radiation measurement HAT should be almost no setup and completely repeatable. The last part is more of a comment. I don't know that component. But yeah, as I said, using a sound card is actually quite straightforward. But of course, there are many ways to get fancy. And this is really, this should actually attract teachers and high school students as well, this project. So this is one of the main reasons why certain technologies have been chosen to be rather simple than, let's say, fancy. Yeah, so it should work for a lot of people, I guess. And another question was: how consistent are the sound cards? Did you find the same calibration worked the same with several of them? So if you want to use my calibration, you should really buy this $2 card from eBay, the 108. I haven't seen a big difference from card to card with this one. But of course, from one computer to a mobile phone there is a huge difference in input sensitivity and noise. And it's very difficult to reuse a calibration in this case. But you still can count particles. And the electron detector anyway mostly, it actually just makes sense for counting, because the electrons are not completely absorbed. So you get energy information, but it's not the complete energy of the electron. So you could use it for X-rays, but then you need an X-ray machine. So yeah. Who doesn't need an X-ray machine, right? So maybe one question I have, because I'm not very familiar with the tech stuff: what actually can be done with it, right, in the field? So you mentioned some working with teachers with these detectors. What have you done with that in the meanwhile, so to say? So what's quite nice is you can characterize stones with it, for example. Since you can connect it to a smartphone, it's completely mobile. And it goes quite well in combination with a Geiger counter in this case. So with a Geiger counter, you just look around for where some hotspot is. And then you can go closer with the alpha spectrometer and actually be sure that there are some traces of thorium or uranium on the stone, for example. Or with this type of ceramic, these old ceramics: you can go to the flea market and just look for these very bright red ceramics and measure them on the spot and then decide which one to buy. Okay. So that's what I'm going to do with that. Right. Thanks for highlighting a bit the practical side. I think it's really cool to educate people about some scientific things as well. Another question from the IRC: didn't you have problems with common mode rejection while connecting your device to the sound card? Or have you tried to do the AD conversion, the digitization, on the board itself already, and transfer via S/PDIF? Of course. Yeah. So of course, I mean, this is the thing to do if you want to make a super stable, rock solid measurement device. But it is really expensive. I mean, we are looking here at 15 euros, and yeah, that's the reason to have this separate sound card: just to enable doing this with very low resources. But I'm looking for these pulses here. So this common mode rejection is a problem, and also this kind of 'over-swinger', I'm missing the English term now.
Yeah, these kinds of oscillations here: if you designed a specific analog-to-digital conversion, of course, you would take all of that into account and it wouldn't happen. But here this happens because the circuit can never be exactly optimal for a certain sound card input. There will always be some mismatch of impedances. All right, so maybe these special technical issues and details, this could be something you could discuss with Oliver on Twitter, or maybe, Oliver, you want to join the IRC room for your talk as well. People were very engaged during your talk. So this is always a good sign. In that sense, I'd say thank you for being part of this first remote Chaos Experience. Thanks again for your talk and for taking the time, and yeah, all the best for you, and enjoy the rest of the conference, I'd say, of the Congress. And a warm round of virtual applause and a big thank you to you, Oliver. Thanks. I will try the chat room right now. Thank you.
|
This talk gives a brief introduction to natural radioactivity and shows how a detector can be built from simple photodiodes. The electronics are easy to solder for beginners and provide a hands-on opportunity for entering the exciting world of modern particle physics with a practical DIY and citizen science approach. Natural radioactivity surrounds us everywhere and is composed of different kinds of ionising radiation or subatomic particles. This talk presents a DIY particle detector based on low-cost silicon photodiodes and its relations to modern detectors like the ones developed at CERN. The project is open hardware, easy to solder for beginners, and intended for citizen science & educational purposes. In contrast to simpler Geiger-Müller counters, this detector measures the energy of impinging particles and can distinguish in particular between alpha particles and electrons from beta decays. A cheap USB sound card or smartphone headset connection can be employed for recording the signals. Corresponding data analysis scripts written in python and related tools from particle physics research will be briefly discussed. An introduction about terrestrial and cosmic sources of radioactivity will be given together with details on the interaction of ionising radiation in silicon. I will conclude with a discussion of interesting every-day objects that are worthwhile targets for investigation and show example measurements of characteristic alpha particle energy spectra - without using expensive vacuum equipment.
|
10.5446/52089 (DOI)
|
So about the next speaker: he's a security researcher focused on embedded systems, secure communications and mobile security. He was nominated by Forbes for the 30 under 30 in technology and has also won an OWASP AppSec CTF. He has also found and responsibly disclosed multiple vulnerabilities, and, especially for you Nintendo aficionados, I want you to watch out for the next intro, which is really amazing and which you will all love. Thank you very much. What a trip. Welcome to my talk on hacking the new Nintendo Game and Watch Super Mario Brothers. My name is Thomas Roth and I'm a security researcher and trainer from Germany, and you can find me on Twitter at GhidraNinja and also on YouTube as stacksmashing. Now this year marks the 35th anniversary of our favorite plumber Super Mario, and to celebrate that, Nintendo launched a new game console called the Nintendo Game and Watch Super Mario Brothers. The console is lightweight and looks pretty nice, and it comes pre-installed with three games and also this nice animated clock. The three games are Super Mario Brothers, the original NES game, Super Mario Brothers 2, The Lost Levels, and also a reinterpretation of an old Game and Watch game called Ball. Now as you probably know, this is not the first retro console that Nintendo released. In 2016 they released the NES Classic and in 2017 they released the SNES Classic. These devices were super popular in the homebrew community because they make it really easy to add additional ROMs, they make it really easy to modify the firmware and so on, and you can basically just plug them into your computer, install some simple software and you can do whatever you want with them. The reason for that is that they run Linux and have a pretty powerful ARM processor on the inside, so it's really a nice device to play with, and so when Nintendo announced this new console a lot of people were hoping for a similar experience of having a nice mobile homebrew device. Now if you were to make a Venn diagram of some of my biggest interests you would have reverse engineering, hardware hacking and retro computing, and this new Game and Watch fits right in the middle of that. So when it was announced on the 3rd of September I knew that I needed to have one of those, and given how hard the NES and the SNES Classic were to buy for a while, I pre-ordered it on like four or five different sites, a couple of which got cancelled, but I was pretty excited because I had three pre-orders and it was supposed to ship on the 13th of November. So I was really looking forward to this, and I was having breakfast on the 12th of November when suddenly the doorbell rang and DHL delivered me the new Game and Watch one day before the official release. Now at that point in time there was no technical information available about the device whatsoever. Like, if you searched for Game and Watch on Twitter you would only find the announcements or maybe a picture of the box from someone who also received it early, but there were no teardowns, no pictures of the insides and, most importantly, nobody had hacked it yet, and this gave me as a hardware hacker the kind of unique opportunity to potentially be the first one to hack a new Nintendo console, and so I just literally dropped everything else I was doing and started investigating the device. Now I should say that normally I stay pretty far away from any new console hacking, mainly because of the piracy issues.
I don't want to enable piracy, I don't want to deal with piracy and I don't want to build tools that enable other people to pirate stuff, basically. But given that on this device you cannot buy any more games, and that all the games that are on there were basically already released over 30 years ago, I was not really worried about piracy and felt pretty comfortable in sharing all the results of the investigation and also, basically, the issues we found that allowed us to customize the device and so on. And in this talk I want to walk you through how we managed to hack the device and how you can do it at home using relatively cheap hardware, and yeah, hope you enjoy it. Now let's start by looking at the device itself. The device is pretty lightweight and comes with a nicely sized case, so for me it sits really well in my hand, and it has a nice 320x240 LCD display, a D-pad, A and B buttons and also three buttons to switch between the different game modes. On the right side we also have the power button and the USB-C port. Now before you get excited about the USB port, I can already tell you that unfortunately Nintendo decided to not connect the data lines of the USB port, and so you can really only use it for charging. Also, because we are talking about Nintendo here, they use their proprietary tri-point screws on the device, and so to open it up you need one of those special tri-point bits. Luckily nowadays most bit sets should have them, but it still would suck if you order your unit and then you can't open it up because you're missing a screwdriver. After opening it up, the first thing you probably notice is the battery, and if you've ever opened up a Nintendo Switch Joy-Con before you might recognize it, because it's the exact same one that's used in the Joy-Cons. This is very cool because if down the line, let's say in two or three years, the battery of your Game & Watch dies, you can just go and buy a Joy-Con battery, which you can get really cheaply almost anywhere. Next to the battery, on the right side, we have a small speaker, which is not very good, and underneath we have the main PCB with the processor, the storage and so on and so forth. Let's take a look at those. Now the main processor of the device is an STM32H7B0. This is a Cortex-M7 from ST Microelectronics with 1.3 megabytes of RAM and 128 kilobytes of flash. It runs at 280 MHz and is a pretty beefy microcontroller, but it's much less powerful than the processor in the NES or SNES Classic. This processor is really just a microcontroller, and so it can't run Linux, it can't run, let's say, super complex software; instead it will be programmed in some bare metal way, and so we will have a bare metal firmware on the device. To the right of it you can also find a 1 megabyte SPI flash, and so overall we have roughly 1.1 megabytes of storage on the device. Now most microcontrollers, or basically all microcontrollers, have a debugging port, and if we take a look at the PCB you can see that there are five unpopulated contacts here. If you see a couple of contacts that are not populated close to your CPU, it's very likely that it's the debugging port. Luckily the datasheet for the STM32 is openly available, and so we can check the pinouts in the datasheet and then use a multimeter to see whether these pins are actually the debugging interface, and it turns out they actually are, and so we can find the SWD debugging interface as well as VCC and ground exposed on these pins.
Now this means that we can use a debugger so for example a J-Link or an ST-Link or whatever to connect to the device and because the the contacts are really easy to access you don't even have to solder like you can just hook up a couple of test pins and they will allow you to to easily hook up your debugger. Now the problem is on most devices the debugging interface will be locked during manufacturing this is done to prevent people like us to basically do whatever with the device and to prevent us from being able to dump the firmware potentially reflash it and so on and so I was very curious to see whether we can actually connect to the debugging port and when starting up J-Link and trying to connect we can see it can actually successfully connect but when you take a closer look there's also a message that the device is active-reprotected. This is because the chip the STM32 chip features something called RDP protection level or read out protection level. This is basically the the security setting for the debugging interface and it has three levels. Level zero means no protection is active. Level one means that the flash memory is protected and so we can't dump the internal flash of the device however we can dump the RAM contents and we can also execute code from RAM and then there's also level two which means that all debugging features are disabled. Now just because a chip is in level two doesn't mean that you have to give up for example in our talk wallet.fail a couple of years ago we showed how to use fault injection to bypass the level two protection and downgrade a chip to level one however on the game and watch we are lucky and the interface is not fully disabled instead it's in level one and so we can still dump the RAM which is a pretty good entry point even though we can't dump the firmware yet. Now having dumped the RAM of the device I was pretty curious to see what's inside of it and one of my suspicions was that potentially the emulator that's hopefully running on the device loads the original Super Mario Brothers ROM into RAM and so I was wondering whether maybe we can find the ROM that the device uses in the RAM dump and so I opened up the RAM dump in a in a hex editor and I also opened up the original Super Mario Brothers ROM in a second window in a hex editor and tried to find different parts of the original ROM in the RAM dump and it turns out that yes the NES ROM is loaded into RAM and it's always at the same address and so it's probably like during boot up it gets copied into RAM or something along those lines and so this is pretty cool to know because it tells us a couple of things. First off we know now that the debug port is enabled and working but that it's unfortunately at RDP level one and so we can only dump the the RAM and we also know that the NES ROM is loaded into RAM and this means that the device runs a real NES emulator and so if we get okay we can for example just replace the ROM that is used by the by the device and play for example our own NES game. Next was time to dump the flash chip of the device. For this I'm using a device called Mini Pro and I'm using one of these really useful SOIC8 clips and so these ones you can simply clip onto the flash chip and then dump it. Now one warning though the flash chip on the device is running at 1.8 volts and so you want to make sure that your programmer also supports 1.8 volt operation. 
If you accidentally try to read it out at 3.3 volts you will break your flash trust me because it happened to me on one of my units. Now with this flash dump from the device we can start to analyze it and what I always like to do first is take a look at the entropy or the randomness of the flash dump and so using binwalk with the dash uppercase e-option we get a nice entropy graph and in this case you can see we have a very high entropy over almost the whole flash contents and this mostly indicates that the flash contents are encrypted. It could also mean compression but if it's compressed you would often see more like dips in the entropy and in this case it's one very high entropy stream. We also notice that there are no repetitions whatsoever which also tells us that it's probably not like a simple XOR based encryption or so and instead something like AES or something similar but just because the flash is encrypted doesn't mean we have to give up. On the contrary I think now it starts to get interesting because you actually have a challenge and it's not just plug and place so to say. One of the biggest questions I had is is the flash actually verified like does the device boot even though the flash has been modified because if it does this would open up a lot of attack vectors basically as you will see and so to verify this I basically try to put zeros in random places in the flash image and so I put some at address zero some at hex 2000 and so on and then I checked whether the device would still boot up and with the most flash modifications it would still boot just fine. This tells us that even though the flash contents are encrypted they are not validated they are not checked something or anything and so the device and so we can potentially trick the device into accepting a modified flash image and this is really important to know as you will see in a couple of minutes. 
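To make the two experiments just described concrete, here is a minimal Python sketch under stated assumptions: flash_dump.bin, ramdump_before.bin and ramdump_after.bin are placeholder names for your own dumps, the patch offsets are arbitrary examples rather than the real layout, and writing the patched image back still needs an external programmer or debugger. The first helper pokes zeros into the encrypted image so you can test whether the device still boots; the second compares two RAM dumps to see which bytes of the loaded data changed.

```python
# Hedged sketch of the experiments described above. Keep an untouched copy of
# the original flash dump before flashing anything back!

def patch_bytes(in_path: str, out_path: str, patches: dict) -> None:
    """Overwrite a few bytes of the (encrypted) flash image with given values."""
    data = bytearray(open(in_path, "rb").read())
    for offset, value in patches.items():
        if offset + len(value) > len(data):
            raise ValueError(f"patch at {offset:#x} runs past end of image")
        data[offset:offset + len(value)] = value
    open(out_path, "wb").write(bytes(data))

def diff_dumps(path_a: str, path_b: str) -> None:
    """Print how many bytes differ between two RAM dumps (e.g. before/after)."""
    a = open(path_a, "rb").read()
    b = open(path_b, "rb").read()
    changed = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    print(f"{len(changed)} differing bytes")
    if changed:
        print("first/last difference at", hex(changed[0]), hex(changed[-1]))

if __name__ == "__main__":
    # Example offsets only; they do not correspond to any known flash layout.
    patch_bytes("flash_dump.bin", "flash_patched.bin",
                {0x0000: b"\x00" * 4, 0x2000: b"\x00" * 4})
    # After reflashing and re-dumping RAM with your debugger:
    # diff_dumps("ramdump_before.bin", "ramdump_after.bin")
```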
My next suspicion was that maybe the NES ROM we see in RAM is actually loaded from the external flash. To find out whether that's the case, I again took the flash and inserted zeros at multiple positions in the flash image, flashed that over, booted up the game, dumped the RAM and then compared the NES ROM that I was now dumping from RAM with the one that I dumped initially and checked whether they are equal, because my suspicion was that maybe I can overwrite a couple of bytes in the encrypted flash and thereby modify the NES ROM. And after doing this for, I don't know, half an hour, I got lucky: I modified four bytes in the flash image, and four bytes in the ROM that was loaded into RAM changed. This tells us quite a bit. It means that the ROM is loaded from flash into RAM and that the flash contents are not validated. And what's also important is that we changed four bytes in the flash and now four bytes in the decrypted image changed. This is very important to know, because if we take a look at what we would expect to happen when we change the flash contents, there are multiple outcomes. For example, here we have the SPI flash contents on the left and the RAM contents on the right, so the RAM contents are basically the decrypted version of the SPI flash contents. Now let's say we change four bytes in the encrypted flash image to zeros: how would we expect the RAM contents to change? If we would see that now 16 bytes in the RAM are changing, this means that we are potentially looking at an encryption algorithm such as AES in electronic codebook (ECB) mode, because it's a block-based encryption, and so if we change four bytes in the input data, a full block, in this case 16 bytes, in the output data would change. The next possibility is that we change four bytes in the SPI flash and all data afterwards will be changed, and in this case we would be looking at some kind of chaining cipher such as AES in CBC mode. However, if we change four bytes in the SPI flash and only four bytes in the RAM change, we are looking at something such as AES in counter (CTR) mode. And to understand this, let's take a better look at how AES in CTR works. AES-CTR works by taking your cleartext and XORing it with an AES keystream that is generated from a key, a nonce and a counter. Now, the AES stream that will be used to XOR your cleartext will always be the same if key and nonce are the same. This is why it's super important that if you use AES-CTR you always select a unique nonce for each encryption: if you encrypt similar data with the same nonce twice, large parts of the resulting ciphertext will be the same. And so the cleartext gets XORed with the AES-CTR stream and then we get our ciphertext. Now, if we know the cleartext, as we do, because the cleartext is the ROM that is loaded into RAM, and we know the ciphertext, which we do, because it's the contents of the encrypted flash we just dumped, we can basically reverse the operation, and as a result we get the AES-CTR stream that was used to encrypt the flash. And this means that we can take, for example, a custom ROM, XOR it with the AES-CTR stream we just calculated, and then generate our own encrypted flash image, for example with a modified ROM. So I wrote a couple of Python scripts to try this, and after a while I was running hacked Super Mario Brothers instead of Super Mario Brothers. So woohoo, we hacked the Nintendo Game & Watch one day before the official release, and we can install modified Super Mario Brothers ROMs.
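The actual scripts are in the GitHub repository mentioned next; what follows is only an illustrative Python sketch of the XOR idea, not the released tooling. It assumes you already have the encrypted flash dump, a plaintext region recovered from RAM (for example the NES ROM) and the offset where that region lives in flash; all file names and the offset are made up for the example.

```python
# Hedged sketch of the keystream trick: known plaintext XOR ciphertext recovers
# the CTR keystream for that region, and XORing a custom ROM with the same
# keystream yields a ciphertext the console will happily decrypt.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def reencrypt_region(flash_path, plain_path, new_plain_path, out_path, offset):
    flash = bytearray(open(flash_path, "rb").read())
    plain = open(plain_path, "rb").read()          # e.g. ROM dumped from RAM
    new_plain = open(new_plain_path, "rb").read()  # e.g. your homebrew ROM
    if len(new_plain) > len(plain):
        raise ValueError("replacement must not be larger than the original")

    cipher = bytes(flash[offset:offset + len(plain)])
    keystream = xor_bytes(cipher, plain)           # keystream for this region
    flash[offset:offset + len(new_plain)] = xor_bytes(new_plain, keystream)
    open(out_path, "wb").write(bytes(flash))

if __name__ == "__main__":
    # File names and offset are placeholders, not the real flash layout.
    reencrypt_region("flash_dump.bin", "rom_from_ram.bin",
                     "custom_rom.nes", "flash_custom.bin", offset=0x0)
```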
Now, you can find the scripts that are used for this on my GitHub, in a repository called Game and Watch Hacking. And I was super excited, because it meant that I succeeded and that I basically hacked a Nintendo console one day before the official release. Unfortunately, I finished the level but Toad wasn't as excited: he told me that unfortunately our firmware is still in another castle. And so on the Monday after the launch of the device I teamed up with Konrad Beckmann, a hardware hacker from Sweden who I met at the previous Congress, and we started chatting and throwing ideas back and forth, and eventually we noticed that the device has a special RAM area called ITCM RAM, which is a tightly coupled instruction RAM that is normally used for very high performance routines such as interrupt handlers, so it's a very fast RAM area. And we realized that we never actually looked at the contents of that ITCM RAM, so we dumped it from the device using the debugging port, and it turns out that this ITCM RAM contains ARM code. So again the question is: where does this ARM code come from? Does it maybe, just like the NES ROM, come from the external flash? So basically I repeated the whole thing that we also did with the NES ROM: I just put zeros at the very beginning of the encrypted flash, rebooted the device and dumped the ITCM RAM, and I got super lucky, on the first try the ITCM contents already changed. And because the ITCM contains code, not just data, so earlier we only had the NES ROM which is just data, but this time the RAM contains code, this means that with the same XOR trick we used before we could inject custom ITCM code into the external flash, which would then be loaded into RAM when the device boots. And because it's a persistent method, we can then reboot the device and let it run without the debugger connected, and whatever code we load into this ITCM area will be able to actually read the flash. So we could potentially write some code that gets somehow called by the firmware and then copies the internal flash into RAM, from where we then can retrieve it using the debugger. Now the problem is, let's say we have a custom payload somehow in this ITCM area: we don't know which address of this ITCM code gets executed, so we don't know whether the firmware will jump to address zero or address 200 or whatever. But there's a really simple trick to still build a successful payload, and it's called a NOP slide. A NOP, or no-operation, is an instruction that simply does nothing, and if we fill most of the ITCM RAM with NOPs and put our payload at the very end, we build something that is basically a NOP slide. So when the CPU, indicated by Mario here, jumps to a random address in that whole NOP slide, it will start executing NOPs, NOPs, NOPs, NOPs and slide down into our payload and execute it. So even if Mario jumps right in the middle of the NOP slide, he will always slide down the slide and end up in our payload. And Konrad wrote this really, really simple payload, which is only like 10 instructions, which basically just copies the internal flash into RAM, from where we can then retrieve it using the debugger. So woohoo, super simple exploit, we have a full firmware backup and a full flash backup, and now we can really fiddle with everything on the device. And we've actually released tools to do this yourself, so if you want to back up your Nintendo Game and Watch you can just go onto my GitHub and download the Game and Watch backup repository.
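For illustration only, here is a small Python sketch of building such a NOP slide image: it pads a payload to the size of the target region with 16-bit Thumb NOPs (0xBF00, stored little-endian) so that a jump anywhere into the region ends up in the payload. The region size, payload.bin and the final XOR-with-keystream step are assumptions for the example, not the exact procedure used by the released tools.

```python
# Hedged sketch: pad a payload with Thumb NOPs so any entry point in the
# region "slides" into the payload at the end.

THUMB_NOP = b"\x00\xbf"  # 0xBF00 Thumb NOP, little-endian byte order in memory

def build_nop_slide(payload: bytes, region_size: int) -> bytes:
    pad = region_size - len(payload)
    if pad < 0:
        raise ValueError("payload larger than target region")
    if pad % 2:
        raise ValueError("padding must be a multiple of the 2-byte NOP")
    return THUMB_NOP * (pad // 2) + payload

if __name__ == "__main__":
    # payload.bin and the 64 KiB region size are placeholders for the example.
    payload = open("payload.bin", "rb").read()
    blob = build_nop_slide(payload, region_size=64 * 1024)
    open("itcm_plain.bin", "wb").write(blob)
    # Next step (not shown): XOR itcm_plain.bin with the keystream recovered
    # for that flash region, as in the AES-CTR sketch above, and flash it back.
```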
The repository contains a lot of information on how to back it up, it does checksum checks and so on to ensure that you don't accidentally brick your device, and you can easily back up the original firmware, install homebrew and then always go back to the original software. We also have an awesome support community on Discord, so if you ever need help, I think you will find success there, and so far we haven't had a single bricked Game and Watch, so it looks to be pretty stable. And so I was pretty excited, because the quest was over. Or is it? If you ever claim on the internet that you successfully hacked an embedded device, there will be exactly one response and one response only: but does it run Doom? Literally my Twitter DMs, my YouTube comments and even my friends were spamming me with the challenge to get Doom running on the device. But to get Doom running we first needed to bring up all the hardware, so we basically needed to create a way to develop and load homebrew onto the device. Now, luckily for us, most of the components on the board are very well documented and there are no NDA components: for example, the processor has an open reference manual and an open source library to use it, the flash is a well-known flash chip, and so on and so forth. There are only a couple of very proprietary or custom components, so for example the LCD on the device is proprietary and we had to basically sniff the SPI bus that goes to the display to decode the initialization of the display and so on. And after a while we had the full hardware running: we had LCD support, audio support, sleep support, buttons and even flashing tools that allow you to simply use an SWD debugger to dump and rewrite the external flash, and you can find all of these things on our GitHub. Now, if you want to mod your own Game and Watch, all you need is a simple debugging adapter such as a cheap three-dollar ST-Link, a J-Link or a real ST-Link device, and then you can get started. We've also published a base project for anyone who wants to get started with building their own games for the Game and Watch, and it's really simple: it's just a frame buffer you can draw to, input is really simple, and so on, and as said we have a really helpful community. Now, with all the hardware up and running, I could finally start porting Doom. I started by looking around for other ports of Doom to STM32 and I found this project by floppy called stm32doom. The issue is, stm32doom is designed for a board with eight megabytes of RAM, and the data files for Doom were stored on an external USB drive. On our platform we only have 1.3 megabytes of RAM, 128 kilobytes of flash and only one megabyte of external flash, and we have to fit all the level information, all the code and so on in there. Now, the Doom level information is stored in so-called WAD, Where's All the Data, files, and these data files contain the sprites, the textures, the levels and so on. The WAD for Doom 1 is roughly four megabytes in size and the WAD for Doom 2 is 14 megabytes in size, but we only have 1.1 megabyte of storage, plus we have to fit all the code in there, so obviously we needed to find a very, very small Doom WAD. And as it turns out, there's a thing called miniwad, which is a minimal Doom IWAD, which is basically all the bells and whistles stripped from the WAD file and everything replaced by simple outlines and so on, and while it's not pretty, I was pretty confident that I could get it working as it's only 250 kilobytes of
storage down from 14 megabytes now in addition to that a lot of stuff on the chocolate doom port itself had to be changed and so for example i had to rip out all the file handling and add a custom file handler i had to add support for the gamut watch lcd button import support and i also had to get rid of a lot of things to get it running somewhat smoothly and so for example the infamous wipe effect had to go and i also had to remove sound support now the next issue was that once it was compiling it simply would not fit into ram and crash all the time now on the device we have roughly 1.3 megabytes of ram in different ram areas and for example just the frame buffer that we obviously need takes up 154 kilobytes of that then we have 160 kilobytes of initialized data 320 kilobytes of uninitialized data and a ton of dynamic allocations that are done by chocolate doom and these dynamic allocations were a huge issue because the chocolate doom source code does a lot of small allocations which are only used for temporary data and so they get freed again and so on and so your dynamic memory gets very very fragmented very quickly and so eventually there's just not enough space to for example initialize the level and so to fix this i took the chocolate doom code and i changed a lot of the dynamic allocations to static allocations which also had the big advantage of making the error messages by the compiler much more meaningful because it would actually tell you hey this and this data does not fit into ram and eventually after a lot of trial and error and copying as many of the original assets as possible into the minimal iWatt i got it i had doom running on the nintendo game and watch super mario brothers and i hopefully calmed the internet gods that forced me to do it now unfortunately the usb port is physically not connected to the processor and so it will not be possible to hack the device simply by plugging it into your computer however it's relatively simple to do this using one of these usb debuggers now the most requested type of home brew software was obviously emulators and i'm proud to say that by now we actually have kind of a large collection of emulators running on the nintendo game and watch and it all started with konrad backman discovering the retro go project which is an emulator collection for a device called the odroid go and the odroid go is a small handheld with similar input and size constraints as the nintendo game and watch and so it's kind of cool to port this over because it it basically already did all of the hard work so to say and retro go comes with emulators for the nes for the gameboy and the gameboy color and even for the sega master system and the sega game gear and after a couple of days konrad actually was able to show off his nes emulator running zelda and other games such as contra and so on on the nintendo game and watch this is super fun and initially we only had really a basic emulator that you know could barely play and we had a lot of frame drops we didn't have nice scaling v-sync and so on but now after a couple of weeks it's really a nice device to use and to play with and so we also have a gameboy emulator running and so you can play your favorite gameboy games such as pokemon super mario land and so on on the nintendo game and watch if you own the corresponding rom backups and we also experimented with different scaling algorithms to make the most out of the screen and so you can basically change the scaling algorithm that is used for the display 
depending on what you prefer and you could even change the palette for the different games we also have a nice game chooser menu where which allows you to basically have multiple roms on the device that you can switch between we have safe state support and so if you turn off the device it will save wherever you left off and you can even come back to your save game once the battery run out you can find the source code for all of that on the retro go repository from konrad and it's really really awesome other people build for example emulators for the chip 8 system and so the chip 8 emulator comes with a nice collection of small arcade games and so on and it's really fun and really easy to develop for and so you really give this a try if you own a game and watch and want to try homebrew on it team sure being is even working on an emulator for the original game and watch games and so this is really cool because it basically turns the nintendo game and watch into an emulator for all game and watch games that were ever released and what was really amazing to me is how the community came together and so we were pretty open about the progress on twitter and also konrad was twitch streaming a lot of the process and we open up a discord where people could join who are interested in hacking on the device and it was amazing to see what came out of the community and so for example we now have a working storage upgrade that works both with homebrew but also with the original firmware and so instead of one megabyte of storage you can have 60 megabytes of flash and you just need to to replace a single chip which is pretty as easy to do then for understanding the full hardware daniel kathbert and daniel padilla provided us with high resolution x-ray images which allowed us to fully understand every single connection even of the bga parts without desoldering anything then jake little of upcycle electronics traced on the x-rays and also using a multimeter every last trace on the pcb and he even created a schematic of the device which gives you all the details you need when you want to program something or so and was really really fun sender vanderville for example even created a custom backplate and now there are even projects that try to replace the original pcb with a custom pcb with an fpga and an esp32 and so it's really exciting to see what people come up with now i hope you enjoyed this talk and i hope to see you on our discord if you want to join the fun and thanks you for coming hi um wow that was a really amazing talk thank you very much uh thomas um as announced in the beginning uh we do accept questions from you and uh we have quite quite a few um let's see if we manage to make it through all of them um the first one is um did you read the articles about nintendo observing hackers like private investigators etc and are you somehow worried about this oh what's going on with my camera looks like luigi messed around with my video setup here um aha yeah i've so i've read those articles but um so i believe that in this case the there is no piracy issue right like i'm not allowing anyone to play any new games if you wanted to to dump a super mario rom you would have done it 30 years ago or on the nes classic or on the switch on any of the 100 consoles nintendo launched in between and so i'm really not too worried about it to be honest um i know something the the aspect of of the target audience is is to be seen here so off to the next question which is um is there do you think that there is a reason 
why an external flash chip has been used? Yeah, so the internal flash of the STM32H7B0 is relatively small, it's only 128 kilobytes, and so they simply couldn't fit everything in; basically even just a frame buffer picture alone is larger than the internal flash, so I think that's why they did it, and I'm glad they did. Sure. And is the decryption done in software or is it a feature of the microcontroller? So the microcontroller has an integrated feature called OTFDEC: basically the flash is directly mapped into memory, and this chip peripheral called OTFDEC automatically provides the decryption and so on, so it's done all in hardware. You can even retrieve the keys from hardware, basically. Oh, okay, very nice. And the next question is somehow related to that: is, in your opinion, the encryption Nintendo has applied even worth the effort for them? It feels like it's just there to give shareholders a false sense of security, what would you think about that? I think from my perspective they chose just the right encryption, because it was a ton of fun to reverse engineer and try to bypass it, so it was an awesome challenge, and I think they did everything right. But I also think in the end it's such a simple device, and if you take a look at what people are building on top of it, with games and all that kind of stuff, I think they did everything right, but probably it was just a tick mark of: yeah, we totally locked down JTAG. But I think it's fun because, again, it doesn't open up any piracy issues. Sure. And this one is related to the NOP slide, which you very well animated: wouldn't starts of subroutines be suitable as well for that goal? The person asking says that a big push of LR, R4, R5 etc. instructions is quite recognizable, how would... Yeah, so absolutely. The time from finding the data in the ITCM RAM to actually exploiting it was less than an hour, and if we had tried to reverse engineer it, it would be more work, absolutely possible and also not difficult, but just filling the RAM with NOPs took a couple of minutes, so it was really the easiest and the fastest way, without fiddling around in Ghidra. Okay, cool, thanks. And this is more a remark than a question: the person says it's strange that ST's AN5281 does not mention a single time that the data is not verified during encryption, I think it's more a fault on ST's than on Nintendo's side, what would you think about that? Yeah, I would somewhat agree, because in this case, even if you don't have JTAG, an ARM Thumb instruction is two to four bytes, so you have a relatively small space to brute force to potentially get an interesting branch instruction and so on. So I think it's, yeah, I mean, it's not perfect, but also doing verification is very expensive computation-wise, and so I think it should just be the firmware that actually verifies the contents of the external flash. Okay, so I think we have two questions more and then we can go back to the studio. There's a question about the AES encryption keys: have you managed to recover them? Yes, we did, but so there is an application note by ST and they do some crazy shifting around with the keys, but I think even just today, like an hour before the talk or so,
a person on our Discord actually managed to rebuild the full encryption. But I personally was never interested in that, because after you've downgraded the device to RDP level 0 you can just access the memory-mapped flash and get the completely decrypted flash contents, basically. Sure, thanks. And a last question about the LCD controller: whether it's used by writing pixels over SPI, or if it has some extra features, maybe even backgrounds or sprites or something like that. So the LCD itself doesn't have any special features; it has one SPI bus to configure it and then a parallel interface, so it takes up a lot of pins, but the chip itself has a hardware peripheral called LTDC, which is an LCD controller that provides two layers with alpha blending and some basic windowing and so on. Okay, cool, then thank you very, very much for the great talk and the great intro, and with that, back to our main studio in the orbit. Thank you very much, back to orbit.
|
On November 13., Nintendo launched its newest retro console, the Nintendo Game and Watch - but by then it was already hacked! In contrast to the other Nintendo classic consoles (NES & SNES), Nintendo upped their game this time: A locked processor, AES-CTR encrypted flash & co. made it significantly harder to hack it, but in the end it was still hacked - one day before release. This talk walks through the whole process of opening it up, exploiting the firmware up to bringing homebrew to a new console - in a fun, beginner friendly way. The Nintendo Game & Watch was anticipated by a lot of retro-interested folks, and the clear expectation was: We wan't to get more games onto this device! But Nintendo made the life of hackers harder: The CPU is locked, the external flash AES encrypted, and the USB-C connector does not have its data-lines connected. But not so fast! In this talk we learn how to exploit the firmware, get code-execution via a NOP-slide, dump the ROMs & RAMs of the device and achieve what everyone has been asking for: DOOM running on the Nintendo Game & Watch.
|
10.5446/52091 (DOI)
|
Okay, as we said the years before, the force merged to main. Andi is commonly known in our scene. His current talk CIA vs WikiLeaks, Intimidation, Surveillance and Other Tactics, Observed and Experienced. In his talk, Andi aims to report and shows a collection of his observations, physical, visual and other evidences of the last year incidents that strongly indicate the context of US Central Agency, Central Intelligence Agency and potentially other entities of the US government acting against WikiLeaks and surrounding persons and organizations. Please welcome with a very warm digital applause Andi. Okay, I have no idea how a digital applause works here, but thanks for it anyhow. At the beginning I want to make and I have to make a few disclaimers so that you know which perspective you're getting here. I'm working as a data journalist for quite a while around the topics of surveillance, signal intelligence, data security and running this funny bug planet. Even started that bugplanet.info before Snowden came with all his documents, but I did work a while with his documents. However, this talk is a bit different as I'm not talking about things that I learned and studied or whatever, but I experienced myself. I'm describing events here where I was targeted. And so I might not be the most neutral person in this scenario, but I'm trying to be technically as accurate as possible anyhow. So forgive me if I'm a bit grumpy about these people. That's just because of the perspective. Secondly, well, I've also, and the CCC of course has been addressing human rights issues in the digital age for a long time. We and I personally co-founded EG, the European Digital Rights Initiative, to ensure the enforcement of human rights in the digital environment. However, what happened here is slightly beyond digital rights. It goes into real life. And while I'm a German citizen, I now know roughly what kind of laws have been violated in respect to the German environment. I absolutely would welcome people who help me analyze and understand it from the perspective of the universal human rights because there is similar cases with people living in other jurisdictions and so on. Second slide of disclaimer. Sorry, I let it so much. So I'm addressing with this talk activities against people surrounding and have been ent or surrounding Julian and our WikiLeaks and or other members of WikiLeaks. Whatever I describe here is I have personally observed and experienced it. So it is for sure very incomplete. It's at best a fragment of what's gone on. But you will, in case you haven't heard about it yet, that Pompeo made some very clear statements when he was head of CIA is pretty clear where to attribute these things. And lastly, there is, of course, other prisoners mentioned, but I'm keeping them out here for all kinds of reasons. But there will be the time when we will hear more reports and other perspective of this particular situation. So here's my little overview. I want to get you an idea how to get into such a mess just in case you wanted. The context of the timeline, a bit of psychology as it's important because at some point you not only get paranoid, you have this drive to, no, this can't be true, right? You have this cognitive dissonance drive inside of you that you would like to stay sane. The new normal of IT incidents, we're all used to that, COVID versus overt. What I mean with the term intimidation of violence, physical events and their impact about the elephant in the room, the problem of the missing socks. 
And at the end, a little bit of questions: whether my paranoia is infectious, and how to get out of this mess, maybe, also. So, how to get into such a beautiful mess? Wait, it's not beautiful. Well, there are some ideas we share in the hacker community usually, and it's not far from how you get into the journalist community: information should be free; free flow of information is a bit of a requirement for world peace. And we had this, and I personally also had this type of self-conception, this self-understanding, for 20 years already when WikiLeaks started around 2006. So this is not that I was jumping, or anybody in the scene was jumping, onto something that didn't exist until then. But WikiLeaks turned out to be an extremely good concept as a democracy test: if governments cannot deal with full transparency, well, that tells you a lot about them. But of course, that is similar to working in journalism: when you expose things in journalism, be it corruption, be it hypocrisy of politicians, be it blunt lies or whatever, it's not always about making friends. It's yes, partly making friends and partly pissing people off. That happens. However, in this particular environment that Julian inspired to create, there are some, yeah, even cultural misunderstandings. For example, the word conspiracy: for us in Europe, I think many of us in the German hacker scene are inspired by Robert Anton Wilson's way of saying, oh, a conspiracy, the world is full of them and we should join the best of them. But in the American context, the word conspiracy is a legal term, unfortunately. And when you are with American citizens in a room and talk about conspiracies, they often get very nervous, and it's kind of a completely different attitude, because it's like the US term to define people who belong to a group like organized criminals or organized, you know, this T word, this other type of entities. And of course that's absolutely not what we want to get involved in. But sometimes we, mistakenly or misunderstanding the joke about conspiracies, and people listening to this believe it and get it completely wrong. And I fear that this is also what happened and how me and others got into such a mess. So in the end, of course, journalism, and that's similar to dealing with data from a hacker's perspective, is about supporting media with data and information and so on. So here's a bit of a timeline to give you a timeframe. After I was, for about two decades, CCC spokesperson and board member, blah, blah, blah, I moved to the board of the Wau Holland Foundation. The Wau Holland Foundation actually collects money for WikiLeaks under the aspect of Wau's idea of supporting freedom of information, since 2010 or so; I joined a little later. However, when WikiLeaks started to publish the Afghanistan and the Iraq war logs, the diplomatic cables, that already triggered legal investigations, and of course the arrest of then still Bradley, now Chelsea, Manning later. So it was always clear, more or less right from the beginning, that there's legal trouble on the way, that there's a secret grand jury, and that the Americans didn't really appreciate their war crimes being exposed and their diplomatic cables being in the Internet, understandable and readable for all of us and the media worldwide and so on. Of course, when people come together and gather in any project, you have human beings, they have characters, they have mistakes. They do things that are not always great.
So I'm not trying to say here that everything was always great and it was only the CIA messing it up. No, humans make mistakes. And these mistakes, in such an environment, of course get exploited, get amplified and so on. In 2017, WikiLeaks started publishing some CIA documents, a whole series of them, the so-called Vault 7 documents. And those documents describe their technology, exploit programs from the CIA. Probably most of you will know them; if not, you can all look them up. These included tools that allowed the CIA to pretend to be someone else, including coming from another country, speaking another language, be it from Russia in Russian, be it from Iran in Farsi, and so on. And Pompeo, who was at that moment still head of the CIA, got very upset. There are two references for this: one is from April 2017 and another is from February 2018. In his first public speech as CIA director, on the 13th of April 2017, he made a speech at a conference in Washington, and he said things like WikiLeaks walks like a hostile intelligence service and talks like a hostile intelligence service, and called WikiLeaks a non-state hostile intelligence service. So for those of you who know a little bit about information science: data is actually something you can technically measure, information is data in a context, and intelligence is information processed to a level where you can make decisions based on it. So being a public intelligence service, I would say, from that perspective, is like an honorable term. However, the way Pompeo emphasized it, I think, was slightly not that honorable. He was more comparing it to other state actors and evil forces and so on, because the US understanding of intelligence services, sorry, I need a water, is far away from entities just collecting information. But as you know, they also mess up with other people's lives and so on. However, a year later, in February 2018, he even upgraded this type of statements: a German newspaper reported about what he said at the Munich Security Conference Intelligence Roundtable. And he said a really nasty sentence, something like that most of his time he's dealing with the non-state actors, and that's like al-Qaida, Islamic State, WikiLeaks or Hezbollah. Like, what a list. So I have no idea what turned him into comparing these kinds of things. I mean, Hezbollah, I could say, we in Berlin, we know that they provide actually yummy halloumi and some things. But yes, they are money launderers and suspected terrorists in some areas or whatever; they have been declared terrorists. But their hummus is really good, I can say. However, the point I'm trying to come to: so Pompeo got very upset, he made all these comparisons, and he seems to have allocated resources to deal with WikiLeaks and everybody jumping around it. And it's no surprise, as the Wau Holland Foundation finances selected activities, specific publications there, that we also got into the focus, with us collecting donations and talking with the guys and financing some projects. So, before I'm coming to very concrete events, I want to get one second into psychology. Of course, when things happen to you from the intelligence perspective, they always come with what's called plausible deniability. When there's a guy standing in front of the door watching if you come in and out, it's not just someone watching your door; it's someone reading the newspaper or repairing some electrical pipes or some water pipe or whatever.
I mean, there's always a good reason for him to be there that has nothing to do with what he's doing. And that's basic principle, plausible deniability, how intelligence agencies act in the so-called field, meaning in your home or on the street, they're following you or whatever. So over time, of course, if you have too much of this, you're seeing these patterns. And that's probably mainly called paranoia. So you get like, you know, suspicious of everything that happens that might be very legitimate, but you get like the feeling that something is wrong and so on. And that can be, we could also instead of paranoia, call it situational awareness at some points, because if it really happens, it has nothing to do with your mind getting crazy. It's just an accurate observation of patterns that happens around of you. But you might know that, and your two friends who experienced the same might know that, your girlfriend, your partner, the normal people you deal with, they might all not understand this and think that you're driving nuts. And this driving nuts is of course an element that you always have to be self-critical, because on the one hand side, you might indeed see too much things happening that do not really happen. On the other hand side, there's also this human drive that we don't want this CIA guys to be in our life. We want everything is to be fine. And to some extent, maybe that's even healthy to not see the monsters all the time. But if they are really there and you start denying them while they sit in front of you, that's also not so helpful. So I found myself in this kind of weird environment where all these kind of thoughts come up all the time. And I'm starting with the most harmless stuff. So internet attacks. I will, our internet incidents, I would, IT incidents, I call it here. Due to the pure volume of it, I will put this into separate presentation one day or report and in the next days or weeks or months. And you can all have fun with it. But she has just some basic patterns. So devices you use as communication terminals or communication devices, they always have issues when you start to do encrypted stuff. And it's always when you do it with specific people. So that's then mobile phones with data service. At some point, all of them have started to have issues. Very high volume of use data. Apps disappear if you use them at all. I stopped using them at all. Now high battery use it when you did nothing with your phone over hours and you were wondering what's going on. Okay, yes, we have Faraday bags. We put them somewhere else. But still it's a little weird when your battery is empty half day. On LTE, when I configured my phone to be on LTE only, it worked mainly fine next to that I couldn't make normal phone calls. But when I had it in the normal mode, it got downgraded to 3G and there my encrypted connection started to have problems. On my fixed lines, my VPNs, when I try to build up a VPN, it shows me certificate errors and problems. And then of course, you deal with journalists, which I'm doing with my colleagues all the time and they are not technical experts. They all have their Macs and so on. So they have funny issues with their PGP keys not working anymore with their PGP setups not working anymore. Yes, it's also because it's open source software, but there's also something going on. But this is kind of the world we all know and we get used to it. This is like, okay, IT doesn't work, secure connections break well happens all the time. 
From about mid-2017, when I still regularly, like once or twice a month, was flying over to see Julian in the embassy, I realized that there was something changing with my treatment at the border. That's, of course, purely UK border police officers, and they started to ask funny questions like: do you live in the UK? What's your occupation? How long do you stay? What do you do in the UK? Before then, there was maybe one question, but not three or four of them. And the most important thing was that I realized that he did not even listen to my answers. Sometimes he started the first question after I answered the third, and I was feeling like in a limbo, like with a machine that would randomly ask me things. But I then realized he was just waiting for the green light on his screen to let me go, and that green light probably meant that the team outside was ready to pick me up. And that's what happened. So I get into the UK and have people follow me like the whole fucking day, not only on the way to the embassy and from the embassy back and so on. I once spotted one of those persons sitting on the street level on the other side, watching the whole time I was in an office at ground level. And because I had a bit of experience with that in continental Europe, like in Germany: if you realize these guys go after you and you put your camera on your table or even start to make photos of them, they're very quickly gone, because they don't want to be relocated, they don't like to be exposed and so on. But the British behaved in this scenario completely differently. So he was getting very aggressive body language, spotting, looking back and so on. So that was a little weird. That same day, at three o'clock in the night, when my friends drove me to the place where I was sleeping, on a one-way street, there was still a car following us, even in the one-way street, so actually he had to turn back and so on. That was no more covert surveillance; that was already the edge of intimidation. And then over the next months I started to have new favorites, not only in England but also in other countries: I would see homeless-looking people on the street level, sitting there begging or leaning against some buildings. And at some point I had to realize that the cheap plastic bags they were wearing were just a cover for cameras, actually with zoom, digitally pointed in my direction. So that felt a little... And so the idea of this measure, if you look at their manuals, which you'll find somehow in the internet, is the difference between covert surveillance, which is to find out where you are, and the open surveillance, which I call intimidation surveillance: the idea is you create in the person, in this case me, a state of distress. So you're constantly looking around and you obviously have the idea something is going on, and they let you know. They want to let you know. And that's a little weird. So in April 2018, actually in March 2018, I brought one of my cryptophones, in this case a desk phone, a SIP phone called Snom 870, back to our workshop here to repair. The display had been exposed to heat and got a little melted; it's not a super high quality LCD display. So I wanted just to replace the display. So I opened the thing and I found actually a box, and that box turned out to be a very sophisticated thing. There was a battery pack, there was a shielded thing.
Behind that shielded thing was a module that had been soldered in. It was based on an FPGA, some hardware crypto elements, 16 gigabytes of flash ROM. It was completely passive, so I wouldn't have found it in any sweep, because it just recorded whatever I talked on that encrypted phone and would be triggered by high frequency to send out the recorded stuff, encrypted, in a burst signal. You see here a URL where you can find more pictures online to give you an idea. This is the thing I found, this is how it looked at the beginning. The phone itself has two PCBs, one for the keyboard and one for the connectors, the processing and so on. This was the modified version of the keyboard PCB, with this battery pack in blue and the shielded module. And here you get an idea of what was in there. That's pretty high tech. We did of course look into what exactly we have here, when these chips were produced, what it does and so on. But it's pretty obvious, for those who have read the Snowden documents intensely, that this is what's called the Special Collection Service. Inside there, there's a group called Tailored Access Operations, TAO, and they work together with a thing called PAG, the Physical Access Group. And that was the thing: it was not only built into this phone, that phone had been of course in a locked room, and I had to ask myself, okay, what happened here? Here you see how they grabbed the audio with a glued-on mini PCB from the main controller into their little technology. And here you see a comparison picture: to the right you see the original keyboard PCB, which has almost nothing on it, and to the left you see the modified version. A friend of mine made a bit of a diagram and, yeah, I'll just leave it for you, you can review it later; I'll upload these PDF slides of course. So here are some aspects of what was going through my head over the time. Of course the first question was: how long was this there? No idea. It could be years. The components we identified were produced around, or no earlier than, April 2013. So if you remember, Snowden came with his revelations mid-2013 roughly, and I had been working for the Spiegel with others on the Snowden documents, next to that phone, and coordinating a lot of it in the year 2013. So in theory it could even be related to that. Who knows? The dimensions suggest clearly a non-metric origin. The antenna would work in the range of 800 MHz. You find here a mention of a PDF that tells you something about these groups. But I did talk to some people who do professional sweeping, meaning looking for audio bugs and so on in devices and rooms, and they told me, from the experience of the Cold War until today, that the operation to bring something into a room, into a device, is quite an effort, because you need to secure it, you need to ensure you don't get caught and so on. And so what you normally do, because technology can fail, is you do not install one bug, you install at least two. In the Cold War, people from the elder generation told me, the relationship was one to eight, because technology failed a lot back then. However, that made me of course think: okay, what else could there be? What can I do to find the rest, and so on? Does it even make sense? Can I secure all the rooms that I use to work here and there in such a way that I could be sure? And of course I can't be.
So this was a first hard confrontation with my own cognitive dissonance because all that you know surveillance theater where I thought, okay, Julian has some trouble and I think I have something to do with it. And when I travel to England, okay, they follow me, you know, you get used to that kind of things. But like something you can have in your hand and that's outside of IT incidents that means that all your encrypted communications have been listened to while that field shitty. So that's what I call a hard confrontation with my own cognitive dissonance. The next thing I want to talk about is very recent. It's about one and a half months old now. When here in Berlin, I went out actually very early in the morning to get some stuff from a grocery in a time of pandemic when no one is in the shop at seven something in the morning. I come back half an hour later and the key to my apartment door does not fit into the cylinder anymore. That felt a bit shitty. It was not a normal cylinder. It was a so-called stealth cylinder. You might want to look at the internet what it is. It's a Swiss company who's doing nice keys that you cannot photograph and copy because it has inner elements with a sophisticated mechanical way of opening. I did however when I bumped into my door and had to, first I called my locksmith's dude or my friend from the lockpicking industry. I could say who had advised me to buy that cylinder. I talked with my lawyer and we agreed it's a good idea to call the police to put it on the other list of things that I had collected until then. I then realized that I had been followed that morning but I didn't take any attention to it because I was just walking in half automatic mode to the grocery and there was a couple talking such a bullshit. They will probably listen to this talk and will remember that dialogue. It was just not making any sense but I was too polite to point it out. They were very closely. It was not about where I was going. It was about that I was not at home. They ensured that in the timeframe that I was there, the other guys could operate and so on. That is an ongoing investigation but I can tell you this is the next incident where like cognitive dissonance and the illusion you want to give yourself. I'm not important in this game. Yeah, these guys follow me here and there and this feels kind of different. This is no more nice. Here's a little bit of a get the idea of the cylinder. You cannot really see the object that was inserted but at the end we didn't get it out for forensic reasons. We had to drill. Police went through the apartment and so on. Yeah, another interesting day you can have. So here's some aspects that I asked myself. So was it even my cylinder that I couldn't open? Maybe they could not lockpick the original stealth cylinder I had. They had to open it in a violent way. They were in the apartment to whatever put another bug in there. But as they couldn't replace it with the original cylinder, as they had destroyed it, like they put another one in and that's why my key wasn't fitting. It's an option. The next option was it maybe a trap to make me replace the broken cylinder with a cheaper one with a more simple one that they could open then afterwards when I was gone. Next option. Or maybe was it not about the door at all? Was it maybe just to freak me out? Of course, it feels not so great if you can't open your own apartment door and so on. 
And the fourth question was of course: oh gosh, how much time did I spend that day with the police, with drilling open the door, with all that kind of things? It more or less cost me a day. And what maybe happened to my machines, meaning my computers, my other things; where was my attention not in that time frame? Because it could have been a pure distraction thing: we freak him out a little bit, and while he's freaking out, we do other things in his office or whatever. I can't rule it out. And then of course, I mean, the police sent me some funny questions, I'm still working on that, like, yeah, should I name Pompeo as a suspect? Not sure. But maybe I should. Discussing it with my lawyer and so on. And also, is it maybe related to the date? This was the third of November. In case I need to say it, the third of November is the election day, or was the election day, in the United States. And there was some accusation that WikiLeaks had something to do with the election some years ago. However, the next event, incident number three, has to do with something that happened in between, because on Monday, the day before they messed up with my door, I had shipped some documents to Spain, I realized then. Those were legal documents that required me and a friend going to the Spanish embassy. We gave a power of attorney and so on, because we are also accusing this company UC Global, which I talked about last year, which was the company running the surveillance, or the protection surveillance at the beginning, on behalf of the Ecuadorians in that embassy, and which later turned out to be working for Sheldon Adelson's company, or at least having a side arrangement there, which is still subject to an ongoing lawsuit. And we participate in that lawsuit because not only Julian was spied on, everybody was spied on who was visiting him and so on. So I had shipped documents on that Monday, at almost six o'clock, at the local post office here, by DHL Express. I put those documents in a sealed bag, that's like a bag with a serial number and so on. That went, together with the list describing the contents inside the bag, into a white envelope that again I sealed with seal tape. Then I gave that to the post office, but they insisted it gets into a DHL Express bag, that's what you get for the 70 euro to be arriving within two days. So yeah, the stuff arrived on Wednesday, but all opened. And the Spanish lawyers freaked completely out. They were very sure that this was tampering; you could see that it was sliced open and so on. Yes, you see this funny duct tape here labeled Zoll. But why would the German customs open a document shipment within Europe? That just doesn't make a lot of sense. It's still on the way to be checked; in theory they could do that. But also this incident has some aspects: it's a breach of attorney-client privilege. That's why the Spanish lawyers insisted that we bring this to a criminal complaint. They did on their end right when they received it, and they made these photos. So was German customs even involved? Or was it just duct tape used by some funny people? And why, when I emailed all this to my lawyer with the pictures and everything, did he not receive the email, until he realized on Monday that it somehow ended up in his trash? He also freaked out. And then I talked with DHL, of course. I made a big fuss there and they were like: no, we cannot tell you on which legal grounds the shipment was opened, we cannot tell you who did it, but if you have an inquiry, why don't you send it to the customs?
So without giving me even which customs entity it would be or whatever. And again, of course, this is kind of an interesting story, but I have normally other priorities in my life. So I'm asking myself, oh gosh, how many days should I waste here? I was finding out who opened the fucking shipment. But this is again the state of distress. This is again the effort. And it's again a reminder we are after you. We check your things. We don't like you're suing the CIA suspected company and so on and so on. So coming to a bit of a conclusion of this talk, as we also want to have time for questions and so on, I want to talk about three aspects. The one is the elephant in the room and the problem of the missing sock. So at some point, I don't want to say that I have been completely not in the state of distress. And also, I don't know how this affects my sanity and those people surrounding me. So your cognitive systems get kind of otherwise triggered and you start to see these things everywhere. And when then you wash some socks and it turns out there's a sock missing, the other person in my life was like, OK, CIA. However, I did suspect the bad sheets and we found one of the socks in the bad sheet. So when you know the problem is socks get in the drum sometimes hanging, you wash something different than like a bad sheet. And the bad sheet is an excellent place to hide things that have been in the drama, then get into the bad sheet and you just dry it with it and you don't even realize it and so on. So while I'm a complete, for entertainment reasons, also for, you know, you need to relax your brain in such a situation once a while, I'm totally OK to say this year is just once before everything, including the missing socks. But suspect the bad sheet first and realize that, yes, this is a joke and this is escapism and it helps you maybe to stay sane for the little moment. But in the long term, I don't know. So and that's the I don't know part is the other two slides that are coming out. So what should I do? And I should I invite some friends and declare my office here like a laboratory for surveillance. Yeah, it has been before I looked at surveillance technology, but in this case it's surveillance technology looking at me and my friends. So it's slightly different. It's maybe also important to not get into some kind of auto response mode when things happen because I was thinking also, what the fuck? Why are they doing all these things? It costs them money. It costs them effort. Is it to freak me out? Is it that they think that like, like, like I'm seriously in such an evil mode organization that, you know, they will escalate things and I will start to throw bombs at US embassy or I don't know. I have no idea what their idea is. But I would just try to stay like slow motion and think about it. The next aspect is, however, do I infect other people? And now I'm not talking about my paranoia or my situational awareness, as I would call it, which of course, at some point is ongoing and it's no more sometimes. But when I talk with normal people, with other journalists, with people I deal with for normal things and they visit me and we do whatever kind of social things like normal things like having food and afterwards they call me a day later and say, oh, finally my phone started rebooting twice yesterday and these kind of things. So that you think, OK, it's not my paranoia that is infectious. 
It's actually they obviously want to not only know what kind of people I'm dealing with and look into that technology, they also want to freak them out. So this is not cool. And it also means that the type of ignorance you could normally apply and say, well, ignorance is a blast. Come on, let's have a nice day and forget about all this. That's kind of limited. That's no more an option. And also, while I've been dealing with that type of stress and that type of thing for a while now, and I can say, well, that's how it is. And it doesn't mean that everybody dealing with you can do that. There's people who are seriously freaked out by such a situation and they it creates fear, it creates anger, stress and so on. So that's not cool. So my last slide that ends up with a question to you guys is how to get out of this mess. So option one, I managed to get proper authorities to make the CIA stop acting illegal. OK, I heard the laughing. I know this ridiculous. But it would be so beautiful. Justice prevails. The German authorities, the European ones, pick it up. I finally managed to escalate it to the General Bundesanwaltschaft. I do not have to talk with the German intelligence services, as I'm not sure they would be helpful in this game. And they make the CIA stop acting illegal and against me and the other person surrounding. Beautiful dream, but OK, not very realistic, maybe. Option two, Pompeo realizes Jesus lost WikiLeaks and whatever shall become true will be become true. He reads it in the Bible. Pompeo seems to be, if you look at his Twitter account, reasonable believe in Jesus Christ and all that thing. So he realizes it's all wrongdoing against Julian and WikiLeaks and all the people targeted in that context and stops it. I know, OK, shit happens. Well, come good. If that's realistic, I don't know, you tell me. And the third option, I don't know, maybe you have some ideas. But that's my question to you, the audience. And that's the end of my prepared part of the talk. And with these words, thank you, Andy, for the brilliant talk. In the meantime, I received a message. A third option would be to have a great wine yard. Sorry. I personally... Yes, it's completely right. I consider it actually maybe I should do something with goats, become a farmer, or, you know, yeah, there's these options. But I thought before I give it up and find my way on the countryside, I outsource the problem to the community and see what they think. In the meantime, I think there is plenty of time for a great white wine. But to our questions. We have indeed plenty of questions. The first question would be, how would you compare the surveillance of the CIA or other to the surveillance of the GDR? So for the Deutsche Demokratische Republik? Well, I'm born in Hamburg in West Germany. I lived in East Germany when the government was already falling into pieces. It was technically still there. I'm not the best person to compare it. But I did talk with a person I know who worked for the Foreign Intelligence Services because there was... You know, I simplified here, of course, the incidents a little bit. There was one scene when later I went into my kitchen that day when my door lock got tempered with and I found a blue plastic glove. And I don't have blue plastic gloves. And I asked my locksmith's guy. He was like, no, it's not from me and the police had black ones. So I thought, what the fuck? 
Maybe the guys have been inside the apartment, which I didn't thought earlier because it was second lock and the police checked and so on. And then I talked to, discussed it with this person I know who's quite friendly man was working in the Foreign Intelligence of that country. And so he was like, you have to look at it from a cost effectiveness point of view. Like that piece of plastic cost you 10 cent, nothing. And it freaks you out three months. So see how much, how cost effective it is. And I mean, that's a good aspect. That's a good point. And so I think that the East German, the guys, the East German intelligence guys, they also, they knew very well the difference and they had both instruments in their program to either do covert surveillance, really like not let you know. And the department thought we let him know and see how he reacts or we let him know because he's ongoing doing things and we want him to, you know, stop it and get intimidated and so on and get scared maybe or his wife gets scared or these kind of things. So I think it is comparable. Cool. Oh, well, not cool. Speaking of covert versus over civilians, sorry, as you now know, does it still bother you emotionally? Well, what bothers me sometimes is, you know, it's also it has a sometimes it's nice to be alone and it's sometimes nice to not think about the CIA guys being in the apartment next door or in my case in the apartment under me or in surrounding environments. But thinking about normal things like playing a puzzle or seeing some funny spy movies. Oh, wait, that's almost relaxing. No, seriously, at some point it sucks a little bit. I can't kind of deal with it. I mean, this 2020 year has of course complicated or has made it almost impossible to travel. So normally I escape my intensity of my work situation with travels, maybe I can do that this year. So I'd feel a little more intense and it annoys a little bit and I would like to get these guys out of my life and do something useful with their life or whatever. Now, the next question, he or she or the person or creature probably missed it. Do you disassemble all your devices on a regular base? No, I usually do just regular and seal them. In this case, the seal had had an issue with a with a heat as well. So I and that was lousy on checking it, I have to say. So yes, that's something. I mean, if you have one office, you can do that. I tend to work on different continents even and that turned out to be a bit of an issue. So yes, you need to have sleeves everywhere and seals and but even then, you know, Pompeo seems to have justified or have given orders to do these things no matter the costs. And my expectation to have like a private or secure encrypted channel so is very limited for a while watching that effort. The encryption of the cryptophone obviously was good. Otherwise, they wouldn't have had the effort to, you know, build something in. But at the end of the day, for me, it has the same impact. It's like, well, yeah, it's a phone. There's a piece of devices in a room. The room has windows. We've seen what they've done with the embassy windows and so on. So it's like, yeah, security. What a nice idea. But it doesn't really exist. Yeah. Do you try giving a few coins to the homeless looking people to do either some reverse intimidation or good deed if they are not CIA? Yeah, that's a I mean, I had this one particular situation where I was waiting for someone on a kind of a shopping street and I just I just thought something is wrong with a guy. 
But when I saw the camera and son, he also rushed away. So no, I didn't give them the money. The second scenario. No, but it's a good idea. The thing is that what I started to do is to always have a camera with me. That turns out for me to be important, to be able to document things. And also, most of them, except the British, don't like it when they are being photographed. You either they it's very interesting because normal people do realize when they are being photographed, but these guys are either pretending, no, I don't see that you photograph me. You know, they look but a little bit with too much energy away at the moment. Or they are seriously disturbed and go away. So the best solution would be to have the boldest, biggest, largest camera always by hand. Yeah. I mean, I've not been a fan of surveillance technology and for sure not of CCTV for a long part of my life. But I start to like the idea of CCTV at some places in my own environment. I'm sorry to say that. But there's compromises you can make like survival feed, you know, other parts you don't always need to face is if you need to face is there's options. And still analog photography is a great thing. But that's my personal opinion. You maybe you want to you can talk maybe you cannot talk about the use other counter measurements. You want to talk about or you can talk about. No, obviously, don't want to talk about it. But I mean, I've been I was wondering myself how why I had this rather intense things going on. I was wondering, is it the timeframe? Is it me as a person? It might have to do with actually being in this funny scene. Of course, I've learned. I mean, I know lockpicking persons have always had an eye on having good locks based on their advice and understanding how easy it would be otherwise. And using encryption was also not always about like hiding something. It was just good practice of having privacy and operational security. So for me, that was very normal for many years to do that and maybe, you know, compared to other persons that made me more interesting. I don't know. I'll find out one day. But I think it's a good idea for everybody involved to think about these three aspects, physical security, encryption, and also what kind of ways do you have to realize if something is being tampered with? Yeah. And that's not necessarily monitoring. I mean, monitoring can help. But on the other hand side, yeah, with monitoring systems, they can also deal with. Like physical checks, some kind of. Our next question. Do you ask the police at the border if everything is prepared now? You mean at the British border? Probably that's a reference to. I don't travel to the UK anymore. I decided, you know, after they dealt with Julian there, I don't like that place anymore. I never felt so well there. And actually, maybe I forgot to mention that after this kind of treatment at the border started, I also started avoiding sleeping in the UK. So I made day trips sometimes in order to get the last plane out of the country. I was flying to Zurich first because it was a late flight to Zurich and then next morning to Berlin, I felt in Zurich better at a bar or fish-hitty hotel than in London Central City with. Yeah, the special relationship as it's called between the intelligence of the UK and those of the US. I see. Speaking of sleeping or in this case concerning your apartment, the question would be, would some home surveillance system bring some relief, for example? Well, that's like a change in devil with the other dude, right? 
Yeah. I mean, no, I'm not really a friend of that. But yes, of course, I had to at the end of the day at least check with my door and so on what I can do to detect and record things and so on. But it's not a pleasure. It's not like, I don't know. I mean, yes, you end up doing that kind of shit, but that's not how life on planet Earth should be. Yeah. Yeah, it's a kind of a trade-off for what return. And I think that the thing is, I mean, look, I'm a German citizen. What I'm doing is constitutionally protected. I live in the governmental district of Berlin. It's fairly safe here. But I have friends in other places. Other situations in life is completely different there. And that is more what worries me that I'm in a relatively cool position, secure position. That's why I can talk about these things. But I have friends who have a more severe situation and they are not sure they should talk about it to not escalate things. And that's a very tricky choice to make, maybe. Yes, indeed. That brings us to another question. And I think this is a perfect point to mention that. Can we do, what can we do to support you in getting out of this mess? And what can we do in general for this? Well, I really appreciate the question. I don't have a good answer, but I think, yes, I would like to discuss more with people about what can be done. I mean, for the moment, I'm dealing with police, lawyers, the speaker guys I'm working with, they also find some ways maybe to address it. But it seems like at least if it comes to Julian's situation, things are badly escalated and it's all a bit interrelated. So I don't have a good answer at this moment, but I think it's a good idea to discuss it more and so maybe identifying other people who are in some kind of a risk situation because these things happen. And as I maybe hopefully was able to show it's not that difficult to get into such a mess, it happens. Yeah, and speaking of discussing, you mentioned earlier, there is a big blue button to discuss. Any further, you will find it in the 2D area in the 2D world in the whistleblower wiki. Is it right? Yes, in the tent, actually, I was told. And the tent is the URL to the big blue button or somehow it's interlinked there. So again, please go out, explore the 2D world and of course the whistleblower tent. We still have some minutes left. How do you do mentally? Did you use any methods to keep your head clean or clear and freak out? Yeah, that's a good question. I drink too much vodka, but I try to keep it with good quality. Let me say it like this, the real trouble is maybe that while in this scene here, people have a rough understanding of this type of things already. I also liked to be around with people who have nothing to do with IT, with security, with all these kind of things, so-called normal people. It's refreshing to be with them, but their ability to understand this mess is a little bit limited. So I think others judge better how I'm doing mentally. I'm trying to keep my head up and finding a good way out, but if anyone has a good idea, I'm really all for listening and see what's possible. In this case, I can come back to the win yard. It's pretty relaxing to have work in the late autumn. Even during a pandemic? Okay. You just have to find the way there. It's outside and there's a lot of distance between the people. I think this will work. So the last question, red or white wine? Red wine. Red. Yeah, definitely. And thank you for all this. 
Just to point out, please: we also have work to do to get Julian out of there, and to help the others who are in this mess and can't even talk about it. I really appreciate the opportunity to talk to you guys here, but it's also about the others. And let us get Julian out of there, please. With these great words: Andy, thanks for your time. Thanks for being here at the remote chaos. As mentioned, we still have the opportunity to ask you some questions in the whistleblower tent. And with this, have a nice evening, try to relax, and see you later. Thanks. Thanks. Until next time, all of you.
|
In this talk, I aim to report and show a collection of observations, as well as physical, visual and other evidence of last year's incidents that strongly indicate a context of actions by the US Central Intelligence Agency and/or potentially other entities of the US Government against Wikileaks and surrounding persons and organisations. While the area of technical surveillance, SIGINT/COMINT and the related organizations and methods has been more or less well understood in the hacker scene, the tactics and methods experienced and discussed in this talk are of a different type: for the moment, I would call it "intimidation surveillance", as it lacks the aspect of "covert" action. At the last Chaos Communication Congress, I analysed the technical aspects of the surveillance in and around the Ecuadorian embassy where Julian Assange stayed; this talk shows what happened to other people - friends of Assange, supporters of Wikileaks etc. - not only in England, but also in other countries and other parts of the world. The idea is not only to show the scope of activities but also to contribute to a better understanding of these tactics, which might also be applied in completely different political environments where governments act in extralegal ways against activities they dislike, although those activities are not a crime or easily criminalized.
|
10.5446/52101 (DOI)
|
Alright, so again, let's introduce the next talk, Accessible Inputs for Readers, Coders and Hackers, the talk by David Williams King about custom, well not off the shelf, but custom accessibility solutions. He will give you some demonstrations and that includes his own custom made voice input and eyelid blink system. Here is David Williams King. Thank you for the introduction. Let's go ahead and get started. So yeah, I'm talking about accessibility, particularly accessible input for readers, coders and hackers. So what do I mean by accessibility? I mean people that have physical or motor impairments, this could be due to repetitive strain injury, carpal tunnel, all kinds of medical conditions. If you have this type of thing, you probably can't use a normal computer keyboard, computer mouse or even a phone touch screen. However, technology does allow users to interact with these devices just using different forms of input. And it's really valuable to these people because being able to interact with a device provides some agency, they can do things on their own and it provides a means of communication with the outside world. So it's an important problem to look at. And it's one I care about a lot. Let's talk a bit about me for a moment. I'm a system security person. I did a PhD in cybersecurity at Columbia. If you're interested in low level software defenses, you can look that up. And I'm currently the CTO at a startup called Alpha Secure. I started developing medical issues in around 2014. And as a result of that, in an ongoing fashion, I can only type a few thousand keystrokes per day. Roughly 15,000 is my maximum. That sounds like a lot, but imagine you're typing at 100 words per minute. That's 500 characters per minute, which means it takes you 30 minutes to hit 15,000 characters. So essentially, I can work like the equivalent of a fast programmer for half an hour. And then after that, I would be unable to use my hands for anything, including like, you know, preparing food for myself or opening and closing doors and so on. So I have to be very careful about my hand use and actually have a little program that you can see on the slide there that measures the keystrokes for me so I can tell when I'm going over. So what do I do? Well, I do a lot of pair programming for sure. I log into the same machine as other people and we work together. I'm also a very heavy user of speech recognition. And I give a talk at that about voice coding with speech recognition at the Hope 11 conference. So you can go check that out if you're interested. So when I talk about accessible input, I mean different ways that a human can provide input to a computer. So ergonomic keyboards are a simple one. Speech recognition, eye tracking or gaze tracking. So you can see where you're looking or where you're pointing your head and maybe use that to replace a mouse. That's head gestures, I suppose. And there's always this distinction between bespoke, like custom input mechanisms and somewhat mainstream ones. So I'll give you some examples. You've probably heard of Stephen Hawking. He's a very famous professor and he was actually a bit of an extreme case. He was diagnosed with ALS when he was 21. So his physical abilities degraded over the years because he lived for many decades after that. And he went through many communication mechanisms. Initially, his speech changed so that it was only intelligible to his family and close friends, but he was still able to speak. 
And then after that, he would work with a human interpreter and raise his eyebrows to pick various letters. And then, keep in mind, this is like the 60s or 70s, right? So computers were not really where they are today. Later, he would operate a switch with one hand, just like on off, on off, kind of Morse code and select from a bank of words. And that was around 15 words per minute. Eventually, he was unable to move his hand. So a team of engineers from Intel worked with him and they figured out they were trying to do like brain scans and all kinds of stuff. But again, this was like in the 80s. So there was not, not too much they could do. So they basically just created some custom software to detect muscle movements in his cheek. And he used that with predictive, predictive words the same way that a phone, smartphone keyboard will predict like which word you want to say next. Stephen Hawking used something similar to that except instead of swiping on a phone, he was moving his, his cheek muscles. So that's obviously a sequence of like highly customized input mechanisms for, for someone and very, very specialized for that person. I also want to talk about someone else named Professor Song Muk Lee, whom I've met that was me when I had more of a beard than I do now. He, he's a professor at Seoul National University in South Korea. And he, he's sometimes called like the Korean Stephen Hawking because he's a big advocate for people with disabilities and whatnot. Anyway, what he uses is you can see a little orange device near his mouth there. He, it's called a sip and puff mouse. So he can blow into it and suck air through it and also move it around. And that acts as a mouse cursor on the Android device in front of him. It will move the cursor around and click when he, when he blows air and so on. So that combined with speech recognition, lets him use mainstream Android hardware. He still has access to, you know, email apps and like web browsers and like maps and everything that comes on a normal Android device. So he's way more capable than Stephen Hawking was because Stephen Hawking could communicate, but just to a person at a very slow rate, right? Part of it is due to the nature of his injury, but it's also a testament to how far the technology has improved. So let's talk a little bit about what makes good accessibility. I think performance is very important, right? You want high accuracy, you don't want typos, low latency, I don't want to speak and then five seconds later have words appear. It's too, it's too long, especially if you have to make corrections, right? And you want high throughput, which we already talked about. Oh, I forgot to mention Stephen Hawking had like, you know, 15 words per minute. A normal person speaking is 150. So that's a big difference. The higher throughput you can get the better. And for input accessibility, I think, and this is not scientific, this is just what I've learned from using it myself and observing many of these systems. I think it's important to get completeness, consistency and customization. For completeness, I mean, can I do any action? So Stephen or Professor Song-Luk Lee, his orange mouth input device, the Sip and Puff, is quite powerful, but it doesn't let him do every action. For example, for some reason, when he gets an incoming call, the input doesn't work. So he has to call over a person physically to like tap the accept call button or the reject call button, which is really annoying, right? 
If you don't have completeness, you can't be fully independent. Consistency, very important as well. The same way we develop motor memory for a muscle memory for a keyboard, you develop memory for any types of patterns that you do. But if the thing you say or the thing you do keeps changing in order to do the same action, that's not good. And finally, customization. So the learning curve for beginners is important for any accessibility device, but designing for expert use is almost more important because anyone who uses an accessibility interface becomes an expert at it. The example I like to give is screen readers, like a blind person using a screen reader on a phone, they will crank up the speed at which the speech is being produced. And I actually met someone who made his speech 16 times faster than normal human speech. I could not understand it at all. But he could understand it perfectly. And that's just because he used it so much that he's become an expert at its use. Let's analyze ergonomic keyboards just for a moment because it's fun. They are kind of like a normal keyboard. They'll have a slow pace when you're starting to learn them. But once you're good at it, you have very good accuracy, like instantaneous low latency, right? You press the key, the computer receives it immediately. And very high throughput, as high as you are on a regular keyboard. So they're actually fantastic accessibility devices, right? They're completely compatible with original keyboards. And if all you need is an ergonomic keyboard, then you're in luck because it's a very good accessibility device. I'm going to talk about two things, computers, but also Android devices. So let's start with Android devices. Yes, the built-in voice recognition in Android is really incredible. So even though the microphones on the devices aren't great, Google has just collected so much data from so many different sources that they've built, like, better than human accuracy for their voice recognition. The voice accessibility interface is kind of so-so. We'll talk about that in a bit. That's the interface where you can control the Android device entirely by voice. For other input mechanisms, you could use like a Sip and Puff device, or you could use physical styluses. That's something that I do a lot actually, because for me, my fingers get sore. And if I can hold a stylus in my hand and kind of not use my fingers, then that's, you know, very effective. So, and the Elacom styluses from a Japanese company are the lightest I've found, and they don't require a lot of force. So the ones at the top there are, they're like 12 grams, and the one at the bottom is 4.7 grams. And you've required almost no force to use them, so very nice. On the left there, you can see the Android speech recognition is built into the keyboard now, right? You can just press that and start speaking in sports different languages, and it's very accurate. It's very nice. And actually, when I was working at Google for a bit, I talked to the speech recognition team, and I was like, why are you doing on-server speech recognition? You should do it on the devices. But of course, Android devices are, they're all very different, and many of them are not very powerful, so they were having trouble getting satisfactory speech recognition on the device. So for a long time, there's some server latency, server lag, right? You do speech recognition and you wait a bit. And then sometime this year, I just was using speech recognition, and it became so much faster. 
I was extremely excited, and I looked into it, and yeah, they just switched on my device at least, they switched on the on-device speech recognition model. And so now it's incredibly fast and also incredibly accurate. I'm a huge fan of it. On the right-hand side, we can actually see the voice access interface. So this is meant to allow you to use a phone entirely by voice. Again, while I was at Google, I tried the beta version before it was publicly released, and I was like, this is pretty bad. Mostly because it lacked completeness. There would be things on the screen that would not be selected. So here we see show labels, and then I can say like four, five, six, whatever, to tap on that thing. But as you can see at the bottom there, there's like a Twitter web app link, and there's no number on it. So if I want to click on that, I'm out of luck. And this is actually a problem in the design of the accessibility interface. It doesn't expose the full DOM. It exposes only a subset of it, and so an accessibility mechanism can't ever see those other things. And furthermore, the way the Google speech recognition works, they have to reestablish a new connection every 30 seconds. And if you're in the middle of speaking, it will just throw away whatever you were saying because it just decided it had to reconnect, which is really unfortunate. They later released that publicly, and then sometime this year they did an update, which is pretty nice. It now has like a mouse grid, which solves a lot of the completeness problems. You can use a grid to narrow down somewhere on the screen and then tap there. But the server issues and the expert use is still not good. Like, okay, if I want to do something with a mouse grid, I have to say mouse grid on, six, five, mouse grid off. And I can't combine those together. So there's a lot of latency. And it's not really that fun to use, but better than nothing, absolutely. I just want to really briefly show you as well that this same feature of like being able to select links on a screen is available on desktops. This is a plugin for Chrome called Vimium. And it's very powerful because you can then combine this with keyboards or other input mechanisms. And this one is complete. It uses the entire DOM and anything you can click on will be highlighted. So very nice. I just want to give a quick example of me using some of these systems. So I've been trying to learn Japanese. And there's a couple of highly regarded websites for this, but they're not consistent when I use the browser show labels. Like, you know, the thing to press next page or something like that, or like, you know, I give up or whatever is, it keeps changing. So the letters that are being used keep changing. And that's because of the dynamic way that they're generating the HTML. So not really very useful. What I do instead is I use a program called Anki. And that has very simple shortcuts in his desktop app, one, two, three, four. So it's nice to use and consistent. And it syncs with an Android app. And then I can use my stylus on the Android device. So it works pretty well. But even so, you know, as you can see from the chart in the bottom there, there are many days when I can't use this, even though I would like to, because I've overused my hands or overused my voice. When I'm using voice recognition all day, every day, I do tend to lose my voice. And as you can see from the graph, sometimes I lose it for like a week or two at a time. 
So same thing with any accessibility interface, you know, you got to use many different techniques. And it's always, it's never perfect. It's just the best you can do at that moment. Something else I like to do is read books. I read a lot of books. And I love eBook readers, you know, the dedicated eInk displays, you can read them in sunlight, they last forever battery-wise. Unfortunately, it's hard to add like other input mechanisms to them. They don't have microphones or other sensors. And you can't really install custom software on them. But for Android-based devices, and they're also like eBook reading apps for Android devices, they have everything. You can install custom software and they have microphones and many other sensors. So I made two apps that allow you to read eBooks with an eBook reader. The first one is Voice Next Page. It's based on one of my speech recognition engine called Silvius. And it does do server-based recognition. So you have to capture all the audio, use 300 kilobits a second to send it to the server and recognize things like Next Page, Previous Page. However, it doesn't cut out every 30 seconds. It keeps going. So that's one win for it, I guess. And it is published in the Play Store. Huge thanks to Sarah Leventhal who did a lot of the implementation, very complicated to make an accessibility app on Android. But we persevered and works quite nicely. So I'm going to actually show you an example of Voice Next Page. This is a... Over here, this is my phone on the left-hand side. Just captured so that you guys can see it. So here's the Voice Next Page. And basically, the connection is green. I can do the servers up and running and so on. I just press Start and then I'll switch to an Android reading app and say Next Page, Previous Page. I won't speak otherwise because it will chapel everything I'm saying. Next Page, Next Page, Previous Page. Center, Center, Foreground. Stop listening. So that's a demo of the Voice Next Page. And it's extremely helpful. I built it a couple of years ago along with Sarah. And I use it a lot. So yeah, you can go ahead and download it if you guys want to try it out. And the other one is called Blink Next Page. So the idea for this... I got this idea from a research paper this year that was studying islet gestures. I didn't use any of their code, but it's a great idea. So the way this works is you detect blinks by using the Android camera and then you can trigger an action like turning pages in an eBook reader. This actually doesn't need any networking. It's able to use the on device face recognition models from Google. And it is still under development, so it's not on the play story yet. But it is working. And please contact me if you want to try it. So just give me one moment to set that demo up here. And so I'm going to use... The main problem with this... The main problem with this current implementation is that it uses two devices. So that was easier to implement. And I use two devices anyway, but obviously I want a one device version if I'm actually going to use it for anything. So here's how this works. This device I point at me at my eyes. The other device I put wherever it's convenient to read. Oops, sorry. And if I blink my eyes, the phone will buzz once it detects that I blink my eyes and it will turn the page automatically on the other Android device. Now I have to blink both my eyes for half a second. If I want to go backwards, I can blink just my left eye. 
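A rough, purely illustrative sketch of the blink-to-turn idea (the rest of the gesture set is described just below). The actual app relies on Google's on-device face models on Android and its code is not shown in the talk; the sketch instead uses OpenCV's stock Haar cascades on a laptop webcam, and every threshold and helper name here is an assumption:

```python
# Illustrative only: crude blink detection on a laptop webcam.
# Heuristic: if a face is found but no open eyes are detected for ~0.5 s,
# treat it as a deliberate blink and trigger a page turn.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def turn_page():
    # Placeholder: a real tool would send a key event or an Android intent here.
    print("page turn triggered")

cap = cv2.VideoCapture(0)
closed_frames = 0
BLINK_FRAMES = 12  # assumed ~0.5 s at roughly 24 fps

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    eyes_open = False
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        # The stock eye cascade mostly fires on open eyes only.
        if len(eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=10)) > 0:
            eyes_open = True
    if len(faces) > 0 and not eyes_open:
        closed_frames += 1          # eyes appear closed this frame
    else:
        if closed_frames >= BLINK_FRAMES:
            turn_page()             # a long, deliberate blink just ended
        closed_frames = 0
    cv2.imshow("blink sketch", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```

Telling the left eye from the right eye, which the back/forward gestures need, would additionally require a per-eye classifier or a landmark model; the sketch deliberately leaves that out.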
And if I want to go forwards, quickly, I can blink my right eye and hold it. Anyway, it does have some false positives. That's why I like you can go backwards. In case it detects that you've accidentally flipped the page. And lighting is also very important. Like if I have a light behind me, then this is not going to be able to identify whether my eyes are open or closed properly. So it has some limitations, but very simple to use. So I'm a big fan. Okay, so that's enough about Android devices. Let's talk very briefly about desktop computers. So if you're going to use a desktop computer, of course, try using that show labels plugin in a browser for native apps, you can try Dragon Naturally Speaking, which is fine if you're just using basic things. But if you're trying to do complicated things, you should definitely use a voice coding system. You could also consider using eye tracking to replace a mouse. I personally, I don't use that. I find it hurts my eyes, but I do use a track ball with very little force and a walk on tablet. Some people will even scroll up and down by humming, for example, but I don't have that set up. There's a bunch of nice talks out there on voice coding. The top left is Tavis Reds talk from many years ago that got many of us interested. Emily Shia gave a talk there about like best practices for voice coding. And then I gave a talk a couple years ago at the Hope 11 conference, which you can also check out. It's mostly out of date by now, but it's still interesting. So there are a lot of voice coding systems. The sort of grandfather of the mall is Dragonfly. It's become a grammar standard. Caster is if you're willing to memorize lots of unusual words, you can become much better, much faster than I currently am at voice coding. A Nia is how you originally used Dragon to work on a Linux machine, for example, because Dragon only runs on Windows. Talon is a closed source program, which is, but it's very, very powerful, has a big user base, especially for Mac OS. There are ports now. And Talon used to use Dragon, but it's now using a speech system from Facebook. Sylveus is the system that I created. The models are not very accurate, but it's a nice architecture where there's client server, so it makes it easy to build things like the voice next page. So the voice next page was using Sylveus. And then the most recent one I think on this list is Calde Active Grammar, which is extremely powerful and extremely customizable, and it's also open source. It works on all platforms, so I really highly recommend that. So let's talk a bit more about Calde Active Grammar. But first, for voice coding, I've already mentioned you have to be careful how you use your voice, right? Breathe through your belly. Don't tighten your muscles and breathe from your chest. Try to speak normally. And I'm not particularly good at this. Like, you'll hear me when I'm speaking commands. My inflection changes, so I do tend to overuse my voice. But yeah, I just have to be conscious of that. The microphone hardware does matter. I do recommend like a blue Yeti on a microphone arm that you can pull and put close to your face like this. I'll use this one for my speaking demo. And yeah, and the other thing is your grammar is fully customizable. So if you keep saying a word and the system doesn't recognize it, just change it to another word. And it's complete in the sense you can type any key on the keyboard. And the most important thing for expert use or customizability is that you can do chaining. 
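To make "fully customizable grammar" concrete, here is a minimal, hypothetical Dragonfly-style rule in Python. It is an illustration, not the speaker's actual grammar: each spoken phrase maps to key presses or text, and engines such as Kaldi Active Grammar load rules written in this same format.

```python
# Minimal, hypothetical Dragonfly command module (illustrative only).
# In practice a module like this is loaded by a Dragonfly/Kaldi loader script.
from dragonfly import Grammar, MappingRule, Key, Text, IntegerRef

class EditingRule(MappingRule):
    mapping = {
        "save file":         Key("c-s"),           # Ctrl+S
        "open new terminal": Key("c-a, c"),        # tmux-style prefix, then 'c'
        "line down [<n>]":   Key("down:%(n)d"),    # repeatable cursor movement
        "say hello":         Text("Hello voice coding!"),
    }
    extras = [IntegerRef("n", 1, 100)]
    defaults = {"n": 1}

grammar = Grammar("editing example")
grammar.add_rule(EditingRule())
grammar.load()
```

Because the mapping is plain Python, renaming a command the engine keeps mishearing is a one-line change, which is exactly the kind of customization being described here.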
So with a voice coding system, you can say multiple commands at once. And it's a huge time savings. You'll see what I mean when I give a quick demo. When I do voice coding, I'm a very heavy Vim and Tmux user. You know, there have been, I've worked with many people before, so I have some cheat sheet information there. So if you're interested, you can go check that out. But yeah, let's just do a quick demo of voice coding here. Turn this mic on. Desk left to control delta, open new terminal, Charlie, Delta space slash Tango mic papa enter command Vim, hotel, hotel point, Charlie, papa, papa, enter. India hash word include space lango. India Oscar word stream wrangle, enter, enter. Indian noitango space word main. Nope, my car to Indian noi. Landrun space lace enter, enter race up tab. Word print Fox scratch nope. Code standard Charlie Oscar uniform Tango space lango lango space quote sentence Hello voice coding bang. Scratch six Delta India, noi golf bang back slash noi quote. semi colon act, Sky Fox mic Romeo noi Oscar word return space number zero semi colon act. Vim save and quit. Golf plus plus space hotel hotel tab minus Oscar space hotel hotel. Enter point slash hotel hotel enter. Desk right to. So that's just a quick example of voice coding. You can use it to write any programming language. You can use it to control anything on your desktop. It's very powerful. It has a bit of a learning curve, but it's very powerful. So the creator of Calde active grammar is also named David. I'm named David, but just a coincidence. And he says of Calde active grammar that I haven't typed with a keyboard in many years and Calde active grammar is bootstrapped in that I have been developing it entirely using the previous versions of it. So David has a medical condition that means he has very low dexterity. So it's hard for him to use a keyboard. And yeah, he basically got Calde active grammar working through the skin of his teeth or something and then continues to develop it using it. And yeah, I'm a huge fan of the project. I haven't contributed much, but I did give some of the harbor resources like GPU and CPU compute compute resources to allow training to happen. But I would also like to show you a video of David using Calde active grammar just so you can see it as well. So the other thing about David is that he has he has a speech impediment or a speech, I don't know, an accent or whatever. So it's difficult for a normal speech recognition system to understand him. And you might have trouble understanding him here, but you can see in the lower right what the speech system understands that he's saying. Oh, I realized that I do need to switch something in OBS so that you guys can hear it. Sorry. There we go. Tim. Number one. Similar comments space. Number one, one, one. Content line. Content, script, enter. Text. Timer assignment. Aposite. Content, enter two times. And then, past rent space. And then, Anyway, you get the idea. And hopefully you guys are able to hear that. If not, you can also find this on the website that I'm going to show you at the end. Oh, one other thing I want to show you about this is David has actually set up this humming to scroll, which I think is pretty cool. Of course, I got and turned off the OBS there, but he's just going, and it's understanding that and scrolling down. So something that I'm able to do with my track ball, but that he's using his voice for. So pretty cool. So I'm almost done here in summary. 
Good input accessibility means you need completeness, consistency, and customization. You need to be able to do any action that you could do with the other input mechanisms. And doing the same input should have the same action. And remember, your users will become experts. So the system needs to be designed for that. For ebook reading. Yes, we're trying. I'm trying to allow anyone to read, even if they're experiencing some severe physical or motor impairment, because I think that gives you a lot of power to be able to turn the pages and read your favorite books. And for speech recognition, yeah, Android speech recognition is very good. Sylvia's accuracy is not so good, but it's easy to use quickly for experimentation and to make other types of things like voice next page. And please do check out Caldea Active Grammar if you have some serious need for voice recognition. Lastly, I put all of this onto a website, voxhub.io. So you can see voice next page, link next page, Caldea Active Grammar, and so on, just instructions for how to use it and how to set it up. So please do check that out. And tons of acknowledgments, lots of people that have helped me along the way, but I want to especially call out Professor Sang-Muk Lee, who actually invited me to Korea a couple of times to give talks, a big inspiration. And of course, David Zuro has actually been able to bootstrap into a fully voice coding environment. So that's all I have for today. Thank you very much. Right. I suppose I'm back on the air. So let me see. I want to remind everyone before we go into the Q&A that you can ask your questions for this talk on IRC. The link is under the video, or you can use Twitter, or the FETIverse with the hashtag RC32. Again, I'll hold it up here, RC number three, TWO. And wow, thanks for talking, David. That was really interesting. Thanks for talking, David. I think we have a couple of questions from the Signal Angels. Before that, I just want to say I recently spent some time playing with the voiceover system in iOS, and that can now actually tell you what is on a photo, which is kind of amazing. Oh, by the way, I can't hear you here on the mobile. Yeah, sorry. I wasn't saying anything. Yeah, no, it's so I focus mostly on input accessibility, right, which is like, how do you get data to the computer? But there's been huge improvements in the other way around as well, right, the computer? Yeah, so we have about, let's see, five, six minutes left, at least for Q&A. We have a question by Toby plus plus, he asks, your next page application looks cool. Do you have statistics of how many people use it or found it on the app store? Not very many. The voice next page was advertised only so far as a little academic poster. So I've gotten a few people to use it. But I run eight concurrent workers. And we've never hit more than that. So not super, not super popular, but I do hope that some people will see it because of this talk and go and check it out. Cool. Next question, how error prone are the speech recognition systems at all? E.g. Can you do coding while doing workouts? So one thing about speech recognition is very sensitive to the microphone. So when you're doing it, you can't do it. Any mistakes, right? That's the thing about having low latency, you just say something, and you watch it, and you make sure that it was what you wanted to say. I don't know exactly how many words per second, words per minute, I can say with voice coding, but I can say it much faster than regular speech. 
So I would say at least like 200, maybe 300 words per minute. So it's actually a very high bandwidth. Question from Pepe, JN, Divos. Any advice for software authors to make their stuff more accessible? There are good web accessibility guidelines. So if you're just making a website or something, I would definitely follow those. They tend to be focused more on people that are blind because that is, you know, it's more of an obvious fail. Like they just can't interact at all with your website. But things like, you know, if Duolingo, for example, had used the same accessibility access tag on their like next button, then they would always be the same letter for me. And I wouldn't have to be like, Fox, Charlie, Fox, Delta, Fox, something changes all the time. So I think consistency is very important. And integrating with any existing accessibility API is also very important, web APIs, Android APIs, and so on. Because, you know, we can't make every program out there like voice compatible, we just have to meet in the middle where they interact at the keyboard layer or the all American has a question. I wonder if these systems use similar approaches like stenography with mnemonics, or if there's any projects working having that in mind? A very good question. So the first thing everyone uses is the NATO phonetic alphabet to spell letters, for example, alpha Bravo, Charlie. Some people then will substitute letters for things that are too long, like November, I use no way. Sometimes the speech system doesn't understand you. Whenever I said alpha, Dragon was like, Oh, you're saying offer. So I had changed it. It's arch for me arch bravchar. So and also most of these grammars are in a common grammar format. They are written in Python, and they're compatible with dragonfly. So you can grab a grammar for, I don't know, for a Nia and get it to work with Caldeactive grammar with very little effort. I actually have a grammar that works on both a Nia and Caldeactive grammar. And that's what I use. So there's a bit of lingua franca, I guess you can kind of guess what other people are using. But at the same time, there's a lot of customization, you know, because people change words, they add their own commands, they change words based on what the speech system understands. LEB asks, is there an online community you can propose for accessibility technologies? There's a there's an amazing forum for anything related to voice coding. All the developers of new voice coding software are there. Sorry, I just need to drink. So it's a really fantastic resource. I do link to it from voxhub.io. I believe is at the bottom of the Caldeactive grammar page. So you can definitely check that out. For general accessibility, I don't know, I could recommend the accessibility mailing list at Google, but that's only if you work at Google. Other than that. Yeah, I think it depends on your community, right? I think if you're looking for web accessibility, you could go for some Mozilla mailing lists and so on. If you're looking for desktop accessibility, then maybe you could go find some stuff about the Windows speech API and the Windows accessibility. One last question from Joe Nielsen. Could there be legal issues if you make an ebook into audio? I'm not sure what that refers to. Yeah, so if you are like doing, if you're using a screen reader and you like, you know, try to get it to read out the contents of a of an ebook, right? So most of the time there are fair use exceptions for copyright law, even in the US. 
And making a copy of yourself for personal purposes so that you can access it is usually considered fair use. If you were trying to commercialize it or make money off of that or like, I don't know, you're a famous streamer and all you do is highlight text and have it read it out, then maybe, but I would say that that definitely falls under fair use. All right, so I guess that's it for the talk. I think we're hitting the timing mark really well. Thank you so much, David, for that. That was really, really interesting. I learned a lot. And thanks everyone for watching and stay on. I think there might be some news coming up. Thanks, and everyone.
|
When people develop carpal tunnel or various medical conditions, it can be difficult to use mainstream input mechanisms like keyboards, mice, and phone touchscreens. Ideally, accessible input mechanisms can be added to mainstream computers and phones. I will give two example demos. The first is using voice or eyelid blinks to control an ebook reader on a standard Android phone. The second is using speech recognition to interact with a Linux desktop, even to perform complicated tasks such as programming. When people develop carpal tunnel or various medical conditions, it can be difficult to use mainstream input mechanisms like keyboards, mice, and phone touchscreens. Such users might have to rely on speech recognition, eye tracking, head gestures, etc. Technology allows devices to understand these mechanisms as user input, and hence provide agency and a communication channel to the rest of the world. Every person's case is a bit different. Sometimes, a fully custom system has to be designed, such as for Stephen Hawking. A team from Intel worked through many prototypes with him, saying: "The design [hinged] on Stephen. We had to point a laser to study one individual." With custom hardware and custom software to detect muscle movements in his cheek, he still could only communicate a handful of words per minute. In less severe situations, the goal is often to add accessible input mechanisms to mainstream computers and phones. Similarly, a blind person will adapt to using a normal smartphone, despite not being able to see the screen. It is not economical to design myriad variants of hardware to handle many different users who all have slightly different needs. Adapting mainstream devices allows a wide range of existing software to be used, from email clients to Google maps, without reinventing everything. In my own case, I have a medical condition and cannot use my hands more than a certain threshold in a day. In order to keep reading books, I designed a speech recognition system and also an eyelid blink system to control an ebook reader app. As a computer programmer, I used to make heavy use of a keyboard to write code. With modern speech recognition, it is possible to design a spoken language that allows precise control over a computer. I will demo an open source voice coding system, and describe how it can be adapted to do any task on a desktop computer.
|
10.5446/52102 (DOI)
|
Welcome with me, with a big round of applause in your living room or wherever you are: Der Joram. Der Joram is a science communicator. He got his university education and his first scientific experience at the Max Planck Institute. And he will now give you a beginners' crash course on the scientific method and on how to distinguish science from rubbish. Der Joram. Hi. Nice to have you here. My name is Joram Schwarzmann and I'm a plant biologist. And today I want to talk about science. I have worked in research for many years, first during my diploma thesis and then during my doctoral research. I worked both at universities and at the Max Planck Institute, so I got pretty good insights into the way these structures work. After my PhD, I left the research career to instead talk about science, which is also what I'm about to do today. I now work in science communication, both as a job and in my spare time, when I write about molecular plant research online. Today I will only mention plant science a tiny bit, because the topic is a different one. Today, though, we are talking about science literacy. So basically: how does the scientific system work? How do you read scientific information? And which information can you trust? Science. It's kind of a big topic. Before we start, it's time for some disclaimers. I am a plant biologist. I know stuff about STEM research, that is science, technology, engineering and mathematics, but there is so much more other science out there. Social sciences and the humanities share many core concepts with the natural sciences, but also have many approaches that are unique to them. I don't know a lot about the way these work, so please forgive me if I stick close to what I know, which is STEM research. Talking about science is also much less precise than doing the science itself. For pretty much everything that I'll bring up today, there is an example where it is completely different. So if in your country, field of research or experience something is different, we are probably both right about whatever we are talking about. With that out of the way, let's look at the things that make science science. There are three parts of science that are connected. The first one is the scientific system. This is the way science is done. Next up we have the people who do the science. The scientific term for them is researchers. We want to look at how you become a researcher, how researchers introduce biases and how they pick their volcanic lair to do evil science. Finally, there are publications. This is the front end of science, the stuff we look at most of the time when we look at science. There are several different kinds, and not all of them are equally trustworthy. Let's begin with the scientific system. We don't just do science, we do science systematically. Since the first people tried to understand the world around them, we have developed a complex system for science. At the core of that is the scientific method. The scientific method gives us structure and tools to do science. Without it, we end up in the realm of guesswork, anecdotes and false conclusions. Here are some of my favorite things that were believed before the scientific method became standard. Gentlemen could not transmit disease. Mice are created from grain and cloth. Blood is exclusively produced by the liver. Heart-shaped plants are good for the heart. But thanks to the scientific method, we have a system that allows us to make confident judgments on our observations. Let's use an example.
This year aged me significantly and so as a newly formed old person, I have pansies on my balcony. I have blue ones and yellow ones and in summer I can see bees buzz around the flowers. I have a feeling though that they like the yellow ones better. That right there is an observation. I now think to myself, I wonder if they prefer the yellow flowers over the blue ones based on the color. This is my hypothesis. The point of a hypothesis is to test it so I can accept or reject it later. So I come up with a test. I count all bees that land on yellow flowers and on blue flowers within a weekend. That is my experiment. So I sit there all weekend with one of these clicky things in each hand and count the bees on the flowers. Every time a bee lands on a flower, I click. Click, click, click, click, click. It's the most fun I had all summer. In the end, I look at my numbers. These are my results. I saw 64 bees on the yellow flowers and 27 on the blue flowers. Based on my experiment, I conclude that bees prefer yellow pansies over blue ones. I can now return and accept my hypothesis. Bees do prefer yellow flowers over blue ones. Based on that experiment, I made a new observation and can now make a new hypothesis. Do other insects follow the same behavior? And so I sit there again next weekend counting all hoverflies on my pansies. Happy days. The scientists in the audience are probably screaming by now. I am too, but on the inside. My little experiment and the conclusions I did were flawed. First up, I didn't do any controls apart from yellow versus blue. What about time? Do the days or seasons matter? Maybe I picked up the one time period when bees actually do prefer yellow, but on most other days they like blue better. And then I didn't control for position. Maybe the blue ones get less sunlight and are less warm and so a good control would have been to swap the pots around. I also said I wanted to test color. Another good control would have been to put up a cardboard cut out of a flower in blue and yellow and see whether it is the color or maybe another factor that attracts the bees. And then I only counted once. I put the two data points into an online statistical calculator and when I had calculated, it told me I had internet connectivity problems. So I busted on my old textbook about statistics and as it turns out, you need repetitions of your experiment to do statistics and without statistics, you can't be sure of anything. If you want to know whether what you measure is random or truly different between your two conditions, you do a statistical test that tells you with what probability your result could be random. That is called a p-value. You want that number to be low. In biology, we are happy with a chance of 1 in 20, so 5% that the difference we observed between two measurements happened by chance. In high energy particle physics, that chance of seeing a random effect is 1 in 3.5 million or 0.0003%. So without statistics, you can never be sure whether you observe something important or just two numbers that look different. A good way to do science is to do an experiment a couple of times, three at least, and then repeat it with controls again at least three times. With a bigger data set, I could actually make an observation that holds significance. So why do I tell you all of this? You want to know how to understand science, not how to do it yourself. Well, as it turns out, controls and repetitions are also a critical point to check when you read about scientific results. 
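To make the p-value mentioned above concrete, here is a small illustrative Python snippet (not part of the talk) using the made-up bee counts. A goodness-of-fit test asks how likely a 64-versus-27 split would be if the bees actually had no colour preference:

```python
# Illustrative only: test whether 64 vs. 27 bee landings could plausibly come
# from a 50/50 split, i.e. no colour preference (the null hypothesis).
from scipy.stats import chisquare

observed = [64, 27]            # yellow, blue (one weekend, no controls!)
stat, p = chisquare(observed)  # expected frequencies default to an even split

print(f"chi-square = {stat:.2f}, p = {p:.4f}")
# p comes out far below the 0.05 threshold used in biology, so the split itself
# is unlikely to be pure chance -- but that says nothing about confounders
# (position, sunlight, season) or about whether the result repeats.
```

With proper repetitions, one would instead compare the replicate counts of the two groups (for example with a t-test), which is exactly where the repeated experiments asked for here come in.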
Often enough, cool findings are based on experiments that didn't control for certain things or that are based on very low numbers of repetitions. You have to be careful with conclusions from these experiments as they might be wrong. So when you read about science, look for science that they followed the scientific method, like a clearly stated hypothesis, experiments with proper controls, and enough repetitions to do solid statistics. It seems like an obvious improvement for the scientific system to just do more repetitions. Well, there's a problem with that. Often, experiments require the researchers to break things. Maybe just because you take the things out of their environment and into your lab, maybe because you can only study it when it's broken. And as it turns out, not all things can be broken easily. Let me introduce you to my scale of how easy it is to break the thing you study. All the way to the left, you have things like particle physics. It's easy to break particles. All you need is a big ring and some spare electrons you put in there really, really fast. Once you have these two basic things, you can break millions of particles and measure what happens, so you can calculate really good statistics on them. Then you have other areas of physics. In material science, the only thing that stops you from testing how hard a rock is, is the price of your rock. Again, that makes us quite confident in the material properties of things. Now we enter the realm of biology. Biology is less precise, because living things are not all the same. If you take two bacterial cells of the same species, they might still be slightly different in their genome. But luckily, we can break millions of bacteria and other microbes without running into ethical dilemmas. We even ask researchers to become better at killing microbes. So doing more of the experiment is easier when working with microbes. It gets harder though with bigger and more complex organisms. Want to break plants in a greenhouse or in a field? As long as you have the space, you can break thousands of them for science and no one minds. How about animals like fish and mice and monkeys? There it gets much more complicated very quickly. While we are happy to kill thousands of pigs every day for sausages, we feel much less comfortable doing the same for science. And it's not a bad thing when we try to reduce harm to animals. But while you absolutely can do repetitions and controls in animal testing, you usually are limited by the number of animals you can break for science. And then we come to human biology. If you thought it was hard doing lots of repetitions and controls in animals, try doing that in humans. You can't grow a human on a corn sugar based diet just to see what would happen. You can't grow humans in isolation and you can't breed humans to make more cancer as a control in your cancer experiment. So with anything that involves science and humans, we have to have very clever experiment design to control for all the things that we can't control. The other way to do science on humans, of course, is to be a genetic life form and disc operating system. What this scale tells us is how careful we have to be with conclusions from any of these research areas. We have to apply a much higher skepticism when looking at single studies on human food than when we study how hard a rock is. If I'm interested in stuff on the right end of the spectrum, I'd rather see a couple of studies pointing at the conclusion. 
Whereas the further I get to the left hand side, the more I trust single studies. That still doesn't mean that there can't be mistakes in particle physics, but I hope you get the idea. Back to the scientific method. Because it is circular, it is never done and so is science. We can always uncover more details, look at related things and refine our understanding. There's no field where we could ever say, okay, let's pack up, we know now everything, good job everyone. Science has been completely done. Everything in science can be potentially overturned. Nothing is set in stone. However, and it's a big however, it's not likely that this happens for most things. Most things have been shown so often that the chance that we will find out that water actually boils at 250 degrees centigrade at sea level at normal pressure is close to zero. But if researchers would be able to show that strange behavior of water, it is in the nature of science to include that result in our understanding, even if that breaks some other ideas that we have about the world. That is what sets science apart from dogma. New evidence is not frowned upon and rejected, but welcomed and integrated into our current understanding of the world. Enough about the scientific system. Let's talk about scientists. You might be surprised to hear, but most researchers are actually people. Other people who are not researchers tend to forget that, especially when they talk about the science that the researchers do. That goes both ways. There are some that believe in the absolute objective truth of science, ignoring all influence researchers have on the data. And there are others who say that science is lying about things like vaccinations, climate change or infectious diseases. Both groups are wrong. Researchers are not infallible demigods that eat nature and poop with them. They are also not conspiring to bring harm to society in search for personal gain. Trust me. I know people who work in pesticide research. They are as miserable as any other researcher. Researchers are people and so they have thoughts and ideas and wishes and biases and faults and good intentions. Most people don't want to do bad things and inflict harm on others and so do researchers. They aim to do good things and make lives of people better. The problem with researchers being people is that they are also flawed. We all have cognitive biases that shape the way we perceive and think about the world. And in science there is a whole list of biases that affect the way we gather data and draw conclusions from it. Luckily, there are ways to deal with most biases. We have to be aware of them, address them and change our behaviour to avoid them. What we can't do is deny their impact on research. Another issue is diversity. Whenever you put a group of similar people together, they will only come up with ideas that fit within their group. That's why it is a problem when only white men are dominating research leadership positions. Hold on, some of you might shout. These men are men of science. They are objective. They use the scientific method. We don't need diversity. We need smart people. To which I answer, here is a story for you. For more than 150 years, researchers believe that only male birds are singing. It fits the simple idea that male birds do all the mating rituals and stuff, so they must be the singers. Just like in humans, female birds were believed to just sit and listen while the men shout at each other. In the last 20 years, this idea was debunked. 
New research found that also female birds sing. So how did we miss that for so long? Another study on the studies found that during these 20 years that overturned the dogma of male singing birds, the researchers changed. Suddenly, more women took part in research. And research happened in more parts of the world. Previously, mostly men in the US, Canada, England and Germany were studying singing birds in their countries. As a result, they subconsciously introduced their own biases and ideas into the work, and so we believed for a long time that female birds keep their beaks shut. Only when the group of researchers diversified did we get new and better results. The male researchers didn't ignore the female songbirds out of bad faith. The men were shaped by their environment, but they didn't want to do bad things. They just happened to overlook something that someone with a different background would pick up on. What does this tell us about science? It tells us that science is influenced consciously or subconsciously by internal biases. When we talk about scientific results, we need to take that into account. Especially in studies regarding human behavior, we have to be very careful about experiment design, framing and interpretation of results. If you read about science that makes bold claims about the way we should work, interact or communicate in society, that science is prone to be shaped by bias and you should be very careful when drawing conclusions from it. I personally would rather wait for several studies pointing in a similar direction before I draw major conclusions. I'll link to a story about the publication about the influence of female mentors on career success that was criticized for a couple of these biases. If we want to understand science better, we also have to look at how someone becomes a scientist, and I mean that in the sense of a professional career. Technically, everybody is a scientist as soon as they test a hypothesis, observe the outcome and repeat, but unfortunately most of us are not paid for the tiny experiments during our day to day life. If you want to become a scientist, you usually start by entering academia. Academia is the world of universities, colleges and research institutes. There is a lot of science done outside of academia, like in research and development in industry or by individuals taking part in DIY science. As these groups rarely enter the spotlight of public attention, I will ignore them today. Sorry. So this is a typical STEM career path. You begin as a bachelor or master student. You work for something between three months and a year and then woohoo, you get a degree. From here you can leave, go into the industry, be a scientific researcher at a university or you continue your education. If you continue, you are most likely to do a PhD. But before you can select one of the exciting title options on a form when you order your food, you have to do research. For three to six years, depending on where you do your PhD, you work on a project and most likely will not have a great time. You finish with your degree and some publications. A lot of people leave now, but if you stay in research, you'll become a postdoc. The word postdoc comes from the word doc as in doctorate and post as in you have to post a lot of application letters to get a job. Postdocs do more research, often on broader topics. They supervise PhD students and are usually pretty knowledgeable about their research field. They work and write papers until one of two things happens.
The German Wissenschaftszeitvertragsgesetz bites them in the butt and they get no more contract, or they move on to become a group leader or professor. Being a professor is great. You have a permanent research position, you get to supervise and you get to talk to many cool other researchers. You probably know a lot by now, not only about your field, but also many other fields in your part of science, as you constantly go to conferences because they have good food and also people are talking about science. Downside is you're probably not doing any experiments yourself anymore. You have postdocs and PhD students who do that for you. If you want to go into science, please have a look at this. What looks like terrible city planning is actually terrible career planning, as less than one percent of PhDs will ever reach the level of professor, also known as the only stable job in science. That's also what happened to me, I left academia after my PhD. So what do we learn from all of this? Different stages of a research career correlate with different levels of expertise. If you read statements from a master's student or a professor, you can get an estimate for how much they know about their field and in turn for how solid their science is. Of course, this is just a rule of thumb. I have met both very knowledgeable master's students and professors who knew nothing apart from their own small world. So whenever you read statements from researchers, independent of their career stage, you should also wonder whether they represent a scientific consensus. Any individual scientist might have a particular hot take about something they care about, but in general they agree with their colleagues. When reading about science that relates to policies or public debates, it is a good idea to explore whether this particular researcher is representing their own opinion or the one of their peers. Don't ask the researcher directly though, every single one of them will say that of course they represent a majority opinion. The difference between science and screwing around is writing it down, as Adam Savage once said. Science without publications is pretty useless because if you keep all that knowledge to yourself, well congrats, you are very smart now, but that doesn't really help anyone but you. Any researcher's goal therefore is to get their findings publicly known so that others can extend the work and create scientific progress. So let's go back to my amazing bee research. I did the whole experiment again with proper controls this time and now I want to tell people about it. The simplest way to publish my findings would be to tweet about it, but then a random guy would probably tell me that I'm wrong and stupid and should go f**k myself. So instead I do what most researchers would do and go to a scientific conference. That's where researchers hang out, have a lot of coffee and sit and listen to talks from other researchers. Conferences are usually the first place that new information becomes public. Well, public is a bit of a stretch, usually the talks are not regularly recorded or made accessible to anyone who wasn't there at the time. So while the information is pretty trustworthy, it remains fairly inaccessible to others. After my conference talk, the next step is to write up all the details of my experiment and the results in a scientific paper. Before I send this to an editor at a scientific journal, I could publish it myself as a preprint.
These preprints are drafts of finished papers that are available for anyone to read. They are great because they provide easy access to information that is otherwise often behind paywalls; they are not so great because they have not yet been peer reviewed. If a preprint hasn't also been published with peer review, you have to be careful with what you read, as it is essentially only the point of view of the authors. Peer review only happens when you submit your paper to a journal. Journals are a whole thing and there have been some great talks in the past about why many of them are problematic. Let's ignore for a second how these massive enterprises collect money from everyone they get in contact with and let's focus instead on what they are doing for the academic system. I send in my paper, an editor sees if it's any good and then sends my paper to two to three reviewers. These are other researchers that then critically check everything I did and eventually recommend accepting or rejecting my paper. If it is accepted, the paper will be published. I pay a fee and the paper will be available online, often behind a paywall, unless I pay some more cash. At this point, I'd like to have a look at how a scientific paper works. There are five important parts to any paper. The title, the author list, the abstract, the figures and the text. The title is a summary of the main findings and unlike in popular media, it is much more descriptive. Where a newspaper leaves out the most important information to get people to read the article, the information is right there in the title of the study. In my case, that could be: Honeybees, Apis mellifera, show selective preference for flower color in Viola tricolor. You see, everything is right there. The organisms I worked with and the main result I found. Below the title stands the author list. As you might have guessed, the author list is a list of authors. Depending on the field the paper is from, the list can be ordered alphabetically or according to relative contribution. If it is contribution, then you usually find the first author to have done all the work, all the middle authors to have contributed some smaller parts and the last author to have paid for the whole thing. The last author is usually a group leader or professor. A good way to learn more about a research group and their work is to search for the last author's name. The abstract is a summary of the findings. Read this to get a general idea of what the researchers did and what they found. It is very dense in information, but it is usually written in a way that also researchers from other fields can understand at least some of it. The figures are pretty to look at and hold the key findings in most papers, and the text has the full story with all the details, all the jargon and all the references that the research is built on. You probably won't read the text unless you care a lot, so stick to title, abstract and authors to get a quick understanding of what's going on. Scientific papers reflect the peer-reviewed opinion of one or few research groups. If you are interested in a broader topic, like what insects like to pollinate what flower, you should read review papers. These are peer-reviewed summaries of a much broader scope, often weighing multiple points of view against each other. Review papers are a great resource that avoids some of the biases individual research groups might have about their topic. So my research is reviewed and published.
I can go back now and start counting butterflies, but this is not where the publishing of scientific results ends. My institute might think that my bee counting is not only not bad, it is actually amazing, and so they will issue a press release. Press releases often emphasize the positive parts of a study, while putting them into context of something that's relevant to most people. Something like: bees remain attracted to yellow flowers despite the climate crisis. The facts in a press release are usually correct, but shortcomings of a study that I mentioned in the paper are often missing from the press release. Because my bee study is really cool and because the PR department of my institute did a great job, journalists pick up on the story. The first ones are often journals with a focus on science, like Scientific American or Spektrum der Wissenschaft. Most of the time, science journalists do a great job in finding more sources and putting the results into context. They often ask other experts for their opinion and they break down the scientific language into simpler words. Science journalism is the source I recommend to most people when they want to learn about a field that they are not experts in. Because my bee story is freaking good, mainstream journalists are also reporting on it. They are often pressed for time and write for a much broader audience, so they just report the basic findings, often putting even more emphasis on why people should care. Usually climate change, personal health or now COVID. Mainstream press coverage is rarely as detailed as the previous reporting and has the strongest tendency to accidentally misrepresent facts or add framing that researchers wouldn't use. Oh, and then there is your weird uncle who posts a link to the article on their Facebook with a blurb of text that says the opposite of what the study actually did. As you might imagine, the process of getting scientific information out to the public quickly becomes a game of telephone. What is clearly written in the paper is framed positively in the press release and gets watered down even more once it reaches mainstream press. So for you, as someone who wants to understand the science, it is a good idea to be more careful the further you get away from the original source material. While specialized science journalism usually does a good job in breaking down the facts without distortion, the same can't be said for popular media. If you come across an interesting story, try to find another version of it in a different outlet, preferably one that is more catered to an audience with scientific interest. Of course, you can jump straight to the original paper, but understanding the scientific jargon can be hard and misunderstanding the message is easy, so it can do more harm than good. We see that phenomenon with hobbyists who are not epidemiologists, who are not people who study epidemics, and who are making up their own pandemic modeling. They are cherry picking bits of information from scientific papers without understanding the bigger picture and context and then post their own charts on Twitter. It's cool if you want to play with data in your free time and it's a fun way to learn more about the topic, but it can also be very misleading and harmful while dealing with a pandemic if expert studies have to fight for attention with non-expert Excel graphs. It pays off to think twice about whether you're actually helping by publishing your own take on a scientific question.
Before we end, I want to give you some practical advice on how to assess the credibility of a story and how to understand the science better. This is not an in-depth guide to fact-checking. I want you to get a sort of gut feeling about science. When I read scientific information, these are the questions that come to my mind. First up, I want you to ask yourself, is this plausible and does this follow the scientific consensus? If both answers are no, then you should carefully check the sources. More often than not, these results are outliers that somebody exaggerated to get news coverage, or someone is actively reframing scientific information for their own goals. To get a feeling about scientific consensus on things, it is a good idea to look for joint statements from research communities. Whenever an issue that is linked to current research comes up for public debate, there is usually a joint statement laying down the scientific opinion, signed by dozens or even hundreds of researchers, like, for example, from Scientists for Future. Then, whenever you see a big number, you should look for context. When you read statements like, we grow sugar beet on an area of over 400,000 hectares, you should immediately ask yourself, who is we? Is it Germany, Europe, the world? What is the time frame? Is that per year? Is that a lot? How much is that compared to other crops? Context matters a lot and often big numbers are used to impress you. In this case, 400,000 hectares is the yearly area that Germany grows sugar beet on. Wheat, for example, is grown on over 3 million hectares per year in Germany. Context matters, and so whenever you see a number, look for a frame of reference. If the article doesn't give you one, either go and look for yourself or ignore the number for your decision making based on the article. Numbers only work with framing, so be aware of it. I want you to think briefly about how you felt when I gave you that number of 400,000 hectares. Chances are that you felt a sort of feeling of unease because it's really hard to imagine such a large number. An interesting exercise is to create your own frame of reference. Collect a couple of numbers like the total agricultural area of your country, the current spending budget of your municipality, the average yearly income or the unemployment rate in relative and absolute numbers. Keep the list somewhere accessible and use it whenever you come across a big number that is hard to grasp. Are 100,000 Euro a lot of money in the context of public spending? How important are 5,000 jobs in the context of population and unemployment? Such a list can defuse the occasional scary big number in news articles and it can also help you to make your point better. Speaking of framing, always be aware of who the sender of the information is. News outlets rarely have a specific scientific agenda, but NGOs do. If Shell, the oil company, provided a leaflet where they cite scary numbers and present research that they funded that finds that oil drilling is actually good for the environment, but they won't disclose who they worked with for the study, we would all laugh at that information. But if we read a leaflet from an environmental NGO in Munich that is structurally identical but with a narrative about glyphosate and beer that fits our own perception of the world, we are more likely to accept the information in the leaflet. In my opinion, both sources are problematic and I would not use any of them to build my own opinion.
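As a small illustration of the "build your own frame of reference" advice (an editorial addition, not from the talk), here is a toy Python snippet that puts the two crop figures from the example next to a baseline. The 16.6 million hectares of total German farmland is an assumed ballpark value, not an authoritative statistic; look it up yourself and replace it with a number you trust.

```python
# Toy frame-of-reference helper: express big numbers as shares of a baseline
# you have checked yourself. The farmland total below is an assumed ballpark
# figure, not an authoritative statistic -- verify it before relying on it.
FARMLAND_DE_HA = 16_600_000  # assumed total agricultural area of Germany, in hectares

crop_areas_ha = {"sugar beet": 400_000, "wheat": 3_000_000}  # figures quoted in the talk

for crop, area in crop_areas_ha.items():
    share = 100 * area / FARMLAND_DE_HA
    print(f"{crop}: {area:,} ha, roughly {share:.1f}% of German farmland")
```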
Good journalists put links to the sources in or under the article and it is a good idea to check them. Often however you have to look for the paper yourself, based on hints in the text like author names, institutions and general topics. And then paywalls often block access to the information that you are looking for. You can try pages like ResearchGate for legal access to PDFs. Many researchers also use SciHub, but as the site provides illegal access to publicly funded research, I won't recommend doing so. When you have the paper in front of you, you can either read it completely, which is kind of hard, or just read the abstract, which might be easier. The easiest is to look for science journalism articles about the paper. Twitter is actually great to find those, as many researchers are on Twitter and like to share articles about their own research. They also like to discuss research on Twitter, so if the story is controversial, chances are you'll find some science accounts calling that out. While Twitter is terrible in many regards, it is a great tool to engage with the scientific community. You can also do a basic check up yourself. Where was the paper published and is it a known journal? Who are the people doing the research? And what are their affiliations? How did they do their experiment? Checking for controls and repetitions in the experiment is hard, if you don't know the topic, but if you do know the topic, go for it. In the end, fact checking takes time and energy. It's very likely that you won't do it very often, but especially when something comes up that really interests you and you want to tell people about it, you should do a basic fact check on the science. The world would be a lot better if you'd only share information that you checked yourself for plausibility. You can also help to reduce the need for rigorous fact checking. Simply do not spread any science stories that seem too good to be true and that you didn't check yourself or find an incredible source. Misinformation and bad science reporting spread, because we don't care enough and because they are very, very attractive. If we break that pattern, we can give reliable scientific information the attention that it deserves. But don't worry, most of the science reporting you'll find online is actually pretty good. There is no need to be extremely careful with every article you find. Still, I think it is better to have a natural alertness to badly reported science than to trust just anything that is posted under a catchy headline. There is no harm in double checking the facts, because either you correct a mistake or you reinforce correct information in your mind. So how do I assess whether a source that I like is actually good? When I come across a new outlet, I try to find some articles in an area that I know stuff about. For me, that's plant science. I then read what they're writing about plants. If that sounds plausible, I am tempted to also trust when they write about things like physics or climate change where I have much less expertise. This way, I have my own personal list of good and not so good outlets. If somebody on Twitter links to an article from the not so good list, I know that I have to take that information with a large quantity of salt and if I want to learn more, I look for a different source to back up any claims I find. It is tedious, but so is science. With a bit of practice, you can internalize the skepticism and navigate science information with much more confidence. 
I hope I could help you with that a little bit. So that was my attempt to help you to understand science better. I'd be glad if you'd leave me feedback or direct any of your questions towards me on Twitter. That's at ScienceJuram. There will be sources for the things I talked about and available somewhere around this video or on my website, juram.schwarzmann.de. Thank you for your attention. Goodbye. There you are. Thank you for your talk. Very entertaining and informative as well as I might say. We have a few questions from here at the commerce. It would be where's the signal, Andrew? I need my questions from the internet. All of them are from the internet. So I would go through the questions and you can elaborate on some of the points from your talk. So the first question, very good. The first question is, is there a difference between reviewed articles and meta studies? To my knowledge, there isn't really a categorical difference in terms of peer review, meta studies. So studies that integrate, especially in the medical field, you find that often they integrate a lot of studies and then summarize the findings again and try to put them in context of one another, which are incredibly useful studies for medical conclusion making. As I said in the talk, it's often very hard to do, for example, dietary studies and you want to have large numbers and you get that by combining several studies together. And usually these meta studies are also peer reviewed. So instead of actually doing the research and going and doing whatever experiments you want to do on humans, you instead collect all of the evidence others did and then you integrate it again, draw new conclusions from that and compare them and weigh them and say, okay, this study had these shortcomings, but we can take this part from this study and put it in context with this part from this other study. Because you make so much additional conclusion making on that, you then submit it again to a journal and it's again peer reviewed and then other researchers look at it and say, yeah, pretty much give their expertise on it and say whether or not it made sense what you concluded from all of these things. So a meta study when it's published in a scientific journal is also peer reviewed and also a very good credible source. And I would even say often meta studies are the studies that you really want to look for if you have a very specific scientific question that you as a sort of non-expert want to have answered because very often the individual studies, they are very focused on a specific detail of a bigger research question. But if you want to know, I don't know, dietary fiber, very good for me. There's probably not a single study that will have the answer, but there will be many studies that together point towards the answer and the meta study is a place where you can find that answer. Very good. Sounds like something to reinforce the research. Maybe a follow up question or it is a follow up question. Is there anything you can say in this regards about the reproducibility crisis in many fields such as medicine? Yeah, that's a very good point. I mean, that's something that I didn't mention at all in the talk because for pretty much like complexity reasons, because when you go into reproducibility, you run into all kinds of sort of complex additional problems. Because yeah, it is true that we often struggle with reproducing often. 
I actually don't have the numbers how often we fail, but there's this reproducibility crisis that's often mentioned that is this idea that when researchers take a paper that has whatever they studied and then other researchers try to recreate the study. And usually in a paper, there's also a material and methods section that details all of the things that they did is pretty much the instructions of the experiment and the results of the experiment are both in the same paper usually. And when they try to sort of re cook the recipe that somebody else did, there is a chance that they don't find the same thing. And we see that more and more often, especially with complex research questions. And that brings us to the idea that reproduction or reproducibility is an issue and that maybe we can't trust science as much or we have to be more careful. And it is true that we have to be more careful, but I wouldn't go as far and to be like in general like sort of distrustful of research. And that's why I'm also saying like in the medical field, you always want to have multiple studies pointing at something. You will always want to have multiple lines of evidence. Because if one group finds something and another group can't find it, like reproduce it, you end up in a place where you can't really say, did this work now? Like who did the mistake? The first group or the second group? Because also when you're reproducing a study, you can make mistakes. Or there can be factors that the initial research study didn't document in a way that it can be reproduced because they didn't care to write down the supply of some chemicals. And the chemicals were very important for the success of the experiment. And things like that happen. And so you don't know when you just have the initial study and the reproduction study and they have different outcome. But if you have then multiple studies that all look in a similar area and out of 10 studies, 8 or 7 point to a certain direction, you can then be more certain that this direction points towards the truth. In science, it's really hard to say, OK, this is now the objective truth. We found now the definitive answer to the question that we're looking at, especially in the medical field. So that's a very long way of saying it's complicated. Reproduction or reproducibility studies, they are very important. But I wouldn't be too worried to, what's the word here? I wouldn't be too worried that the lack of reproducibility breaks the entire scientific method because it's usually more complex and more issues at hand than just a simple recooking of another person's study. Yes, speaking of more publishing, so this is a follow up to the follow up. The internet asks, how can we deal with the publish or perish culture? Oh, yeah, if I knew that, I would write very smart blog posts and trying to convince people about that. I think, personally, we need to rethink where we do the funding because that's in the end where it comes down to. The issue that I really didn't go into much detail in the talk because also very complex. So science funding is usually defined by a decision making process. At one point, somebody decides who gets the money. And to get the money, they need a qualifier to decide. Like there is 10 research groups or 100 research groups that write a grant and say, hey, we need money because we want to do research. And they have to figure out what they have to decide. 
Like who gets it because they can't give money to everyone because we spend money in budgets on different things than just science. So the next best thing that they came up with was the idea to use papers, the number of papers that you have to get the measurement or the quality of papers that you have to get the measurement of whether you are deserving of the money. And you can see how that's problematic. It means that people who are early in their research career who don't have a lot of papers, they have a lower chance at getting the money. And that leads to this publisher-perish idea that if you don't publish your results and if you don't publish them in a very well-respected journal, then the funding agencies won't give you money. And so you perish and you can't really pursue your research career. And it's really a hard problem to solve because the decision about the funding is very much detached from the scientific world, from academia. There's multiple levels of abstraction between the people who in the end make the budgets and decide who gets the money and the people who are actually using the money. I would wish for a funding agency to look less at papers and maybe come up with different qualifiers, maybe also something like general scientific practice. Maybe they could do audits of some sort of labs. I mean, there's a ton of factors that influence good research that are not mentioned in papers, like work ethics, work culture, how much teaching you do, which can be very important, but is sort of detrimental to get more funding because when you do teaching, you don't do research and then you don't get papers and then you don't get money. So yeah, I don't have a very good solution to the question, what we can do. I would like to see more diverse funding also of smaller research groups. I would like to see more funding for negative results, which is another thing that we don't really value. So if you do an experiment and it doesn't work, you can't publish it. You don't get paper, you don't get money and so on. So there are many factors that need to change, many things that we need to touch to actually get away from publish or perish. Yeah, another question that is closely connected to that is why are there so few stable jobs in science? Yeah, that's the Wissenschaft-Zeitvertrags-Gesetz, something that I forgot when we got it. I think in the late 90s or early 2000s, that's at least a very German specific answer that defined that this Gazetz, this law put it into law that you have a limited time span that you can work in research. You can only work in research for I think 12 years and there's some like footnotes and stuff around it, but there's a fixed time limit that you can work in research on limited term contracts, but your funding, whenever you get research funding, it's always for a limited time. You always get research funding for three years, six years if you're lucky. So you never have permanent money in a research group. Sometimes you have that in universities, but overall you don't have permanent money and so if you don't have permanent money, you can't have permanent contracts and therefore there aren't really stable jobs. And then with professorships or some group leader positions, then it changes because group leaders and professorships, they are more easily planned and therefore in universities and research institutes, they sort of make a long-term budget and say, okay, we will have 15 research groups, so we have money in the long-term for 15 group leaders. 
But whoever is hired underneath these group leaders, this has much more fluctuation and is based on sort of short-term money and so there's no stable jobs there. At least that's in Germany. I know that for example in the UK and in France, they have earlier permanent position jobs. They have lecturers, for example, in the UK, where you can, without being a full professor that has like its own backpack of stuff that has to be done, you can already work at a university in the long-term in a permanent contract. So it's a very, it's a problem that we see across the world, but Germany has its own very specific problems introduced here that make it very unattractive to stay long-term in research in Germany. It's true, I concur. So the, coming to the people who do science mostly for fun and less for profit, this question is, can you write and publish a paper without a formal degree in the sciences assuming the research methods are sufficiently good? Yes, I think technically it is possible. It comes with some problems. Like first of all, it's not free. First of all, when you submit your paper to a journal, you pay money for it. I don't know exactly, but it ranges, I think, a safe assumption is between a thousand and five thousand dollars, depending on the journal way you submit to. Then very often it's like some formal problems that I've been recently co-authoring a paper and I'm not actively doing research anymore. I did something in my spare time, helped a friend of mine who's still doing research with some like basic stuff, but he was so nice to put me on the paper and then there's a form where it says like institute affiliation and I don't have an institute affiliation in that sense. So as I'm just a middle author in this paper, I was published there or hopefully if it gets accepted, I will be there as an independent researcher. But it might be that a journal has their own internal rules where they say we only accept people from institutions. So it's not really inherent in the scientific system that you have to be at an institution, but there are these doors, there are these pathways that are locked because somebody has to put in a form somewhere that what institution you affiliate with. And I know that some people who do like DIY science, so they do outside of academia that they need to have in academia partners that help them with the publishing and also to get access to certain things. I mean, in computer science, you don't need specific chemicals, but if you do anything like chemical engineering or biology or anything, often you only get access to the supplies when you are an academic institution. So I know that many people have sort of these partnerships, corporations with academia that allow them to actually do the research and then publish it as well. Because otherwise, if you're just doing it from your own bedroom, there might be a lot of barriers in your way that might be very hard to overcome. But I think if you're really, really dedicated, you can overcome them. Coming to the elephant in said bedroom, what can we do against the spread of false facts, 5G corona vaccines? So they are very, they get a lot of likes and are spread like a disease themselves. And it's very hard to counter, especially in person encounters, these arguments because apparently a lot of people don't are not that familiar with the scientific method. What's your take on that? Yeah, it's difficult. 
And I've read over the years now many different approaches ranging from not actually talking about facts, because often somebody who has a very predefined opinion on something, they know a lot of false facts that they have on their mind. And you as somebody talking to them often don't have all of the correct facts in your mind. I mean, who runs around with like a back full of climate facts and the back full of 5G facts and the back full of vaccine facts, all like in the same quantity and quality as the stuff that somebody who read stuff on Facebook has in their in their backpack in a sort of mental image of the world. So just arguing on the facts is very hard because people who follow these false ideas, they often they're very quick in making turns and they like throw a thing at you one after the other. And so it's really hard to just be like, but actually debunking fact one and then debunking the next wrong fact. So I've seen a paper where people try to do this sort of on a argumentative standpoint, they say, look, you you're drawing false conclusions, you say, because a, therefore B, but these two things aren't linked in a in a causal causal way. So you can't actually draw this conclusion in so sort of try to destroy their argument on a meta level instead on on the on the fact level. But also that is difficult. And usually people who are really devout followers of false facts, they are also not followers of reasons. So any reason based argument will just not work for them because they will deny it. I think what really helps is a lot of small scale action in terms of making scientific data so making science more accessible. And I mean, I'm a science communicator, so I'm heavily biased. I'm saying like, we need more science communication. We need more low level science communication. We need to have it freely accessible because all of the stuff that you read with the false facts, this is all freely available on on Facebook and so on. So we need to have a similar low level, low entry level for the correct facts. So for the real facts. And this is also it's hard to do. I mean, in the science communication field, there's also a lot of debate how we do that. So we do that over more presence on social media. Should we simplify more or are we then actually oversimplifying like where is the balance? How do we walk this line? So there's a lot of discussion and still ongoing learning about that. But I think in the end, it's that what we need. We need people to be able to just to find correct facts just as easily and understandable as they find the fake fake news and the fakes like we need science to be communicated clearly as a student Cheryl on Facebook as an image that is like that's I don't want to like repeat all of the wrong claims, but something that says something very wrong, but very persuasive. We need to be as persuasive with the correct facts. And I know that many people are doing that by now, especially on places like Instagram, you find more and more people or tick tock, you find more and more people doing very high quality, low level. And I mean that on a sort of jargon level, not on a sort of intellectual level. So very low barrier science communication. And I think this helps a lot. This helps more than very complicated sort of pages, debunking false facts. I mean, we also need these we also need these as references. But if we really want to combat the spread of fake news, we need to just be as accessible with the with the truth. 
A thing closely connected to that is how do we fine tune our bullshit detectors, since I guess people who are watching this talk have already started with the process of fine tuning their bullshit detectors, but when, for example, something very exciting and promising comes along, as an example CRISPR-Cas or something, how do we go forward to not be fooled by our own already tuned bullshit detectors and false conclusions? I think a main part of this is practice, just try to look for something that would break the story. Just not for every story that you read, that's a lot of work, but from time to time pick a story where you're like, oh, this is very exciting, and try to learn as much as you can about that one story. And by doing that, also learn about the process, how you drew the conclusions, and then compare your final image after you did all the research to the thing that you read in the beginning and see where there are things that are not coming together and where there are things that are the same, and then based on that, practice. And I know that's a lot of work. So that's sort of the high impact way of doing that, by just practicing and just actively doing the checkups. But the other way you can do this is find people whose opinion you trust on topics and follow them, follow them on podcasts and social media, on YouTube or wherever. And, especially in the beginning when you don't know them well, be very critical about them. It's easy to fall into a sort of trap here and follow somebody who actually doesn't know their stuff. And there are some people, I mean, in this community here, I'm not saying anything new if I say if you follow people like Minkorrekt, the Methodisch inkorrekt podcast, they are great for a very, I actually can't really pin down which scientific area, because in their podcast they're touching so many different things and they have a very high level understanding of how science works. So places like this are a good start to get a healthy dose of skepticism. Another rule of thumb that I can give is usually stories are not as exciting when you get down to the nitty gritty details. I'm a big fan of CRISPR, for example, but I don't believe that we can cure all diseases just now because we have CRISPR. There's a limited set of things we can do with it, even though we can do much more with it than we could when we didn't have it. But I'm not going around and thinking now we can create life at will because we have CRISPR, or we can fight any disease at will because we have CRISPR. So a good general rule of thumb is: just calm down, look at what's really in there, tone it down by like 20%, and then take that level of excitement with you instead of going around being scared or overly excited about a new technology or a new thing that's been found. Because we rarely do these massive jumps that we need to start to worry or get overexcited about something. Very good. So very last question. Which tools did you use to create these nice drawings? A lot of people won't like me for saying this because this will sound like a product promo. But I used an iPad with a pencil and I used an app to draw the things on there called Affinity Designer, because that works very well and also cross-device. So that's how I created all of the drawings. And I put them all together in Apple Motion and exported the whole thing in Apple Final Cut. So this now sounds like a sales pitch for all of these products.
But I can say, like, for me they work very well, but there are pretty much alternatives for everything along the way. I can say that because I'm also doing a lot of science communication with drawings for the Plants and Pipettes project that I'm part of. And I can say an iPad with a pencil and Affinity Designer gets you very far for high quality drawings with very easy access, because I'm in no way an artist. I'm very bad at this stuff. But I can hide all my shortcomings because I have an undo function on my iPad and because everything's in a vector drawing I can delete every stroke that I made, even if I realized like an hour later that this should not be there. I can reposition it and delete it. So vector files and the pencil and an undo function were my best friends in the creation of this video. Very good. There you are. Thank you very much for your talk and your very extensive Q&A. I think a lot of people are very happy with your work and are actually saying in the pad that you should continue to communicate science to the public. That's very good because that's my job. It's good that people like that. Thank you very much. So a round of applause and some very final announcements for this session. There will be the Herald News Show in the break. So stay tuned for that. And I would say if there are no further, no, we don't have any more time, sadly, but I guess people know how to connect to you and contact Joram if you want to know anything more. Thank you.
|
This year saw a major invasion of scientific work into the center of public attention. Scientific results came hot off the press and into the news cycles. For many people, this sudden impact of scientific language, culture and drawing of conclusions clashed with their everyday world view. In my talk, I want to prepare the audience for the next wave of scientific influx and help them to get a systemic understanding of academia. How do scientists work? What are the funding and career structures? How do we do science? And, most importantly, how do we do science better? When we talk about scientific results, we assume most of the time that everybody knows how the scientific system produced them. This assumption can be problematic. The communication of risk and uncertainty, for example, varies greatly between the scientific community and society. A misunderstanding of science, scientific language or scientific publishing can lead to very wrong ideas about how things work. In this talk, I want to give a beginner-friendly crash course on the scientific system. How does one get into science? Who pays for everything? What is scientific uncertainty? How does a scientific paper work? How can we address issues like bias and lack of diversity in science? I am an eager science communicator, who worked in the past in molecular biology research and has since left active research to communicate science to the public. My goal is to share my view on the inner workings of science with more people to help them to become more science literate – and ultimately to be able to critically assess communication from researchers, institutions and PR companies. At the end of my talk, I want the audience to have a better understanding not only about the results of research but also the research process. We can only understand and interpret scientific results if we understand how they came to be.
|
10.5446/52103 (DOI)
|
Yeah, and the next talk. I'm very proud to announce we have a speaker who is coming in from sunny California. And he's an attorney. He's working for Harvard. He's doing so many things and he's fighting for our digital rights. And I'm very happy to say hi. Welcome. Thank you. Yeah. And spot the surveillance is the topic. We will see what we haven't seen before. And I'm very happy that you're here and Kurt Opsahl, please let us know what's up. Thank you. Thank you. Hello, everybody. My name is Kurt Opsahl. I'm the deputy executive director and general counsel of the Electronic Frontier Foundation. I'm here to talk to you about observing police surveillance at protests. So why do we want to observe police at protests? Well, because protests are political expression. As the Council of Europe put it, the right of individuals to gather with other people and make their collective voice heard is fundamental to a properly functioning democracy. And this is a right which is protected by the European Convention on Human Rights and other international rights treaties. But surveillance can chill that right. And so knowing what technologies are used can help you understand the threats to your privacy and security, as well as provide tools to advocate for limits on police use of surveillance. Surveillance may chill people's right to express themselves on these public issues, just as analog surveillance historically has been used as a tool for oppression. Nowadays, policymakers and the public have to understand that the threat posed by emerging technologies is a danger to human rights. And they need to understand this to successfully defend human rights in the digital age. So journalists who are reporting on protests and these actions should know what surveillance is in use. Activists who are advocating for limitations on police use of surveillance need to know what surveillance is being used in order to advocate effectively. And legal observers may need to document the use of surveillance at protests in order to challenge police actions at the protest, or to challenge police policies after the protest with the footage they've obtained. So as we go through today, we're going to provide a lot of information about various types of surveillance technologies in use by police around the world. We're going to look at what these technologies look like, how they work, what kind of data they collect, and how they're used by police. And at the end, we'll have a few other resources available for those who want to dive a little bit deeper on the topic. So police surveillance technology is everywhere. It's on the police themselves, on their vehicles, on the roadways. It could be above you in the air. It's surrounding you in the environment. It can be in a lot of different places and you need to know where to look. On police officers themselves, you'll often find it in the form of either body worn cameras or additional devices that they're using, which are basically mobile biometric sensors. Body worn cameras are a technology that's come out and become more popular over the last decade or so. And originally it was something that was being used as a way to provide police accountability, to give a record of their interactions with the public. And maybe, for example, they could be used to show police brutality or maybe deter police brutality. But these are two-way streets. The devices are often used to surveil protesters and the footage may later be used to support arrests and charges.
For example, we have this NPR story where after a rally, weeks later, the police identified people through body cam footage and brought action against them for obstructing the roadway, which was part of the civil disobedience of the protest, based on finding them on the body cam footage. They can be worn in a variety of places. So if you're looking for body worn cameras, you've got to look in different places to see where they might be. So, a couple of places. You might see them on the head, a head mounted camera. So it might be on the glasses, on the side. It could be a lens right in the center. The center one is pretty hard to find, but the ones on the side, which might be part of the glasses or maybe a helmet that they're wearing, are generally pretty obvious. These ones, they're not particularly common, but they do happen. Shoulder mounted cameras are also a little bit less common, but they have an interesting feature. In this case, we're using the Warrior 360 from Blue Line Animations as an example. And it is a dome camera that looks in all directions, so 360 degrees off the officer's left shoulder. Most cameras, like a front facing camera, will capture only 180 degrees. Chest mounted cameras are the most common. These are being used very, very widely. We give some examples here from Amsterdam, Madelberg, and from West Midlands Police in the EU, or soon to be not in the EU in the case of Britain. And there are several main types. Axon, Wolfcom and Watchguard are very common. They operate in similar manners, though with some differences. And you can take a look at some of the examples that are available on those companies' web pages, where they will explain the products they offer, and see what matches up for your jurisdiction. Or you can also look for news articles. Oftentimes, there's a news article about when the first policy to bring in body worn cameras is introduced in a particular police department. There are also smartphone based cameras. And these are kind of the low end. It's basically just an Android cell phone using its internal camera with an app that does recording, placed in a pocket so the camera sits a little bit above the edge of the pocket and can see forward. But it's also a very subtle technique. If you weren't looking closely, it could be easily confused with someone just storing their phone in their pocket. It also might be clipped somewhere on their uniform. But if you see anything where the camera is facing outward and it's attached to the officer, there's a good chance that that is a body worn camera, or at least that app is in play. Last among the body worn cameras, we'll talk about the semi-obscured cameras. This is an example of a product called BodyWorn, from a company called Utility. And it is partially concealed. It basically looks like a button on someone's uniform that, if you're not looking closely, you might not notice. But if you look closely, it appears where you would actually not expect a button to be. It's slightly larger. It looks a little bit different. It looks like a camera if you look closely. But if you're looking at a distance and not particularly paying attention, you might not see it at all.
But we'll see that you can sometimes tell whether they're using a phone or whether they're using as a biometric scanner by the body language. So for example, if the police officer is holding up the phone, trying to capture someone's face, that is most likely because they have a capturing a photo, and they may be connecting that to a facial recognition application. And you also will see mobile fingerprinting. So here's an example in the United Kingdom. They have an app on the officer's phone combined with a fingerprint scanning device. And it takes the people's fingerprints and checks them against some databases. One is a database of everyone the police out of detain, putting it into the database, and then checking against it for new people. And the other one is a database for immigration to collect it at the border when someone comes into the UK. And this allows the police to do a very rapid check of their records on somebody in the field. Some of these devices are multimodal. They'll do both. They'll be able to do fingerprints and take photos for facial recognition. This here is the DataWorks Plus Evolution does both. And that can be convenient for the officers, but it's a little bit more dangerous to some of the rates. And some of the body warm cameras, this example Wolfcom, has a biometric capability built in facial recognition. So it can use its regular camera functions. And of course, all of them take the picture that picture could be uploaded to our database and facial recognition will be done later. But this one is designed to streamline that process. So I'll take a moment as an aside to talk about facial recognition in Europe. Per algorithm watch, the organization says that there are at least 11 police agencies in Europe who use facial recognition. I showed on the map here, the UK Court of Appeal found that automatic facial recognition technology used by the South Wales police was not lawful. However, elsewhere in the UK, they are still using it. The Metropolitan Police is doing in London is doing a live facial recognition throughout the city of London. And it contends that its situation is distinguishable from South Wales. So that doesn't apply to them. We'll see how that turns out. There's also been some pressure on the European Commission to put a ban in place or put restrictions on facial recognition. And in September, there was a quote from the commissioners saying that they were considering whether we need additional safeguards or whether we need to go further and not allow facial recognition in certain cases, in certain areas, or even temporarily, which is not a particularly strong statement, but it is a, at least they are considering the idea and something that one can advocate for in the United States. A number of jurisdictions at the local levels, cities have put restrictions on their police departments so they cannot use facial recognition. It's a growing movement. And while a national or international law that would limit police use of facial recognition would be the best for civil liberties, you can also start at your local level. All right, once we move beyond the police officers or cells, where else? Vehicles and roadways. And this can come up for the vehicles, roadways, both adjacent to the protests and within the protests themselves. So adjacent to the protests is looking at the exits and entrances to the protest areas. And they may use existing ALPR or place new ALPR or ANPR, automated number plate research called ALPR in the United States. 
These are cameras generally we pointed towards a roadway to where cars will be that are designed to take a picture, determine what the license plate, number plate is, optical character resolution. They will eventually, recognition that we're eventually able to see what it is, check the database, and find out who's registered for that car. And it can be uploaded to a central server for police to search, can add vehicles to a watch list. It is a very powerful tool because many people are using cars to get to and from protests. And even if they're going in a group, at least one member of the group would have the car. And it has been used to go after someone after a protest. So in this case, it was from a number years back, but a citizen in the UK went to a protest and was later pulled over because they had captured the license plate while on the protest, added to a database, and then used that to overload. So if there is a protest, the police might come in and use a portable number plate reader. So here's some examples of what they might look like, maybe they're on a tripod or on a trailer. And they can set these up basically anywhere. It would often be used at the entrance or exit to the zone in which the protest is expected to see who's coming in or who's coming out during the protest time period and try to capture the crowd through their license plates. It also is now becoming more and more common on police commas. You can see a couple of examples we have here. One shows it rather obvious. In the top one, that's a UK police car, and you can see the camera sticks out fairly obvious that they have a camera on the light bar. The lower one from the French police, less obvious. It looks like an ordinary light bar. You might be able to tell that it's a little bit different than some other ones because it has sort of a funny thing in the center, but it's a pretty subtle approach. So there's all kinds. They might also be mounted on the hood or the trunk and may be more or less obvious. But take a look at what the police car's behavior is. If they are driving, for example, slowly down the street next to all sorts of parked cars, it may be that they are doing gritting, a practice known as gritting, where they're looking for capturing every parked car's license plate in a particular zone. You're trying to run slow and steady in order to do that. And then there are the fixed number plate readers. These are often at traffic lights and intersections on the highways. Any sort of high speed toll road will have them. They also are used for other purposes like to establish fines, to check border crossings. They are very common fixtures on road ways. So when a protest happens in a zone that already has them, the police will be able to access that information and know who entered or exited that area to move around. All right. And then within the protest itself, they may be adding additional surveillance capacities. So in this example, we have a Santa Fe police department knew about a protest that was protesting a statue. And there's a question like maybe some people wouldn't take action and remove the statue. So in order to capture that through surveillance, they placed this trailer, which has a number of camera and audio capabilities, and just rolled it in right next to the statue to capture the protest action. And these cameras can come in a variety of forms. In this case, we're going to watch towers. 
Personal control cameras can be in, personnel controlling the cameras can be in the watch tower or they can be operating remotely. As you can see, they are using a scissor jack to raise it about that ban. The other one is an assembly. It's not easy for someone to get in and out of there. So it may have a person, but it's somewhat inconvenient to actually have a person inside these watch towers. But it's much more convenient to use their built-in surveillance capabilities and remotely observe the area around the watch tower with those cameras. And then there are also the pure surveillance units. This example here showing four cameras, raised pole, and just adding surveillance capability basically to anywhere. And some of the, some of them are much more complex, thermal imaging cameras. Thermal imaging often comes from the leading company is FLIR, that stands for Forward Looking Infrared. FLIR system makes a lot of these devices and makes them available. Police departments, thermal imaging cameras allow the police to be able to conduct surveillance after dark, where the lighting is poor, where they might not be able to identify individuals very easily. Instead, they can use their heat signature and be able to continue to monitor the protests when the lighting conditions are less. And a lot of things not a protest will happen at night. Candlelight and vigils are very common place. So police will be looking to thermal imaging to make sure that they have strong surveillance capabilities after dark. Another thing you might see around a protest is an emergency command vehicle. These are often massive bus-sized vehicles and they do have some surveillance capabilities. They might have some cameras, but more often they are commanded control. So they are the places where somebody would be receiving footage from cameras and operating cameras remotely, making communications with other people in the field. Though they also may have some built-in capabilities and they may provide the focal point where the local connection, they're getting information from local devices and then they have the uplink in the command center. One thing I wanted to point out is a common misconception or something that comes up a lot when people are concerned about police surveillance is they'll see an unmarked vehicle of man with no windows. It may even have some antennas or satellites. And while that is possibly an undercover police vehicle, you shouldn't assume that that vehicle belongs to law enforcement. That could very easily be a news media vehicle. News media also goes to protests. They also have satellite uplinks and antennas. They look very similar. And in some cases, the media has a security situation. They're worried that there may be theft of equipment and they have unmarked vans. So it is worth noting that there is an unmarked vehicle, but you shouldn't necessarily assume that it is a police unmarked vehicle. Also, sometimes people see, especially they see, some antennas or satellite dish on a vehicle, that maybe that's where a stingray is. This is a misconception. Stingrays are pretty small and they don't require an external antenna to operate. You could put a stingray inside a trunk of a car, maybe about briefcase sized. So it would be unlikely that if you're going to use a stingray or a cell-side simulator or MC catcher that you wouldn't want to put it in a vehicle, you don't need to put it in a vehicle that has its own antenna. 
There has not been very much documentation of these technologies being used in U.S. domestic protests. They have been used, we know, in some protests in more authoritarian countries, so it's unclear how often they are being used. And they are very dangerous things. Simply put, an IMSI catcher is able to determine what cell phones are nearby, get a unique identifier from each cell phone, and in many cases use that information to determine which individuals are present at the protest. And that information has been used after some protests in Ukraine, for example, to send a text message to people along the lines of, remember, you know, we're on to you, we know you were there, which can be very intimidating to individuals. But the challenge is, if you're trying to observe this kind of police surveillance, it is hard for you to observe it, because these devices are often hidden. You may be able to find out more information later through investigative journalism, public records, or news reports. If somebody is prosecuted using that information, it may become obvious, but it's difficult to see at the protest itself. So, next category: looking up in the sky. There are lots of forms of aerial surveillance. Law enforcement agencies will surveil protests from above using traditional aircraft with onboard pilots, as well as smaller remotely operated aerial systems, drones. Law enforcement may also use these aircraft to communicate with the crowd, using loudspeakers to send a message to the crowd or order them to disperse. And we've actually seen this, drones with loudspeakers being used by the German police to tell people to stay apart during Corona; that same technology can be used at protests. And these planes and drones will often be equipped with high-definition cameras, capable of either an extremely wide angle to get the whole scene or an extreme zoom, where they might be able to zoom in on a particular person or a particular license plate, and then use that data later, pairing the aircraft with license plate recognition, face recognition, video analytics, and even a cell-site simulator inside the aircraft. And we know this has happened: at a recent protest in Texas, a Texas police drone caught some footage of a protester allegedly throwing a water bottle. They took that video, took the picture, put it out, offered a cash reward, an anonymous tip turned the kid in, and the protester was prosecuted. So police are definitely using these things to gather information at protests. A common method, especially for larger police departments, is fixed-wing aircraft. Smaller departments may use private contractors to provide these fixed-wing craft. So this is an example of the kind of plane used by a company called Persistent Surveillance Systems, which rents out planes like this. Not this exact one; if you look up that tail number, it will come back to a different company, but it's the same model of plane, a Cessna 207. And these will circle around the protest using their cameras to observe the protesters below. An advantage of planes is they can often circle for quite a long time and provide a wide view of the area. Also helicopters. Helicopters will often be seen hovering over a protest. They are a little bit easier to maneuver, able to go backwards and forwards over the protest, and are used by police to continually observe. And we have two examples here, one of them from the Oakland Police Department, the other one from the Rhineland police.
In both cases, they have a flur attached to the helicopter, a forward-looking infrared that would allow them, in addition to regular capabilities, to use thermal imaging to follow someone at a protest or follow what's going on after dark. You can also see that some helicopters will have spotlights so that they can signal to officers on the ground who to follow, who to pay attention to. And another thing for both fixed wing and helicopters. You look for the tail number. In most jurisdictions, they're required to have tail number visible. And then you can look up that tail number on services like Flight Aware and be able to find out further information about what that plane has been doing, what the helicopter has been doing, as well as the ownership. Finally, drones. Drones are becoming very commonplace because they're getting cheaper all the time and have many additional capacities. Drones are also known as unmanned aerial vehicles, UAVs, UAS, unmanned aerial systems. And a lot of police departments are getting them for their capabilities using most commonly a quad rotor. And they can be controlled by remote control, have a camera built into this, and be useful for getting above the scene view. So one way to spot it, well, first of all, just listen to it. They make kind of a distinctive noise. Sometimes they'll be marked as police. You also look for the pilots operating nearby. So oftentimes a drone, well, first of all, sometimes they're labeled, like in the upper left there, it says police drone operating. Pretty easy to identify. Other times they might have like drone, UAV, aviation unit on their uniform or a nearby police vehicle. The other thing is that if you identify a drone, they're often within line of sight, is going to be the operator. So once you see the drone, look around and see if someone has the remote controls on their hand, is looking up at the drone, you can probably identify the operator and you'll look for information they might have on their uniform about who is operating that particular drone. But also keep in mind, both for drones and other aircraft, that it's not necessarily the police. Journalists and activists will often fly drones over protests. News helicopters for a large protest are going to be more common than police helicopters. And many times they are labeled, which is a picture of the BBC News Copter. But this means that just as you see a helicopter that has both a camera and is flying over the protest, that does not necessarily mean that it's a police helicopter. Also another technology, which it's actually not very commonplace outside of protests in war zones, but the drone killer technology, which is basically a ray gun that knocks drones out of the sky, sending radio signals to interfere with the drone's operation and cause it to fall and crash. These have been used in Iraq and Afghanistan and the technology could be starting to be used, but we really haven't seen it used more frequently. I'm sure I'll tell you about it because, oh my god, drone cameras. All right, last place to look for police technology in the environment around you. There will be, in many places, camera networks. So a lot of the cameras that you'll see in a neighborhood will have, will be private cameras, will be police cameras, will be cameras being used by city non-police agencies. There can be a lot of cameras. This also means that you're trying to observe what cameras are going on. There's going to be too much information. 
There'll be so many cameras in many areas that you can spend all your time documenting and observing the cameras and this other things. So you might not want to spend all your time paying attention to that because you can go back later at any point and see the fixed cameras. But there are a couple of things that I'm getting one more. So first, identifying them. There are too many different brands to identify, but here's some information about the kinds of cameras that are available. Bullet cameras are directional so you can sort of see which way it's pointing and what it will be covering from that. Then you have dome cameras which are designed so you can't see which way it's pointing or at least you can see maybe somewhere in this area, 180 degrees, but the exact direction it's pointing is obscured by the dome. Pan-tilt zoom cameras can change which way they're pointing. They can sometimes be coupled with a dome camera so that the dome camera can both change the way it's looking and obscure which way it's looking. Thermal imaging cameras and ALPR cameras are also going to be common at fixed locations. ALPR, a lot of what we do with traffic control. Thermal is actually not as common and is mostly used as a technology that is on vehicles, is kind of expensive, but in this case, the picture shown is a thermal imaging camera so sometimes people will go to that additional expense. One subcategory of all the cameras that are in the environment are going to be police observation devices. This is the category of sets of sensors which are operated by the police and they may include multiple cameras, gunshot detection, facial recognition. For example, in the United Kingdom, just said the city of London is doing live facial recognition. Police observation devices are a collection of these cameras in one location. Sometimes they're marked as police, sometimes they are not. The way you would suspect that it's a police observation device is if it has a lot of different sensors in one location trying to cover the whole ground around, then that is the kind of thing you would see most frequently from a police observation. Finally, smart street lights. Now, smart street lights have a number of wonderful applications. Some initiatives like in the US, smart cities, in the EU, the E Street initiative are imploring cities to use smart street lights because they can turn down the power usage when the light is less needed. There are some advanced ones. A project by the Ardholt University of Applied Sciences has a technology which will use motion detection, sound detection, being able to tell if there are people walking nearby and brighten their path for them. Sounds great, but the same kinds of technology being able to detect motion, being able to have audio signals, video signals can be used for surveillance. So here on the slide we show the smart lighting capabilities being advertised by Intel. In addition to some things that you might expect like being able to adjust it for traffic patterns, provide better lights, they talk about other things, crime investigations, monitor parking violations, safety announcements that are coming from the smart cameras. So all of these technologies are possible and hopefully this will not become a commonplace use, but if it is, it would mean that a surveillance device is everywhere along every street when they're putting these devices in. If you're blanking a city, you're blanking a city with surveillance. So has it been used? Yes. 
The city of San Diego had a number of protests surrounding the protests around George Floyd and they used at least 35 times. They searched the information gathered through the smart streetlight network for evidence and criminal cases coming out of that protest. So what additional resources are there? There's plenty of additional resources if you want to drive in more and I encourage you to take this only as a starting point. There's a lot more to learn. So we'll start out with a very important resource. If you're somebody who's going to go, whether as an activist, as a protester, as a journalist, you should prepare yourself for some surveillance, self-defense. ssd.eff.org. We have an attending a protest guide. You can go there and learn important tips when protecting yourself when you're going to the protest. Put your device with full disc encryption, a strong, unique password, turning off the biometric unlock, use end-to-end encryption for messages and calls, walking, or taking a bicycle to get to the protest instead of a vehicle which can be subject to an ALPR and PR device. You wear a mask. You should wear a mask for COVID anyway, but if you're going to wear a mask, get a bigger, the larger the mask, the more it protects you. There's also recently a study that showed that they're making efforts to try and make facial brick recognition continue on despite people's uses of masks. There's a study that showed that red and black masks were harder for the AI to be able to determine who was behind the mask. So wear a red and black mask. To get one that covers more of your face like a bandana, it's going to be harder for the facial recognition conference. Do some things to protect yourself both from COVID and from surveillance. If you want to also just try and practice it out, you can go to our spot the surveillance. This is an online program. You can use a desktop version or virtual reality version where it places you in a virtual street corner with some surveillance devices nearby and you can look around and try to identify all the surveillance devices that you see. It takes just a few minutes to go through the exercise, but it's a good way to practice your skills and identify what surveillance might be around on the street. If you want to get a lot more information about any of these devices, go to EFF's street level surveillance project, EFF.org, street level surveillance. This will provide more detailed information about various technologies that are in use. That can be a good starting point, especially if you have found out what is being used in your jurisdiction. You can go there and find out more about it. You can also find out just what is going on more generally with these kinds of technologies. EFF.org slash SLS. Thank you that comes to the close of my talk. Thank you for tuning in. Now let me turn it over to my future self for Q&A. Welcome back. Thanks. Thanks so much, Kurt. We have some time for questions and it's getting more and more. I'm just tearing up. Okay. Are there devices, apps, or services developed or run by private companies? And who makes sure the data is not directly sold to third parties? So, yes, there are private networks. I mean, one of the things we talked about just now is there's a lot of private camera networks that are providing information to the police. 
Sometimes private networks go into a registry where police are organized as people volunteer, put their information into a registry so they are explicitly saying they're going to turn over their information to the police. Other things like Amazon's Ring camera, they have been promoting it as an anti-theft tool, trying to stop packaged theft people's doors. But this also is creating an opticon of everyone's doorbell camera. If they're all using Ring, we'll get the video and we'll provide it to the police. And many of these organizations, if they're a larger one, they will have some privacy practices, policies. But by and large, they will talk about the privacy of the person who owns the KS Smart device and not really consider the bystanders, the people walking by. So, you have a doorbell camera at your front door that can hear audio, so maybe someone can ring your bell and say hello. It will also capture people walking by. And those people's privacy is important and should be considered. All right. Then we have, what help do we have against all this? Which best case legal countermeasures do we have when attending protests? And another one which I would connect directly, is it possible to intervene against surveillance based on laws or presumption of innocence? I don't know if German laws are meant, but maybe you still can say something. Well, I mean, so there's many different laws that might be an issue. I mean, we have an international audience here. But I think there are some principles of basic human rights principles that apply for many jurisdictions. But I would say actually one of the most effective tools to push back against this kind of police surveillance is working locally with the, like a city, the mayor, the city council, and a number of locations have passed rules about what their police can do against their citizens. So putting limitations on what police can do at the local level where your activism in the city, which you live, taking things to your representative government and saying, we need to have some limitations on this. We need to have it in civilian controls where the police themselves are not deciding what technologies to use, but it has to pass through an elected representative. And I think that is probably one of the most effective ways to at least start change where you live. But you can also try and promote that to national legislature, state legislatures, go up several levels. And one of the things that we hope comes out of this guide, people getting more information on what kind of surveillance is available, is so that they can go to their representatives, go through the political process with the information of what tools to use. Ah, I saw you use by damp, they have drone flying above, go to your representative and say, we need to make sure that the information that they're gathered is being used in a manner consistent with human rights principles. And we need civilian control from the local government on how to do it. Who is controlling the control of the stencils? Yes. Okay, we have more questions. Okay. So the police operate equipment like a PR reader, in the catchers, etc. that get information that they could get in a cheaper way, like reading traffic signs or license plays or cell info from operators. Is there a reason for that? Especially concerning EU, because US differs a lot. And another question, has police in EU, US been known to use illegal or questionable tax for surveillance? 
So I think I'll hurt the first question about, you know, using things like ANPR to determine license plates. This technology is common in the European Union, though by and large it is being put in place for other reasons, not to get after protesters necessarily. They are looking for, you know, making sure that people are paying a toll or maybe a speed trap on the Autobahn, where it takes a picture of the license plate of anyone traveling over a speed limit in the places that have speed limits, only partly on the Autobahn. And I think also it's being used for enforcement of things like traffic citations, your car is parked in a location too long, they know who to send the bill to. And I think these technologies could be repurposed for surveillance. And that's what we really need is policies that are ensuring that if these things are being used for a purpose that the sort of the citizenship agrees with in that jurisdiction to enforce parking, for example, that it's not also being repurposed against political activities and being used at a wider scale than it was envisioned. Also, maybe, you know, it's not a good thing to have perfect parking enforcement. You know, a lot of parking fines were based on the notion that like you might not get caught every time. And when you change it where a system where previously the fines were set with the notion that a lot of people would get away with it, you had to like make an example of those who didn't. And then you change that to perfect enforcement because the computer, the ANPR system surveillance knows exactly the minute that a fine is due and then assesses that fine. That actually changes the dynamic of power between the citizenship and the state significantly. And it will often be phrased in forms of, well, we're just trying to enforce the existing laws. How would you be against that? But really, it changes the dynamic. And it's something that for those who want to be an activist on this, again, talk to your local jurisdiction and try and make sure that these things have safe and sane policies that respect human rights. So I would interpret that like prevention of, don't come into the idea that you need to protect your data, right? Yeah. And just turning to the other one, you know, do we have information about whether police are misusing these technologies? So I mean, there's some isolated examples where people have misused their technologies. And I used a couple of them in the slide. So there was someone who went to a political protest, their car was put into a database they got pulled over later. And then also in South Wales, the court found that there the police use of facial recognition was in violation of UK law, though as I noted, not, you know, the Metropolitan Police in London don't agree with that. They say it doesn't apply to them. And I think actually use of facial recognition technologies is a very tempting thing by the police. They want to use it as much as possible, make it easy for them. And I think you will see that. But the other piece of this is unless there are rules that say here are limitations on how you can use these technologies, then they can use them without having to risk violating them. So we need to have those rules in place. I hope that the Council of Europe puts a at least a moratorium on facial recognition for use for police. And, you know, until we can figure out how to use this technology safely, like it's kind of cool that you can unlock your phone or your face without having to type in a password. 
But we want to make sure that technology is used properly. Okay. So I think you're going to be around in the 2D world. You're going to explore that, you told me before. Yeah, there are more questions. Maybe you can find him in the 2D world and just ask there. Thanks so much. Thanks so much. It was nice having you. Bye, Kurt.
|
The Electronic Frontier Foundation's Kurt Opsahl will show you how to identify surveillance technologies that law enforcement may use at protests and other public gatherings to spy on people exercising their fundamental rights. Learn how to spot the surveillance so you can advocate effectively for the policies necessary to protect your rights and bring transparency to police surveillance. If you attend a protest, demonstration or any mass gathering in a public space, the police are probably surveilling you. Whether it's sophisticated facial recognition, ubiquitous video recording, or the instant analysis of our biometric data, law enforcement agencies are following closely behind their counterparts in the military and intelligence services in acquiring privacy-invasive technologies, from automated license plate readers to body-worn cameras to drones and more. In this talk, Kurt Opsahl will show you how to identify surveillance technologies in use: • Where to look for these devices • How these technologies look • How these technologies function • How they are used by police • What kind of data they collect • Where to learn more about them Knowledge is power. Knowing what technologies are in use can help you understand the threats to your privacy and security, as well as the tools to advocate for limits on police use of surveillance that may chill people's rights to express themselves on public issues. Just as analog surveillance historically has been used as a tool for oppression, policymakers and the public must understand the threat posed by emerging technologies to successfully defend human rights in the digital age.
|
10.5446/52105 (DOI)
|
Our next speaker, Alisa Esage, is an independent vulnerability researcher and has a notable record of security research achievements, such as the Zero Day Initiative Silver Bounty Hunter Award 2018. Alisa is going to present her latest research on the Qualcomm Diag protocol, which is found abundantly in Qualcomm Hexagon-based cellular modems. Alisa, we're looking forward to your talk now. This is Alisa Esage. You're attending my presentation about Hexagon Diag at the Chaos Communication Congress 2020 remote experience. My main interest as an advanced vulnerability researcher is complex systems and hardened systems. For the last 10 years I have been researching various classes of software such as the Windows kernel, browsers and JavaScript engines, and for the last three years I was focusing mostly on hypervisors. The project that I'm presenting today was a little side project that I made for distraction a couple of years ago. The name of this talk, Advanced Hexagon Diag, is a bit of an understatement, in an attempt to keep this talk a little bit low-key on the general internet, because a big part of the talk will actually be devoted to general vulnerability research in basebands. But the primary focus of this talk is on the Hexagon Diag, also known as QCDM, the Qualcomm Diagnostic Manager. This is a proprietary protocol developed by Qualcomm for use in their basebands, and it is included on all Snapdragon SoCs and modem chips produced by Qualcomm. Modern Qualcomm chips run on custom silicon with a custom instruction set architecture named QDSP6 Hexagon. This is important because all the Diag handlers that we will be dealing with are written in this instruction set architecture. As usual with my talks, I have adjusted the materials of this presentation for the full spectrum of audiences. Specifically, the first part of the presentation is mostly aimed at research directors and high-level technical staff, and the last part is more deeply technical and would be mostly interesting to specialized vulnerability researchers and low-level programmers who are somehow related to this particular area. Let's start from the top-level overview of cellular technology. This mind map presents a simplified view of the various types of entities that we have to deal with with respect to basebands. It's not a complete diagram, of course; it only presents the classes of entities that exist in this space. Also, this mind map is specific to the client-side equipment, the user equipment, and it completely omits any server-side considerations, which are a realm of their own. There exists quite a large number of cellular protocols on the planet. From the user perspective this is simple: it is usually the short name, 3G or 4G, that you see on the mobile screen. But in reality this simple generation name may encode several distinct technologies. There are a few key points about cellular protocols that are crucial to understand before starting to approach this area. The first one is the concept of a generation. This is simple: it is just 1G, 2G and so on, the generic name of a family of protocols that are supported at a particular generation. Generation is simply a marketing name for users. It doesn't really have any technical meaning, and generations represent the evolution of cellular protocols in time. The second most important thing about cellular protocols is the air interface.
This is the lowest-level protocol, which defines how exactly the cellular signal is digitized and read from the electromagnetic wave, and how exactly different players in this field divide the spectrum. Historically there existed two main implementations of this low-level protocol, TDMA and CDMA. TDMA means time division multiple access, which basically divides the radio spectrum within the band into time slots that are rotated in a round-robin manner by the various mobile phones, so that they speak in turns. TDMA was the basis for the GSM technology, and GSM was the main protocol used on this planet for a long time. The other low-level implementation is CDMA. It was a little bit more complex from the beginning. It stands for code division multiple access, and instead of dividing the spectrum into time slots and dividing the protocol into bursts, CDMA uses pseudo-random codes that are assigned to mobile phones, so that this code can be used as an additional randomizing mask on top of the modulation, and multiple user equipments can talk on the same frequency without interrupting each other. Notable here is that CDMA was developed by Qualcomm, and it was mostly used in the United States. So at the level of 2G there were two main protocols: GSM, based on TDMA, and cdmaOne, based on CDMA. In the third generation of mobile protocols these two branches of development were continued: GSM evolved into UMTS, while cdmaOne evolved into CDMA2000. The important point here is that UMTS had at this point already adopted the low-level air interface approach from CDMA, and eventually, at the fourth generation of protocols, these two branches of development came together to create the LTE technology, and the same goes for 5G. This is a bit important for us from the offensive perspective, because first of all, all these technologies, including the air interfaces, represent separate bits of code with separate parsing algorithms within the baseband firmware, and all of them are usually present in each baseband regardless of which one you actually use or which ones your mobile provider actually supports. Another important and non-obvious thing from the offensive security perspective here is that, because of this evolutionary development, the protocols are not actually completely distinct. So if you think about LTE, it is not a completely different protocol from GSM; instead it is based largely on the same internal structures, and in fact, if you look at the specifications, some of the GSM 2G specifications are still directly relevant, to some extent, to LTE. This is also important when you start analyzing the protocols from the offensive perspective. The cellular protocols are structured in a nested way, in layers. Layers is the official terminology adopted by the specifications. With the exception of level 0, which I just added for convenience, the layers in the specifications start from 1 and proceed to 3. From the offensive perspective the most interesting is layer 3, as you can see from the screenshot of the specifications, because it encodes most of the high-level protocol data, such as the handling of SMS and GSM signalling. This is the part of the protocol which actually contains interesting data structures, with TLV values and so on.
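To make the layer 3 point a bit more concrete before moving on to the attack surfaces: the first octet of a layer 3 message carries the protocol discriminator in its low nibble and a transaction identifier or skip indicator in its high nibble, the second octet is the message type, and everything after that is the TLV-style information elements that baseband parsers have to chew through. Here is a minimal sketch in Python, with discriminator values taken from the 3GPP specifications; the example message bytes are made up purely for illustration.

```python
#!/usr/bin/env python3
"""Decode the fixed two-octet header of a GSM/LTE NAS layer 3 message.
Protocol discriminator values are a subset of 3GPP TS 24.007/24.008;
the sample bytes below are purely illustrative."""

PROTOCOL_DISCRIMINATORS = {
    0x3: "call control / call-related SS",
    0x5: "mobility management",
    0x6: "radio resource management",
    0x7: "EPS mobility management (LTE)",
    0x9: "SMS",
    0xB: "non-call-related SS",
}

def parse_l3_header(msg: bytes):
    pd = msg[0] & 0x0F                  # low nibble: protocol discriminator
    ti_or_skip = (msg[0] >> 4) & 0x0F   # high nibble: TI or skip indicator
    msg_type = msg[1]                   # second octet: message type
    return pd, ti_or_skip, msg_type     # the rest of msg is TLV-encoded IEs

if __name__ == "__main__":
    pd, ti, mtype = parse_l3_header(bytes([0x05, 0x08, 0x11, 0x22]))
    print(PROTOCOL_DISCRIMINATORS.get(pd, "unknown"), "message type", hex(mtype))
```

Everything beyond those two fixed octets is where the parsing complexity, and therefore most of the over-the-air attack surface, tends to live.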
The OTA attack vector which is definitely one of the most interesting but let's take a step back and consider the entire big picture of the basement ecosystem. This diagram presents a unified view of generalized architecture of a modern basement with attack surfaces. First of all there are two separate distant processors the AP application processor and the MP which is mobile processor it may be an either a DSP or another CPU. Usually there are two separate processors and each one of them runs as separate operating system. In case of the AP it may be Android or iOS and the basement processor would run some sort of a real-time operating system provided by the mobile vendor. Important point here that on modern implementations basements are usually protected by some sort of secure execution environment maybe trust on androids or sepos on Apple devices which means that the privileged boundary which is depicted here on the left side is dual sided so even if you have kernel access to the Android kernel you still are not supposed to be able to read the memory of the basement or somehow intersect with this operation at least on the modern production smartphones and the same goes around to the basement which is not supposed to be able to access to application processor directly. So these two are mutually distrusting entities that are separated from each other and so there exists a privileged boundary which is which represents an attack attack surface. Within the real-time operating systems there are three large attack surfaces starting from right to left. The rightmost gray box represents the attack surface of the cellular stacks. This is the code which actually parses the cellular protocols. It usually runs in several real-time operating system tasks. And this part of the attack surface handles all the layers of the protocol. There is a huge amount of parsing that happens here. The second box represents various management protocols. The simplest one to think about is the AT command protocol. It is still widely included in all basements and it's even usually exposed in some way to the application processor so you can actually send some AT commands to the cellular madam. A bit more interesting is the vendor-specific management protocols. One of them is the DIAP protocol because the basements, modern basements are very complex so vendors need some sort of specialized protocol to enable configuration and diagnostics for the OMS. In case of Qualcomm for example, DIAP is just one of the many diagnostic protocols involved. The third box is what I call the RTO score. It is various core-level functionality such as the code which implements the interface to the application processor. On the side of the application operating system such as Android, there are also two attack surfaces that are attackable from the baseband. The first one is the peripheral drivers because the baseband is a separate hardware peripheral so it requires some specialized drivers that handle IO and such things. The second one is the attack surface represented with various interface handlers because the baseband and the main operating system cannot communicate directly. They use some sort of a specialized interface to do that. In case of Qualcomm, this is shared memory and so these shared memory implementations are usually quite complex and they represent an attack surface on both sides. Finally, the third piece of this diagram is in the lowest part. 
I have depicted two gray boxes which are related to the trusted execution environment, because typically the modem runs under the protection of the trust zone. So technically, the attack surfaces that exist within the trust zone, or are related to it, can also be useful for baseband offensive research. Here we can distinguish at least two large attack surfaces. The first one is the secure monitor call handlers, which are the core interface that handles calls from the application processor to the trust zone. The second one is the trustlets, the separate pieces of code which are executed and protected by the trust zone. On this diagram I have also added some information about data codecs. I'm not sure if they are supposed to be in the RTOS core, because these things are usually directly accessible from the cellular stacks, especially ASN.1, where I have seen some bugs reachable from the over-the-air interface. On this diagram I have also shown some examples of vulnerabilities. I will not discuss them in detail here, since that's not the point of the presentation, but at least for the ones from Pwn2Own you can find the write-ups on the internet. To discuss baseband offensive tools and approaches, I have narrowed down the previous diagram to just one attack surface, the over-the-air attack surface. This is the attack surface which is represented by the parsing implementations of the various cellular protocols inside the baseband operating system, and this is the attack surface that we can reach from the air interface. In order to accomplish that, we need transceivers, such as a software-defined radio or a mobile tester, which are able to talk the specific cellular protocol that we're planning to attack. The simplest way to accomplish this is to use some sort of software-defined radio, such as an Ettus Research USRP or a BladeRF, and install an open-source implementation of a base station such as OpenBTS or OpenBSC. The thing to note here is that the software-based implementations actually lag behind the development of the technologies. Implementations of GSM base stations are very well established and popular, such as OpenBTS, and in fact when I tried to set up a BTS with my USRP it was quite simple. For UMTS and LTE there exists a smaller number of software-based implementations, and there are also more constraints on the hardware. For example, my model of the USRP does not support UMTS due to resource constraints, and the most interesting thing here is that there does not exist any software-based implementation of CDMA that you can use to set up a base station. This is a pseudo-randomly chosen diagram of one of the Snapdragon chips. There exists a huge number of different models of Snapdragons; this one I chose pseudo-randomly when I was searching for some sort of visual diagram. Qualcomm used to include some high-level diagrams of the architecture in their marketing materials, but it seems that they don't do this anymore, and this particular diagram is from the technical specification of a particular model, the 820. This particular model of Snapdragon is also a bit interesting because it is the first one that included the artificial intelligence engine, which is also based on Hexagon. For our purposes the main interest here is the processors. The majority of Snapdragons include quite a long list of processors. There are at least four ARM-based Kryo CPUs that actually run the Android operating system. Then there are the Adreno GPUs, and then there are several Hexagons.
On the most recent models there is not just one Hexagon processing unit but several of them, and they are named according to their purposes. Each one of these Hexagon cores is responsible for handling a specific functionality. For example, the mDSP handles the modem and runs the real-time operating system, the aDSP handles media, and the cDSP handles compute. So the Hexagons actually represent around one half of the processing power on modern Snapdragons. There are two key points about the Hexagon architecture from the hardware perspective. First of all, Hexagon is specialized for parallel processing, and so the first concept is variable-sized instruction packets. It means that several instructions can execute simultaneously in separate execution units. It also uses hardware multithreading for the same purposes. On the right side of the slide here is an example of Hexagon assembly. It is quite funny at times. The curly brackets delimit the instructions that are executed simultaneously, and these instructions must be compatible in order to be able to use the distinct processing slots. And then there is the funny .new notation, which actually enables instructions to use both the old and the new value of a particular register within the same instruction cycle. This provides quite a bit of optimization at the low level. For more information I can direct you to the Hexagon Programmer's Reference Manual, which is available from the Qualcomm website. The concept of production fusing is quite common. As I said previously, it's a common practice for mobile device vendors to lock down the devices before they enter the market, to prevent modifications and tinkering. For the purposes of this locking down, there are several ways it can be accomplished. Usually various advanced diagnostic and debugging functionalities are removed from either software or hardware, or both. It is quite common that these functionalities are only removed from software while the hardware remains there, and in such a case researchers will eventually come up with their own software-based implementation of all these functionalities, as in the case of some custom iOS kernel debuggers, for example. In the case of Qualcomm, there was at some point a leaked internal memo which discusses what exactly they do for production fusing of the devices. In addition to production fusing, in the case of modern Android devices, the baseband runs within the trust zone, and on my device it is already quite locked down. The baseband uses a separate component named the MBA, which stands for Modem Basic Authenticator, and this entire thing is run by a subsystem of the Android kernel named PIL, the peripheral image loader. You can open the source code and investigate how exactly it looks. The purpose of the MBA is to authenticate the modem firmware, so that you would not be able to inject arbitrary modifications into the modem firmware and flash it. This is another side of the hardening, which makes it very difficult to inject any arbitrary code into the baseband. Basically, the only way to do this is through a software vulnerability. During this project, I have partially reverse engineered the Hexagon modem firmware from my device, my Nexus 6P. The process of reverse engineering is not very difficult. First of all, you need to download the firmware from the vendor's website, Google's website in this case.
Then you need to find the binary which corresponds to the modem firmware. This binary is actually a compound binary that must be divided into the separate binaries that represent specific sections inside the firmware, and for that purpose we can use the Unified Trustlet script. After you have split the baseband firmware into separate sections, you can load them into IDA Pro. There are several plugins available for IDA Pro that support Hexagon. I have tried one of them, I think it was the GSMK one, and it works quite well for basic reverse engineering purposes. Notable here is that some sections of the modem firmware are compressed and relocated at runtime, so you will not be able to reverse engineer them unless you can decompress them, which is also a bit of a challenge because Qualcomm uses some internal compression algorithm for that. For the reverse engineering, the main approach here is to get started from some known reference points. For example, because this is a real-time operating system, we know that it should have some task structures that we can locate, and from there we can locate some interesting code. In the case of Hexagon this is a bit nontrivial because, as I said, it doesn't have any log strings. So even though you may locate something that looks like a task struct, it's not clear which code it actually represents. So the first step here is to apply the log strings that were removed from the binary by QShrink. I think the only way to do it is by using the msg_hash.txt file from the leaked sources. This file is not supposed to be available either on the mobile devices or in the open ecosystem. After you have applied these log strings, you will be able to rename some functions based on them, because the log strings often contain the names of the source file or source module from which the code was built, so it creates an opportunity to understand what exactly this code is doing. Debugging was not readily available in my case, and I realized that it would require a couple of months more work to make it work. The only way, I think, and the best way, is to create a software-based debugger similar to ModKit, the publication that I will be referencing in the references, based on a software vulnerability in either the modem itself, or in the authenticator, or in the trust zone, so that we can inject software debugger callbacks into the baseband and connect them to a GDB stub. This is how the part of the firmware looks that has the log strings stripped out. Here it already has some names applied using the IDA script; of course there were no such names initially, only the hashes. Each one of these hashes represents a log string that you can take from the message hash file. And here is what you can get after you have applied the textual messages and renamed some functions. In this case, you will be able to find some hundreds of procedures that are directly related to the Diag subsystem. In a similar way, you can locate various subsystems related to the over-the-air vectors as well. But unfortunately, the majority of the OTA vectors are located in segments that are not immediately available in the firmware, the ones that are compressed and relocated. Meanwhile, I have tried many different things during this project. The thing that definitely worked is building the MSM kernel. There is nothing special about this, just a regular cross build. Another well-known offensive approach is firmware downgrades.
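As a side note before moving on to firmware downgrades: the log-string recovery step can be approximated with a few lines of scripting. This is only a rough sketch, not the tooling used in the original research, and it assumes a msg_hash.txt layout of hash, source file and format string separated by colons; adjust the parsing for the layout of the file you actually have.

```python
#!/usr/bin/env python3
"""Recover QShrink'ed log strings: map the 32-bit hashes left behind in the
firmware back to the original format strings listed in msg_hash.txt.
The msg_hash.txt column layout is an assumption (hash : source file : format
string); adjust parse_line() for the file you actually have."""

import struct
import sys

def parse_line(line):
    parts = line.rstrip("\n").split(":", 2)
    if len(parts) != 3:
        return None
    try:
        return int(parts[0], 16), parts[1], parts[2]
    except ValueError:
        return None

def load_hashes(path):
    table = {}
    with open(path, errors="replace") as fh:
        for line in fh:
            parsed = parse_line(line)
            if parsed:
                h, src, fmt = parsed
                table[h] = (src, fmt)
    return table

def scan_segment(blob, base, table):
    """Brute-force scan a firmware segment for 32-bit little-endian values
    that appear in the hash table and report where they sit."""
    for off in range(0, len(blob) - 3, 4):
        val = struct.unpack_from("<I", blob, off)[0]
        if val in table:
            src, fmt = table[val]
            yield base + off, src, fmt

if __name__ == "__main__":
    hashes = load_hashes(sys.argv[1])           # msg_hash.txt
    seg = open(sys.argv[2], "rb").read()        # an extracted firmware segment
    base = int(sys.argv[3], 16)                 # its load address
    for addr, src, fmt in scan_segment(seg, base, hashes):
        print(f"{addr:#010x}  {src}  {fmt}")
```

The resulting address-to-string list can then be fed into an IDA or Ghidra script to name and comment the functions that reference those hashes.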
When you take some old firmware that contains a well-known security vulnerability, flash it, and use the bug to create an exploit to achieve some additional functionality or introspection into the system, this part definitely works. Downgrades are trivial, both for the entire firmware and for the modem, as well as for the trust zone. I did try to build the Qualcomm firmware from the leaked source code. I assigned just a few days to this task, since it's not mission critical, and I ran out of time; it was probably a different version of the source code. But actually this is not a critical task, because building a leaked firmware is not directly relevant to finding new bugs in the production firmware, so I just set it aside for some later investigation. I have also investigated the RAM dumps ecosystem a little bit, on the software side at least, and it seems that it's also fused quite reliably. This is when I remembered about the Qualcomm Diag. During the initial reconnaissance I stumbled on some white papers and slides that mentioned the Qualcomm diagnostic protocol, and it seemed like quite a powerful protocol, specifically with respect to reconfiguring the baseband. So I decided first of all to test it, in case it would actually provide some advanced introspection functionality, and then probably to use the protocol for enabling log dumps. Qualcomm Diag, or QCDM, is a proprietary protocol developed by Qualcomm for the purposes of advanced baseband software configuration and diagnostics. It is mostly aimed at OEM developers, not at users. The Qualcomm Diag protocol consists of around 200 commands, and in theory some of them are quite powerful on paper, such as download mode and memory read/write. Initially, Diag was partially reverse engineered around 2010 and included in the open-source project named ModemManager, and then it was also exposed in a presentation at the Chaos Communication Congress 2011 by Guillaume Delugré. I think that presentation popularized it, and it is the one that introduced me to this protocol. Unfortunately, the majority of that presentation is not really relevant to modern production phones, but it does provide a high-level overview and a general expectation of what you will have to deal with. From the offensive perspective, the Diag protocol represents a local attack vector from the application processor to the baseband. A common scenario of how it can be useful is unlocking mobile phones which are locked to a particular mobile carrier. If we find a memory corruption vulnerability in the Diag protocol, it may be possible to execute code directly on the baseband and change some internal settings. Historically this is usually accomplished through the AT command handlers, but internal proprietary protocols are also very convenient for that. The second scenario where the Diag attack vector can be useful is using it for injecting a software-based debugger. If you can find a bug in Diag that enables a read/write capability on the baseband, you can inject some debugging hooks and eventually connect them to a GDB stub. So it makes it possible to create a software-based debugger even when JTAG is not available. What has changed in Diag in 10 years, based on the cursory investigation that I did? First of all, the original publication looked at a Qualcomm baseband based on ARM and running the REX operating system.
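Before looking at what has changed since that publication, it is worth noting that the wire framing reverse engineered into ModemManager's libqcdm back then still describes the protocol's outer layer: a payload, a CRC-16 trailer, HDLC-style escaping, and a 0x7E terminator. Here is a minimal sketch; the command number comes from the reverse-engineered libqcdm command table and should be treated as an assumption for any particular firmware.

```python
#!/usr/bin/env python3
"""Wrap a raw Diag (QCDM) request the way libqcdm/ModemManager frames it:
payload + CRC-16 (reflected CCITT, init 0xFFFF, complemented), HDLC-style
byte escaping, and a 0x7E trailer. Command 0x00 is the version-info request
in the reverse-engineered command table."""

HDLC_FLAG = 0x7E
HDLC_ESC = 0x7D

def crc16(data: bytes) -> int:
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

def hdlc_encapsulate(payload: bytes) -> bytes:
    crc = crc16(payload)
    raw = payload + bytes((crc & 0xFF, crc >> 8))   # CRC appended little-endian
    out = bytearray()
    for b in raw:
        if b in (HDLC_FLAG, HDLC_ESC):              # escape flag/escape bytes
            out += bytes((HDLC_ESC, b ^ 0x20))
        else:
            out.append(b)
    out.append(HDLC_FLAG)                           # frame terminator
    return bytes(out)

if __name__ == "__main__":
    DIAG_VERNO_F = 0x00
    print(hdlc_encapsulate(bytes([DIAG_VERNO_F])).hex())   # -> 0078f07e
```

The same framing is applied to responses coming back from the baseband, just in the other direction.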
All modern Qualcomm basebands are based on Hexagon as opposed to ARM, and the REX operating system was replaced with QuRT, which I think still has some bits of REX in it, but in general it's a different operating system. The majority of the super powerful DIAG commands, such as download mode and memory read/write, were removed, at least on my device. And also it does not expose any immediately available interfaces such as a USB channel. I hear that it's possible to enable the USB DIAG channel by editing some special boot properties, but usually it's not — it wouldn't be available, it shouldn't be expected to be available on all devices. So these observations were based on my test device, a Nexus 6P, and this should be around a medium level of hardening. More modern devices such as Google Pixels, the modern ones, should be expected to be even more hardened than that, especially on the Google side, because they take hardening very seriously. On the other side of the spectrum, if you think about some no-name modem sticks, these things can be more open and easier to investigate. The DIAG implementation architecture is relatively simple. This diagram is based roughly on the same diagram that I presented in the beginning of the talk. On the left side there is the Android kernel and on the right side there is the baseband operating system. The DIAG protocol actually works in both directions. It's not only commands that can be sent by the application processor to the baseband, but also messages that can be sent by the baseband to the application processor. So DIAG commands are not really commands, they're more like tokens that can also be used to encode messages. The green arrows on this slide represent an example of a data flow originating from the baseband and going to the application processor. So obviously, in the case of commands, there would be a reverse data flow. The main entity inside the baseband operating system responsible for DIAG is the DIAG task. It has a separate task which handles specifically the various operations related to the DIAG protocol. The exchange of data between the DIAG task and other tasks is done through a ring buffer. So for example, if some task needs to log something through DIAG, it will use specialized logging APIs, which will in turn put the logging data into the ring buffer. The ring buffer will be drained either on a timer or on a software-based interrupt from the caller. And at this point the data will be wrapped into the DIAG protocol, and from there it will go to the SIO task, the serial I/O, which is responsible for sending the output to a specific interface. This is based on the baseband configuration. The main interface that I was working with is the shared memory, which ends up in the diag char driver inside the Android kernel. So in the case of sending commands from the Android kernel to the baseband, it will be the reverse flow. First you will need to craft the DIAG protocol data and send it through the diag char driver, which will write it to the shared memory interface. From there it will go to the specialized task in the baseband and eventually end up in the DIAG task and potentially in other responsible tasks. On the Android side, DIAG is represented by the DIAG device, which is implemented by the diagchar and diagfwd kernel drivers in the MSM kernel. The purpose of the diagchar driver is to support the DIAG interface. It is quite complex in code but functionally it's quite simple.
It contains some basic minimum of DIAG commands that enable configuration of the interface on the baseband side, and then it is able to multiplex the DIAG channel to either USB or a memory device. It also contains some ioctls for configuration that can be accessed from Android userland, and finally diagchar filters out various DIAG commands that it considers unnecessary. This is a bit important, because when you try to do some tests and send arbitrary commands through the DIAG interface, you would be required to rebuild the diagchar driver to remove this masking, otherwise your commands will not make it to the baseband side. At the core, the diagchar driver is based on the SMD, the shared memory device interface, which is a core interface specific to the Qualcomm modem. So this is where DIAG is — the diagchar driver — on the diagram. The diagchar driver itself is located in the application OS vendor-specific drivers, and then there is some shared memory implementation in the baseband that handles this, and the DIAG implementation itself. The diagchar driver is quite complex in code, but the functionality is quite simple. It does implement a handful of ioctls that enable some configuration. I didn't check what exactly these ioctls are responsible for. It exposes the /dev/diag device, which is available for read and write. However, by default you are not able to access the DIAG channel through this device, because in order to access it there is a diag_switch_logging function which switches the channel that is used for DIAG communications. On this screen there are several modes listed, but in practice only two of them are supported: the USB mode and the memory device mode. USB mode is the default, which is why, if you just open the DIAG device and try to read something from it, it won't work. It's tied to USB, and in order to reconfigure it to use the memory device you need to send a special ioctl code. Notice the procedure named mask_request_validate, which employs quite strict filtering on the DIAG commands that you try to send through this interface. So it filters out basically everything with the exception of some basic configuration requests. At the core, the diagchar driver uses the shared memory device to communicate with the baseband. The SMD implementation is quite complex. It exposes the smd_read API, which is used by DIAG for reading data from the shared memory. It's one of the APIs. The shared memory also operates on the abstraction of channels, which are accessed through an API named smd_named_open_on_edge. So you can notice here that there are some DIAG-specific channels that can be opened. Now let's take a look at the SMD implementation. This is a bit important because the shared memory device represents a part of the attack surface for escalation from the modem to the application processor. This is a very important attack surface, because if you just achieve code execution on the baseband it's mostly useless, because it cannot access the main operating system. And in order to make it useful you will need to chain things, to create an exploit chain, and add one more exploit based on a privilege escalation bug from the modem to the application processor. So the shared memory device is one of the attack surfaces for this. The shared memory device is implemented as a memory region exposed by the Qualcomm peripheral. The specialized MSM driver will map it, and here it is the SMEM RAM base, the base of the shared memory region.
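Before going deeper into the shared memory layout, here is a minimal user-space sketch of the driver plumbing just described: open /dev/diag and switch it from the default USB mode to memory-device mode so that DIAG traffic becomes readable. The ioctl number and mode constant are taken from typical MSM diagchar headers and vary between kernel versions, so treat them as assumptions and check the device's diagchar.h.

```python
# Sketch: talk to the diag char driver from user space (rooted device).
# Assumed constants -- verify against the device's diagchar.h:
#   DIAG_IOCTL_SWITCH_LOGGING = 7, MEMORY_DEVICE_MODE = 2
import fcntl
import os

DIAG_IOCTL_SWITCH_LOGGING = 7   # assumption
MEMORY_DEVICE_MODE = 2          # assumption

fd = os.open("/dev/diag", os.O_RDWR)
# Older kernels take the mode as a plain integer argument; newer ones
# expect a pointer to a small struct instead -- adjust as needed.
fcntl.ioctl(fd, DIAG_IOCTL_SWITCH_LOGGING, MEMORY_DEVICE_MODE)

# From here, os.read() returns DIAG data and os.write() sends requests --
# still subject to mask_request_validate() unless the driver was rebuilt
# with that filtering removed.
print(len(os.read(fd, 65536)), "bytes of DIAG data")
```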
The shared memory region operates on the concepts of entries and channels. So it's partitioned into different parts that can be accessed through the procedure smem_get_entry. And one of these entries is the SMD channel allocation table, which contains the list of available channels that can be opened. From there we can actually open the channels and use the shared memory interface. During this initial research project it wasn't my goal to research the entire Qualcomm ecosystem. So while I was preparing for this talk I noticed some more interesting things in the source code, such as, for example, the specialized driver that handles a JTAG memory region, which is presumably exposed by some Qualcomm system chips. In the drivers this is mostly used read-only, and I suppose that it will not really work for writing, but it's worth checking, probably. And now finally let's look at the DIAG protocol itself. One of the first things that I noticed when researching the DIAG protocol is that it's actually used in a few places, not only in libqcdm. A popular tool named SnoopSnitch can enable cellular protocol dumps on rooted devices. And in order to accomplish this, SnoopSnitch sends an opaque blob of DIAG commands to the mobile device through the DIAG interface. This blob is not documented, so it got me curious what exactly these commands are doing. But before we can look at the dump, let's understand the protocol. The DIAG protocol consists of around 200 commands, or tokens. Some of them are documented in the open source, but not all of them. So you can notice on the screenshots that some of the commands are missing. And one of the missing commands is actually the token 92 in hexadecimal, which represents an encoded hash log message. The command format is quite simple. The base primitive here is the DIAG token number 7E. It's not really a delimiter; it's a separate DIAG command, 126 in decimal. It's missing in the open source, as you can see here. So the DIAG command is nested. The outer layer consists of this wrapper of 0x7E bytes. Then there is the main command. And then there is some variable-length data that can contain even more subcommands. This entire thing is verified using a CRC, and some bytes are escaped, specifically, as you can see in the snippet. One interesting thing about the DIAG protocol is that it supports subsystem extensions. Basically, different subsystems in the baseband can register their own DIAG subsystem handlers, arbitrary ones. And there exists a special DIAG command, number 75, which simply instructs the DIAG system to forward the command to the respective subsystem. And then it will be parsed there. There exists quite a large number of subsystems. Not all of them are documented. And when I started investigating this, I noticed that there actually exists a DIAG subsystem and a debugging subsystem. The latter one immediately interested me because I was hoping that it would enable some more advanced introspection through this debugging subsystem. But it turned out that the debugging subsystem is quite simple. It only supported one command: inject crash. So you can send a special DIAG command that will inject a crash into the baseband. I will talk later about this. Now let's take a look at specific examples of the DIAG protocol. This is the annotated snippet of the block of commands from SnoopSnitch. This block actually consists of three large logical parts. The first part is largely irrelevant.
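A quick concrete aside on the framing just described, before going through the rest of that SnoopSnitch block. This is a minimal sketch of the HDLC-style encapsulation (a CRC-16/CCITT appended to the payload, 0x7D/0x7E bytes escaped, 0x7E terminator), which is also how libqcdm encapsulates commands; verify the details against a captured packet from your own device.

```python
# Sketch: wrap a raw DIAG request in the 0x7E-terminated framing.
def crc16_ccitt(data, crc=0xFFFF):
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

def diag_encode(payload: bytes) -> bytes:
    crc = crc16_ccitt(payload)
    raw = payload + bytes([crc & 0xFF, crc >> 8])   # CRC, low byte first
    out = bytearray()
    for b in raw:
        if b in (0x7D, 0x7E):                       # escape control bytes
            out += bytes([0x7D, b ^ 0x20])
        else:
            out.append(b)
    out.append(0x7E)                                # frame terminator
    return bytes(out)

# Example: command 0x00 (version info request) ready to be written out.
print(diag_encode(b"\x00").hex())
```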
It's a bunch of commands that request various information from the baseband, such as the timestamp, version and build ID and so on. The second bunch of commands starts with command number 73 hexadecimal. This is the log config command. This is the command which enables protocol dumps and configures them. And the third part of this block starts with command number 7D hexadecimal. This is the extended message config command. This is actually the command that is supposed to enable textual message logging, except that in the case of SnoopSnitch, it disables all logging altogether. So how do cellular protocol dumps actually work? In order to enable the cellular protocol dumps, we need the DIAG log config command, number 73 hexadecimal. It is partially documented in libqcdm. The structure of the packet contains the code and the subcommand, which would be set mask in this case. It also needs an equipment ID, which corresponds to the specific protocol that we want to dump. And finally the masks that are applied to filter some parts of the dump. This is relatively straightforward. And now the second command, the extended message config command. This is the one which is supposed to enable textual message logs. The command format is undocumented, so let's take a closer look at it. The command consists of a subcommand — in this case, it's subcommand number four, the set mask. And then there are two 16-bit integers, SSID start and end. SSID is a subsystem ID, which is not the same as the DIAG subsystems. And the last one is the mask. So subsystem IDs are used to filter the messages based on a specific subsystem, because there is a huge number of subsystems in the baseband, and if all of them start logging, that is a huge amount of data. So this provides the capability to filter a little bit, down to the specific subsystems that you're interested in. The snippet of Python code here is an example of how to enable textual message logging for all subsystems. You need to set the mask to all ones. And this is quite a lot of logging, in my experience. Now, for parsing the incoming log messages, there are two types of DIAG tokens. Both of them are undocumented. The first one is the legacy message, number 79 hexadecimal. This is a simple ASCII-based message that arrives through the DIAG interface, so you can parse it quite straightforwardly. The second one is the hash-encoded one, the DIAG command log hash, number 92 hexadecimal. This is the token which encodes the log messages that contain only the hashes. This is the one where, if you have the msg_hash.txt file, you can match the hash that arrived through this command to the messages provided in the text file, and you can get the textual logs. On the lower part of this slide, there are two examples of hex dumps for both commands. Both of them have a similar structure. First, there are four bytes that are essential. The first one is the command itself. And the third byte is quite interesting: it is the number of arguments included. Next, there is a 64-bit timestamp value. Next, there is the SSID value, 16 bits. Some line number. And I'm not sure what the next field is. And finally, after that, there is either an ASCII-encoded log string in plain text or a hash of the log string. And optionally, there may be some arguments included. So in the case of the first, legacy command, the arguments are included before the log message. And in the case of the second command, they are included after the MD5 hash of the log message, at least in my version of this implementation.
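A minimal sketch of the two pieces just described — building the extended message config request that switches on textual logging for a range of SSIDs, and pulling the fixed header fields out of an incoming 0x79/0x92 record. The field layout follows the description above; some builds insert extra padding or expect one mask per SSID, so treat the offsets as assumptions and compare against real captures.

```python
import struct

def build_msg_set_rt_mask(ssid_start, ssid_end, mask=0xFFFFFFFF):
    # cmd 0x7D (extended message config), subcommand 4 (set mask),
    # two 16-bit SSID bounds, then the 32-bit runtime mask (all ones
    # here = log everything). Wrap with the framing sketch from earlier
    # before writing it to the DIAG interface.
    return struct.pack("<BBHHI", 0x7D, 0x04, ssid_start, ssid_end, mask)

def parse_msg_header(payload: bytes):
    # Shared layout of 0x79 (plain text) and 0x92 (hashed) records, per
    # the description above: cmd, ?, num_args, ?, 64-bit timestamp,
    # 16-bit SSID, line number, then arguments plus string or hash.
    cmd, _, num_args, _ = struct.unpack_from("<BBBB", payload, 0)
    timestamp, ssid, line = struct.unpack_from("<QHH", payload, 4)
    return {"cmd": cmd, "num_args": num_args, "timestamp": timestamp,
            "ssid": ssid, "line": line, "body": payload[16:]}
```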
And this is the DIAG packet that enables you to inject a crash into the baseband, at least in theory. Because in my case, it did not work. And by not working, I mean that it simply did nothing to the baseband. Normally, I would expect that on a production device it should just reset the baseband. You will not get a crash dump or anything like that; it's just a reset. So I suppose that it still should be working on some other devices, so it's worth checking. There are a few types of crashes that you can request in this way. In order to accomplish this, I needed a very simple tool with basically two functions. The first is direct access to the DIAG interface, ideally through some sort of Python shell. And the second is the ability to read and parse data with the recovered log strings. For that purpose, I wrote a simple framework that I named DIAG TOC, which is based directly on the /dev/diag interface in the Android kernel, with a Python harness. So on the left side here is an example of some advanced parsing with some leaked values. And on the right side here is an example of the advanced message log, which includes the log strings that were extracted — the ones that were stripped out from the firmware. The log is quite fun, as I expected it to be. It has a lot of detailed data, such as, for example, GPS coordinates and various attempts of the baseband to connect to different channels. And I think it's quite useful for offensive research purposes. It even sometimes contains raw pointers, as you can notice on the screenshot. So in this project, my conclusion was that, indeed, I was reassured that it was the right choice, and Hexagon seems to be quite a challenging target. And it would probably need several more months of work to even begin to do some serious offensive work. I also started to think about writing a software debugger, because it seems to be probably the most reliable way to achieve debugging introspection. And also I noticed some blank spaces in the field that may require future work. For Qualcomm Hexagon specifically, there are a lot of things that can be done. For example, you can take a look at other Qualcomm proprietary diagnostic protocols, of which there are a few, such as QMI, for example. I think they are lesser known than the DIAG protocol. Then there is a requirement to create a full system emulation based on QEMU, at least for some chips. And there is the big problem of the lack of a decompiler, which is a major obstacle to any serious static analysis of the code. And for the offensive research, there are three large directions. The first one is enabling debugging. There are different ways for that — for example, software-based debugging, or, on the other hand, bypassing the JTAG fusing. Next, there is the exploration of the over-the-air attack vectors. And the third one is escalation from the baseband to the application processor. These are the three large offensive research vectors. And for basebands in general, there also exist some interesting directions of future work. First of all, OsmocomBB. It definitely deserves a bit of an update. It is the only open source implementation of a baseband, and it is so outdated, and it is based on some really obscure hardware. Another problem here is that there doesn't exist any software-based CDMA implementation. Thank you. Alisa, thank you very much for this nice talk. There are some questions from the audience. So basically the first one is a little bit of an icebreaker. Do you use a mobile phone and do you trust it?
No, I don't really use a mobile phone. Only for Twitter. Does anyone still use mobile phones nowadays? Well, yeah. Okay. Another question concerns the other Qualcomm chips. Did you have a look at the Qualcomm Wi-Fi chipset? As I mentioned during the talk, I had only one month. It was a short reconnaissance project, so I didn't really have time to investigate everything. I did notice that Qualcomm SoCs have a Wi-Fi chip which is also based on Hexagon. And more than that, it also shares some of the same low-level technical primitives. So it's definitely worth looking at, but I didn't investigate it in detail. Okay. Okay, thanks. Well, and there is also a pretty technical question here. So instead of having to go through the rigorous command checking in the diag char driver, wouldn't it be possible to mmap the shared memory into a user space process and send commands over directly? So it depends a little bit on what the goal is. Okay. So it really depends on your previous background and your goals. The point here is that by default, the diag char ecosystem does not allow you to send arbitrary DIAG commands. So either way, you will have to hack something. One way to hack this is to rebuild the diag char driver, so you will be able to send the commands directly through the DIAG interface. Another way would be to access the shared memory directly, for example. But I think it would be more complex, because the Qualcomm shared memory implementation is quite complex. So I think that the easiest way would actually be to hack the diag char driver and use the DIAG interface as is. Okay. Yeah, thanks. Thanks. Well, there's one question which is really important. This level of hardening that I presented, I think, is around a medium level. So usually, production phones are even more hardened. If you take a look at things like the Google Pixel 5 or the latest iPhones, they will be even better hardened than the one that I discussed. Okay. Yeah. Thanks. Thanks, then. So it doesn't look like we have any more questions left. Anyway, if you want to get in contact with Alisa, no problem. There is the feedback tab below your video at the moment. Just drop your questions over there. And that's a way to get in touch with Alisa. I would say we are done for today for this session. Thank you very, very much, Alisa, for this really nice presentation once again. And I'll transfer now over to the Herald News show.
|
State-of-the-art report on Qualcomm DIAG diagnostic protocol research, its modern implementation as it appears in Hexagon basebands, and advanced harnessing and reverse-engineering on modern off-the-shelf smartphones. Diag is a proprietary diagnostics and control protocol implemented in omnipresent Qualcomm Hexagon-based cellular modems, such as those built into Snapdragon SoCs, and named after the DIAG task in the baseband's RTOS that handles it. Diag presents an interesting non-OTA attack surface via locally exposed interface channels to both the application processor OS and the USB endpoints, and advanced capabilities for controlling the baseband. Since Diag was first reverse-engineered around 2010, a lot has changed: mobile basebands are becoming increasingly security-hardened and production-fused, the Hexagon architecture is gaining serious ground on the competition, and the Diag protocol itself has been changed and locked down. Meanwhile, the local attack surface in basebands is gaining importance, and so does baseband security and vulnerability research. In this talk I will present the state of the art in Diag research, based on previously unpublished details about the inner workings of the Diag infrastructure that I reverse-engineered and harnessed for my research purposes, its modern use, and how we can exploit it to talk to the production-fused baseband chip on off-the-shelf modern phones such as Google Pixel, while understanding what exactly we are doing.
|
10.5446/52106 (DOI)
|
Hello everyone and welcome to my talk, which is about Bluetooth exposure notification security. This talk could also be summarized as follows. So first of all, exposure notifications as in the Google API are very secure and battery friendly, and please just use the Corona-Warn-App. This might be very confusing to most of you who have been listening to me for a while, because I have been working on Bluetooth exploitation in the past and always told everyone that Bluetooth is insecure, so you might wonder, how does this align? Why am I now here telling you that Bluetooth exposure notifications are secure? So, well, it's a pandemic, so instead of just criticizing solutions you should also look for solutions that work — so instead of ranting, work on something that helps everyone. So the first question that many people ask: do we even need a smartphone app to fight the pandemic? What we can say is, well, it's December, exposure notifications were introduced in June and we still have Corona, it still exists, so it didn't help to fully fight it and probably it won't stop Corona. But let's look at this from another perspective. So first of all, if you have an app and get the warnings, we can do more accurate testing, and that's very important because even now we are still low on tests, we cannot test everyone. We only test people with symptoms, and that's really an issue, because people can infect other people prior to symptoms and they could even infect others without having symptoms — so there are asymptomatic cases, and these can be found with the Corona-Warn-App. And also this can encourage manual contact tracing, because official health authorities are not able to do physical and manual contact tracing anymore, so you need to ask your friends and so on if your app turns red, and then you might find cases even if they forgot to tell you. Of course this all doesn't replace washing your hands, wearing a mask, physical distancing and so on, so of course you still need to take these measures. But even if you just inform a few people, every prevented infection actually saves lives, so it's very important to have an app like this. And the next question is, well, is there something better than Bluetooth? So if we want to look for a solution to build an app that supports exposure notifications and prevents infections, how could we build it? So we actually need something that somehow measures proximity or location, and in a smartphone we have various technologies that support that. So there is GPS, there is Bluetooth, there is LTE and 5G, there is Wi-Fi, there is ultra-wideband — you probably never heard of this — there is audio, there is a camera. You could use all of this. And the reason why you can use this to measure a distance or a direction is that on the physical layer you have a waveform, and this waveform first of all has an amplitude, and with distance this amplitude gets lower, so this also means that the signal strength is lower, and you also have a phase that is changing with the distance. So these are all properties that you can measure on the physical layer on a raw waveform, and some of this information is also sent to upper layers. And the most obvious one is the signal strength: it's a physical layer property that you can measure, and it's also sent to the upper layers in most protocols for simple things like determining how strong a Wi-Fi is, so that your device can actually pick the strongest Wi-Fi access point and so on. So the signal strength is very essential and sent to upper layers in most protocols.
You could actually even do a precise distance measurement, but for this you need the raw waveform, and that's not supported by most chips. There are a few chips that can do that. So for the precise distance measurement you actually need to send a signal and send it back and measure the round trip time of the signal, and this is for example done to determine if your Apple Watch can unlock your MacBook. And the third option is that you can even measure a signal direction. This actually needs multiple antennas to do some sort of triangulation of the signal, and this is not supported by most chips, because you not just need the support in the chip but also the multiple antennas. But with this you can for example do things like, on some iPhones you get some AirDrop direction of the other iPhone and so on, so you can have the direction of a signal shown on your device. When it comes to location, the most obvious choice for many people, or the intuitive choice, would be GPS. And GPS — well, the signals are sent by satellites, and they orbit Earth at more than 20,000 kilometers, so they are very, very distant, and until the signal arrives on your smartphone there is a lot of attenuation. So even if there are just a few buildings — or if you are indoors or something — already a few buildings are sufficient to make the location imprecise, and indoors it doesn't work at all. But indoors we have the highest risk of infections, so GPS is not really helpful here. The next option would be signals from LTE, 5G and so on. So here you have some senders and you actually change cells with your smartphone, so here we have one cell, and while you move, you move to another cell, and this is some movement that you do and you can measure the changes between the cells. And this actually has been used by phone providers in Germany to determine how effective the lockdown rules are. So with this you can actually see if people move more or less than prior to the pandemic and so on, or how effective the rules are and so on. And these are not very precise statistics, so this is nice to have those very broad statistics for a lot of people, but it's not useful to determine who you were meeting. And another option is Wi-Fi, but for Wi-Fi you have another issue. So Wi-Fi depends on access points and so on, and you can scan for access points, and of course most smartphones also support that you spawn your own Wi-Fi access point and then you could scan for this — but then you can no longer use your Wi-Fi, because you can only join one Wi-Fi or spawn one Wi-Fi access point and so on, and this really doesn't work. There are some manufacturer-specific additions that would allow distance measurement, but in most devices it's not accessible through APIs and stuff like this, so you cannot use Wi-Fi because of how it works and how it is built into smartphones. The best option for precise measurement is audio, because even if you don't have access to the chip or any API, what you have here is a sender, like a speaker, and a microphone, and they send a wave and you can measure this wave. So even without any lower layer access to some firmware, to some chip, you can have this very precise measurement. But here the issue is that it means that you need access to the microphone, so an app would need to run in the foreground with a microphone all the time, that drains battery, and even worse it means that you have a permanent spy in your pocket — so you have a governmental app that would listen to your microphone all the time, and many people don't want this.
Then there is an option that you probably have never heard of: ultra-wideband. That's coming to the newest generation of iPhones, and so far it's not used for many features, it's just something that can also determine the direction of a signal because it's using multiple antennas, so it can show you in which direction other devices are. But since it's only in a few devices, it's nothing that's useful for the general public, so it's a nice feature, but we are just a few years too early for it. And of course you could use the camera. Similar to the microphone, you could of course record everything with the camera, but that's probably not the solution that you want. So more likely you could actually use it to log into a location: you scan a QR code and then register that you are in a restaurant or that you are meeting friends. So this is what the camera ideally should be used for, and that would be a nice addition to the warning apps. And what's left? Well, there is Bluetooth. Bluetooth actually sends signals at 2.4 GHz, like Wi-Fi, and 2.4 GHz has a very big issue because it's attenuated by water, and humans are 60% water, so the measurement is a bit imprecise — but I mean, the other 40% of the human is stupidity, and that's also an issue, because humans are not using the Corona-Warn-App at all, and that's even worse. And well, what else is there? The next issue is that the chips vary and the antenna position varies and so on, so you actually have the issue that the measurements are not the same on each smartphone model, so it might be the same signal but a different measurement. And for this first issue with the different measurements of the same signal, we already have something that's built into the API, the official Google/Apple API: they include the transmit power per device model and so on, which is a slight risk for privacy, but overall it is a very good compensation, so they said that this is better to use and to have a little bit less privacy. Something else that you could use are active data connections over time to track the average signal strength, but that's worse, because an active data connection means that you have data being exchanged between two devices, and this is a risk for exploitation — exploits need some exchange of data, so this would be a risk for security. And another thing that you could add is the accelerometer. So depending on how you hold your smartphone you can actually determine from the accelerometer if it's in your hand, in your pocket and so on, and then compensate for this in the measurements, but the issue here is that the accelerometer is also able to determine if you are running, walking, how many steps you are walking and so on, so it's a huge privacy impact to access the accelerometer. And last but not least there is the angle of arrival, and that's something that's supported since Bluetooth 5.1, but it's an optional feature in the Bluetooth specification, so no smartphone has it yet, so you cannot actually do a specific measurement of the angle of another device — that's pretty sad. And well, so everything that improves those measurements on Bluetooth is always at the cost of privacy, security and battery life, so just considering how it's currently done in the API, it's pretty good.
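To make the calibration point above a bit more tangible: what the framework effectively works with is attenuation — the advertised transmit power minus the received signal strength, with per-model calibration — which is then put into buckets rather than converted into an exact distance. A rough sketch with invented calibration numbers and thresholds (not the official configuration) could look like this:

```python
# Rough sketch of the attenuation idea; thresholds are made up.
def attenuation_db(reported_tx_power, rssi, rx_calibration=0):
    # Attenuation = announced TX power minus corrected received RSSI.
    return reported_tx_power - (rssi + rx_calibration)

def bucket(att_db):
    if att_db < 55:
        return "near"
    if att_db < 70:
        return "medium"
    return "far"

# Example: TX power -20 dBm announced, -85 dBm received.
print(bucket(attenuation_db(reported_tx_power=-20, rssi=-85)))
```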
And to sum this technology round up a bit: well, even though Bluetooth is not perfect, Bluetooth Low Energy is really the best technology that we have in all smartphones, or all recent smartphones, and with this we can build exposure notifications. So yeah, even though Bluetooth might not be optimal, it is still the winner. Yeah, yeah, I know, Bluetooth is dangerous and so on, so let's discuss this a little bit. So actually during 2020 a lot of newspapers were trying to reach me and said, hey Jiska, you have been working on Bluetooth security, so please, please tell us how bad the current state of Bluetooth security is. And I was like, yeah, I don't really want to tell this, because, you know, Bluetooth is a wireless protocol that transmits data and that has certain risks — but so has everything else. So yeah, and then they didn't print this; I mean, it's really not a nice headline to print: Bluetooth is a wireless protocol that transmits data. Yeah, and then they were like, you know, I'm not using the exposure notifications because I'm using an outdated smartphone that no longer receives security updates, and then I'm like, yeah, so, I mean, no security updates, that's not just an issue for Bluetooth, that's an issue for everything — like if you browse unicorn pictures on the internet or receive mails or I don't know what — maybe just get a new smartphone if you are very concerned about the data on your smartphone. And also, something that you shouldn't do to a journalist when they ask you this is tell them: so, you have an outdated smartphone — are you just calling from a number that belongs to this smartphone? But yeah, so just don't discuss this, because it's a very general issue that's not specific to Bluetooth. And well, something that's a bit more specific to Bluetooth is that, well, you can build worms with this. So a device can be a master or a slave in the Bluetooth terminology, and a master can connect to slaves, and a smartphone can switch roles, which means that it can receive a worm and then become a master and transmit the worm to another slave, and the slave then becomes a master and so on — so it's wormable. But to have a very good worm you would actually need an exploit that runs on a recent iOS and a recent Android version and is very reliable, so it should be a very good exploit on all platforms, and if someone had such an exploit they would probably not use it to disturb exposure notifications, but they would sell it for the price that is currently available on the market — the highest price of course, because probably you don't have that many ethical concerns — instead of reporting it. But yeah, so that would be more the scenario here. And also people say, I turn my Bluetooth off because Bluetooth drains battery — but you know, Bluetooth doesn't drain a lot of battery, especially Bluetooth Low Energy. So Bluetooth Low Energy is a technology that can power even small devices like item finders of this size: if you have a battery, a button cell of this size, and then have a device slightly larger, like a Bluetooth finder of this size, they can run with this button cell for a year — and you charge your smartphone daily, and your smartphone has much more battery capacity than just one button cell. So yeah, go for it, it really doesn't drain battery, especially because you also have combo chips, and if you have Wi-Fi enabled then Bluetooth really doesn't add anything to this.
Another argument might be that Google and Apple are always stealing our data, and if they now do the contact tracing this means that they are stealing data. But in fact the exposure notification API was renamed because it really is just about exposure notifications, it's not about a contact log, and this means that this API is not collecting any data about your contact trace. And well, it's good and bad: they are preventing a centralized collection by everyone — so not just by health authorities, they prevent it for everyone, including themselves — so there is just no data collection, so you cannot complain about this. So yeah, after saying this you might ask me if I had now just said that Bluetooth is not dangerous at all — but you know, Bluetooth is still a wireless protocol that transmits data, so yeah, maybe, maybe it's still somewhat dangerous. So if you look at the Apple ecosystem, what you have is a feature set called the Continuity framework, and this does a lot of things like copy-paste, AirDrop, Handoff, whatsoever — so data that's being exchanged — and all of this Continuity part here, it all works with BLE advertisements and then actually Wi-Fi or AWDL for the data transfer. So you have a lot of BLE advertisements going on if you are already using iOS and other Apple devices, and exposure notifications really are just a tiny additional thing here, so it's just yet another feature that's based on BLE advertisements, so it's nothing that adds a lot. And you also need to know that the exposure notifications don't have a lot of logic — you just receive them, you don't answer to them and so on — so it's really a very harmless feature on top, compared to the other services running. Now let's look into an Android example, which is a recent Bluetooth exploit. So bugs like this exist, and this is not just specific to Android, it can also be on iOS, and it's also not specific to Bluetooth, because we have bugs all the time. So if you are using software, if you are not updating it, there might be bugs in it, or there might also be bugs in it that have not been seen for a while, despite the fact that they should have been fixed and so on. So this just exists whenever you're using software. And also those bugs often depend on certain hardware and software versions. So for example this exploit only works on Android 9 and older, because it requires a very specific implementation of memcpy — because the memcpy is called with a length argument of minus 2, and it has different behavior on different systems. And last but not least, this exploit actually needs to run for something like two minutes, because you need to bypass ASLR over the air, so you need to be in proximity of a vulnerable device for a while if you are an attacker.
And now people say, yeah, but it's special because this kind of bug is wormable. Yeah, that's true, so you could build a Bluetooth worm with this — but what does it look like? So first of all, the devices are losing connections, so you don't have a full mesh, but you have some connections here and there, and you have a worm spreading somewhere and so on and so forth. But the attacker actually needs some control servers: no matter what the attacker wants to achieve — like steal data or do some Bitcoin mining or something — in the end you need to have some feedback and control server on the internet to have a communication, or also, if something goes wrong with your exploit, to stop it or something. You need this back channel despite having a wireless channel, because your wireless channel is not permanent. And the next challenge here is that your exploit needs to be very reliable. So it means that if you actually produce a crash, and if you have a worm that spreads very fast and that spreads a lot, then you have the problem that if it's not 100% reliable you would get crashes, and those are reported to Apple or to Google. And this is an issue, because once a bug is detected it means that Apple and Google will update their operating system and your bug is gone, so all your exploit development was just for nothing, your exploit is gone. And well, that actually means that if an attacker would want to build a worm, they would probably just use some outdated bug — and as I said, bugs happen, so they are there every few months or years, it depends — you would have a bug that works for a worm, and then the attacker does not have the risk of losing a very unique bug that is worth a lot of money if they use an old one. But it also means that all the devices that get updates are safe from this worm, so it really depends on what the attacker wants to do. So, what I think is more likely — instead of building a worm, what are the attack scenarios? Well, if you think about Bluetooth exploits, the worm needs a lot of reliability and so on, and you have this risk of losing the exploit, so probably the attacks are a bit more targeted and require the physical proximity of those targets. So stuff that I would say is very realistic would be, like, if you have some airport security check, or if an attacker is close to certain buildings like company buildings or your home or something to steal certain secrets, or also from the government: if there are protests and the government does not want them, or wants the identities of the protesters or something, this would be an option. But the worm, as I said, is a bit less plausible. And the next thing is, exploit development means that if you want to develop an exploit for recent iOS and Android, then this is a lot of work — and well, your enemy might be able to afford this, and in this case they can also use it multiple times, so as long as the bug does not leak and is not fixed they can reuse the exploit, so it's a one-time development cost. But if you think you have enemies like this, then probably use a separate smartphone for exposure notifications and keep your smartphone up to date and so on, or if you're very, very, very afraid of attacks then maybe just don't use a smartphone, because Bluetooth is really not the only way to hijack your smartphone. So you could still be attacked just via messengers, browsers, other wireless technologies like LTE and so on, so it's just a risk that you have, and that happens, and that's not specific to Bluetooth.
Anyway, let's go to a few implementation-specific details, so that you understand the exploitation background and why I think that the Bluetooth exposure notification API, as it is, is very secure. So first of all, the API does Bluetooth address randomization. That means these addresses are randomized and not connectable, and you cannot connect to them as an attacker, and also there is no feedback channel because of this non-connectable property. And it means that usually your smartphone is configured in a way that it doesn't announce any connectable addresses, it only has these random addresses, and this is really hard for exploitation: you need to know the correct address of a smartphone to send an exploit to it, and it's not sent over the air, so you need to decode packets — for example, if you are listening to music in parallel or something, you could extract the address from this — but it's very hard to achieve this. And another plus is that especially Apple is tremendously restricting their Bluetooth interfaces, so smartphone apps cannot use Bluetooth for arbitrary features that are available within the Bluetooth specification, and this is good for your privacy. So for example it's hard to build something like a spy in your pocket on iOS, because there it's hard to run an app in the background that does all the tracking via Bluetooth and so on. And the other way around, it means that if there are apps that do exposure notifications or contact tracing and they are not based on the official API, these apps are actually very exploitable, because they use active connections, they run in the foreground, they actually log stuff that should not be logged — so probably don't do this, and don't trust those apps that are not using the API. Another issue might be privacy. So first of all there is a random identifier that stays the same for a while, but as I said, on iOS you have the Continuity framework and it does the same, so at least on Apple devices this really doesn't make a big difference. And if you think a bit broader than Apple, well, first of all there is the signal strength, and if you compare this to other wireless technologies like Wi-Fi and LTE, there you also have signals with a signal strength and maybe a changing address and so on — you can always triangulate signals. So if you don't want to be tracked, you would also need to disable Wi-Fi and LTE. Another very important part about the security assumptions is the server infrastructure. There are two types of server infrastructure. First of all, you have one for the centralized approach, which is also known as contact tracing, and in the centralized approach the server knows everything. So the server knows who was in contact with whom and for how long — for example, Alice met Bob for 15 minutes, but also Alice met 10 other people on Tuesday or something — so you actually have a record log of who met whom, and now the server can actually tell specific people, after someone got a positive test and so on, for how long this was. The server can still send out some of the information to specific persons, but it has a lot of information internally. So this means, if this server is run by some governmental health authority, that all the users have to trust this authority a lot with their contact history.
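A short aside before looking at the decentralized alternative, to make the "random identifier" mentioned above concrete: an exposure notification advertisement is just BLE service data under the 16-bit UUID 0xFD6F, carrying a 16-byte rolling proximity identifier plus 4 bytes of encrypted metadata. A minimal parsing sketch, with an invented example payload, could look like this:

```python
# Sketch: extract the rolling proximity identifier (RPI) from a raw BLE
# advertisement. The example bytes below are invented for illustration.
def parse_exposure_notification(adv: bytes):
    i = 0
    while i < len(adv):
        length = adv[i]
        if length == 0:
            break
        ad_type, body = adv[i + 1], adv[i + 2:i + 1 + length]
        # 0x16 = service data (16-bit UUID); 0xFD6F = exposure notifications
        if ad_type == 0x16 and body[:2] == b"\x6f\xfd":
            return body[2:18].hex(), body[18:22].hex()  # RPI, metadata
        i += 1 + length
    return None

example = bytes.fromhex("03036ffd17166ffd" + "11" * 16 + "22" * 4)
print(parse_exposure_notification(example))
```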
And the other approach is decentralized exposure notifications. So the server has a list of pseudonyms of positively tested users, but these are just pseudonyms, not the exact times and exposures, and everyone can download this list and compare it to a local list. So you just have a local list on your smartphone of who you met, and you can compare that list with the pseudonyms that are on the server, and this means that everyone could even opt out of publishing these pseudonyms, and you don't share your list with anyone. So why is this good or bad? Well, the governmental health authorities don't get any contact tracing info in the decentralized approach, and this might be an issue, because it means that the government does not have any statistics about spreaders or the effectiveness of the app. We cannot measure how much the app actually helped. We cannot measure how many infections were prevented by telling people to go into quarantine or to get a test and so on. But on the other hand, it means nobody is getting this data — so neither Apple nor Google nor the government, nobody is getting the data — and there is no gain from attacking the servers, because they don't have any private information, and there's also no privacy impact from using the app. And in the end, if you get a positive test, even then you can choose not to share the result if you think it's an issue to disclose your pseudonyms — and I mean, ideally many people should share the result, but you don't have to. And I want to show you a few attacks on exposure notifications, because some people said that exposure notifications are very, very, very insecure, so let's look into attacks that have been publicly discussed on those exposure notifications as they are implemented now. And please note that many of these attacks are not specific to Bluetooth, but apply to everything that's somehow wireless and somehow a notification. So let's take a look. So, the time machine attack. This one is quite interesting. The assumption here is that someone can change the time on your smartphone and then replay outdated tokens, so that you would think that you met pseudonyms in the past that were already known to be tested positive, and because your smartphone is also in the past it would accept those tokens and log them, and then when you compare them to the server later on you think that you were in contact with positive users and so on. But please note that spoofing time is very, very hard, so if someone can spoof time it means they can also break other things like TLS — and I mean, if I had a time machine, then I would just travel back to a time prior to 2020 or something, instead of faking a few exposure notifications. The next attack is the wormhole attack. So imagine that this one would be one shopping center, then another shopping center, and maybe up there a police station or something like this. How does that work? Well, if you wormhole them and put them together, then the chance of getting a positive exposure notification in the end is very high.
So you increase the chance of having a positive exposure, and this exposure of course was not real, it's a forwarded exposure, and because of this, in the worst case you would do more physical distancing, more testing, maybe also start to distrust the app a little bit. But it doesn't really harm the overall system: the number of records on the server with a positive test is not increased, because only confirmed positive cases are uploaded to the pseudonym list, and those who are just here and get a notification are not uploaded. And also, to have such a deployment — to have this wormhole, and a wormhole that scales — you need a lot of devices that forward the notifications, and in public spaces, so it's not that easy to implement this. The last attack is the identity tracking attack. So let's say you have those pseudonyms, the pseudonyms change over time, and you are moving through a city and there are multiple devices that are observing your pseudonym changes — so of course you can then start tracking users. This again requires a very large scale installation, and the issue is also that if you are scared of this type of attack, then you would also need to disable Wi-Fi and LTE and so on, because you can always triangulate signals. So ideally, if you don't want to be tracked, turn off wireless technologies — this is really not specific to Bluetooth at all. So yeah, all those attacks are valid, but to deploy them — to have records of exposure notifications that you can then replay with time travel or a wormhole, or also some tracing of IDs — you really need a large scale installation of something like Raspberry Pis throughout a city, and many, many, many devices, so this would also work in any other wireless ecosystem, but okay. But if you would roll out such an installation, also keep in mind that you could instead just deploy a lot of devices that have microphones or cameras and Wi-Fi and so on and track a lot of other things. This needs to happen in public spaces, so, I don't know, next to bus stations, shopping centers and so on, and well, if you have such an installation then really, just tampering with exposure notifications over Bluetooth is not your main concern. The sad reality might actually be that we already have a lot of surveillance everywhere — we have a lot of cameras in public spaces — so this is not the part that I would be afraid of. I mean, I would be afraid of public surveillance, obviously, but not about Bluetooth surveillance in particular. So let me conclude my talk. BLE advertisements are really the most suitable technology that we have in a smartphone to implement exposure notifications, and they are available on recent smartphones on iOS and Android. They are very secure, privacy preserving, battery friendly and also scalable, and keep in mind: every prevented infection saves lives and also prevents long-term disease, so this is really a thing to use even if it does not work 100%. And with this, let's start the Q&A and discuss whatever you like — Bluetooth, worms and so on. Thanks for listening. Alright, thank you Jiska for your talk. I hope you have time for some Q&A. Yeah, let's go. Awesome. Oh, I think that was the internet connection, was it? So I might just start on my own until she reconnects. Ah, yeah, yeah. I'm so sorry. Why would the angle of arrival be interesting for an attacker?
Not for an attacker, but from the technology side it would be interesting to have it in Bluetooth, because then you could do some direction estimation, and I mean, if people move you could also get a direction and maybe a location via this, or assist this — but yeah, I mean, it's not in any chip yet except for some evaluation kits. Alright, thank you. Are there any studies that prove or disprove the efficiency of contact tracing apps in general, like the use case? I mean, the issue is, because we have the exposure notification framework, we do not have any statistics. So the good and the bad thing about the privacy is that we cannot have such a study, except if people would provide their data in, I don't know, a questionnaire or something. But at least I know people who have been warned by the app and got the test and so on, so it seems to work from the people I know. And I mean, every life that it saves counts in the end, so, I mean, nothing is perfect, right? Very true. Thanks for the answer. Another user would like to know: why would using the accelerometer be a data privacy issue? Isn't it up to how the sensor data is processed, i.e. it stays on the device, is not stored, and so on and so forth? Yeah, of course, but I mean, once you give that permission to the app you need to trust the app, which is first of all a binary that you download — I mean, maybe it's open source like our German app, but you never know — and then it could process this data. And actually, for the accelerometer there have been studies that you can not just do step counts, but even stuff like someone turned to the left, to the right, and from this, if you have an overlay on a map, you can even reconstruct the path that you moved through a city, for example. So that's a big issue, I think. All right, thank you. There's one more question here. Have any of the theoretical or possible exploits been tried in practice, to the best of your knowledge, that is? To my knowledge, not. I mean, I think it would only be visible once someone does such an attack at large scale, and then they need to manage to have a large deployment of such an attack without being caught, so yeah, nobody did this so far. Yeah, okay, makes sense. And there's one person who had a comment: they pointed out that there is an open implementation of the framework if a user wants to go without Google or Apple at all and still take part in exposure notifications — I think it's a fairly recent development. Yeah, exactly, so I had my slides already finished when that came out. It's on the F-Droid store, and it uses, I think, yeah, just an open source implementation, so that you can now use it even on your Lineage phone, for example. Okay, thank you. Okay, any further questions? Yeah, one last question just bumped in here. Sorry for staring over there. Has anyone tried to crack the verification server by brute force? The person asking did a back-of-the-envelope calculation some time back, and it seemed possible to just try out all possible teleTANs at some point?
All possible teleTANs? I mean, that would be really a brute force that you see in the backend, right? So that's something that you can just rate limit, and that's probably the only thing that you could brute force, because all the comparison and so on is done locally on the smartphone. So yeah, the teleTANs would be the only weak point in the system, and if someone tries to brute force all teleTANs — yeah, it's a very obvious attack. Yeah, makes sense. Alright, thanks a lot for your talk, thank you for your patience in answering all the questions. I believe that's actually our time slot exactly, and with that I hand it over to you — hand over to the news show, from Dublin this time. Thanks a lot.
|
Bluetooth is still the best technology we have in a smartphone to implement exposure notifications. It is safe to use the Corona-Warn-App. Fight me! ;) Wait, what, did Jiska just submit a talk claiming that Bluetooth is secure?! Is this just another 2020 plot twist? No, it's not. Assuming that we need an app that enables exposure notifications based on distance measurements, Bluetooth is the best trade-off. Audio would be more accurate but requires permanent access to the microphone. GPS does not work indoors, Wi-Fi and LTE chips are less accessible through smartphone APIs, so we're left with Bluetooth. And Bluetooth LE Advertisements are actually a great choice for such a protocol, further reducing exploitability. As someone who was involved in finding multiple Bluetooth security issues within chips and operating systems, Jiska should be more afraid of Bluetooth, you might think. However, attacking Bluetooth on an up-to-date smartphone with recent chips is very complex and requires physical proximity. Those using outdated smartphones face similar risks when browsing the Internet, without the physical proximity requirement. There are other issues within the CWA, such as missing awareness of places like restaurants and public transport, and a health system that lacks fast test reports. We should care about real problems instead of claiming security issues that barely have an impact on average users.
|
10.5446/52395 (DOI)
|
Are you sitting comfortably? Well, then I shall begin. So, Collabora Online — we talked about this, I think, last year, and I tried to persuade you that Collabora Online, built on this awesome LibreOffice technology, all of that richness, security, interoperability, all of those good collaborative features, made it just ideal to integrate with your content management system or your document management flow, and to allow people to collaborate around documents. So I went on and I showed you how to integrate it, and you can go back and, you know, FOSDEM provides this wonderful depth and wealth of talks and information across many years. So do go and look at that. I'm not going to talk about that this time. I'm going to talk a little bit about what's improved and what's changed since then, to make it, I guess, more relevant for you. So one of the things we've done is tried to make everything easier to integrate with. So we're now providing demo servers all around the world to help people write integrations and test them and get them working with their software easily. So obviously, you want to warn people before we do that, so in Nextcloud, you know, it'll say you're going to send your document to this other service, and of course, privacy-wise it's not a great, you know, approach, so we want to make sure people really buy in and agree it's just a demo, and watermark it to make that clear. And that's really easy: you know, you download some JSON, there's a whole load of servers there that you can contact and provide for the user to select one of them. For partners, we would then typically filter that, so, you know, some people want to provide an ownCloud server but not a Nextcloud one, you know, so it's presented in the right places for the services that you're providing. And we provide those all around the world so you can be sure you've got a demo server near you, which is great. But of course sometimes people don't have publicly-routable IP addresses, so we can't reach them to get the document and give it back to them and share it — so they can connect out, but we can't connect in to get the data. So that's a bit of an annoyance. So of course you can install the thing on-premise locally yourself, obviously: Docker images, packages, the works, the absolute works. But we've done a whole load of work to try and make this even easier, particularly integrating with these three-tier web products like Nextcloud or ownCloud and many others, where we really can't persist — you know, the lifecycle of PHP is for one request, and, you know, it's timed out and killed. And so we've produced, through a series of quite interesting tweaks — and I have another talk at FOSDEM about them if you're curious about that — something that's actually built in and works well. Please do upgrade quickly. This essentially uses a keep-alive HTTPS socket. So instead of having to reconnect over HTTPS every time — you know, all modern web servers support keep-alive — it's just much, much more efficient and doesn't, you know, involve two extra round trips just to set up your SSL/TLS stuff. So that means we can actually treat an HTTPS keep-alive socket much like a websocket and get rather similar latency and performance out of that. And it's kind of cool. You can see the exponential drop-off we use here when things aren't happening, but that works really well — surprisingly, surprisingly so.
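The keep-alive trick is easy to picture with a tiny client-side sketch: reuse one HTTPS session (one TLS handshake) and stretch the polling interval when nothing is happening. The endpoint and payload shape below are made up for illustration; the real integration goes through the built-in server plumbing rather than anything hand-rolled like this.

```python
# Sketch: long-lived HTTPS session with exponential back-off polling,
# standing in for a websocket. URL and message format are hypothetical.
import time
import requests

def handle(events):
    print("got", len(events), "events")

session = requests.Session()              # keep-alive: reuse the connection
interval, max_interval = 0.25, 8.0

while True:
    reply = session.post("https://collabora.example/poll", json={"since": 0})
    events = reply.json().get("events", [])
    if events:
        handle(events)
        interval = 0.25                   # activity: poll quickly again
    else:
        interval = min(interval * 2, max_interval)  # exponential drop-off
    time.sleep(interval)
```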
For other kinds of solution, if you're in Java, say, it can also pay to have a little proxying WebSocket implementation which goes to a back-end on your device. So that's an interesting way of getting the thing out there, bundling it in a way that a user can just download a PHP thing with one click in their PHP app and have something working, which can be useful for that mass-market consumer, home-user environment, with several caveats around that. Of course, it's not suitable for large-scale enterprise use, and it's not really suitable for aggressive other users to come and use alongside you; some of the containment bits are gone. Anyhow, we've also shipped version 6.4.0, which has a whole load of new things; obviously the numbers have changed, so there's a bit of a version leap there. You can see that, just to make it more clear where it's coming from. And one of the big things there is providing this new notebook bar choice. Many people like menus and toolbars, and if you look at what Google provide, obviously their user experience is very much from that stable. But we also want to provide this thing we call a notebook bar, which is perhaps more familiar to a different set of people: a more contextual, tabbed user experience. Of course we have the sidebar as well, which also provides a very rich experience in many ways. So there are lots of different ways for people to access tools in a way that's most familiar to them. And it can collapse out of the way: if you don't want it, just click again on your home tab and you'll end up with just this little menu, so you've got as much space as you like to work with your documents in your browser. So yes, classic mode is retained. It's configurable, and it's the default in some places; you can configure the default for all of your users there. We're currently working, actually together with 1&1, on contributing some changes so you can put a post message through the frame; actually, I think we have that already, for making it possible to configure that at runtime in the UI defaults. But more of that later. Frozen rows are very useful for spreadsheets, very much better. PDF search: it's really nice to be able to not only load PDFs, securely watermarked, and search through them, but also annotate them and collaborate around those annotations. That's often used. Improved SmartArt, graphics and so on. And actually, yeah, the PDF annotation stuff is often used, I think, by customers who want to provide a document that they want to show to people, but they don't really want them to be able to edit it or fiddle with it or even copy it, so we can turn off copy and download and that sort of thing. It's quite a nice, highly secure mode; we call it secure view, with the watermark in as well. But it still allows you to comment on the document and interact with it, provide feedback, maybe from the sales people to research and development or so on. So having got 6.4 out in November, there we go, lots of rapid-fire releases, lots of little features here. Being able to edit your chart titles and subtitles in the sidebar, and of course on mobile phones too, because we take the sidebar and wrap that functionality into the palette menu on your mobile phone. It's kind of cool.
Improved sheet movement in Calc spreadsheets, and some of that more powerful renaming, copying and moving-around functionality. Finally we shipped pivot table support, which is kind of cool, so you can select your data. We could always calculate pivots and show you the results, and I believe refresh them, so if you changed the input data you could refresh an existing pivot. But being able to customize and tweak the pivot table layout, make sure it's exactly what you want, create new ones and so on is of course just very, very much more powerful for people. Range names, managing those, makes things much easier, and of course print ranges, and getting the scope of those things right. We've always been able to calculate with those and load and save them, but it's nice to be able to manage them too. And Calc has also had a whole load of statistical tools brought to it recently, so a very large number of analysis and helpful tools there: covariance, correlation, moving average, smoothing, lots of these things that people want to do, and there you can see the statistics tab. Of course there's a whole load of things you can't see. So, 20% faster in terms of CPU time reduction, and actually quicker getting tiles to people, rendering time, better interactive editing of cells; improved jail creation, we stuff all documents in jails to stop information leaking to other people; better watermarking performance; better performance in Impress, pre-caching slides as you flip around so it's rendering either side of where you're presenting; on-demand thumbnail fetching, PDF rendering, lots and lots and lots of things. Huge investment in Cypress automated test coverage, so many, many more automated tests for just huge scads of functionality that we ship, improving the quality of everything that we write. Helping to debug cluster problems that some of our users suffer when they don't quite understand them, and just catching and detecting that something is wrong with the cluster and showing what's going on there. Lots of mobile things: we've been working to get mobile in your pocket, I think I showed some of that stuff last year, but there's a whole load of new things there, such as bringing the notebook bar to tablets and getting the sidebar out of the way, because I think that really helps to provide more visibility there; on iOS and Android the start of dark mode for the shell at least, the document itself is still waiting. Usage is significantly higher now, approaching a million installs across iOS and Android. A Chromebook version: obviously there's a lot of interest in education in Collabora Online, so being able to integrate with your Chromebook and actually provide a full, feature-rich office productivity suite on ChromeOS that works really well natively as that Android application, so you can take your Chromebook offline and walk around and still edit your documents with a beautiful, modern, rich editor. You'll notice this is in the classic menu and toolbar mode. So from an integration perspective we've done a whole load more too, to try and make things prettier for people. We have a beautiful style guide now showing the colors and iconography and how things should look. But of course other people want to turn things on and off, tweak that and configure it for their own particular setup.
So when you embed us, we have an iframe, and that's a POST, and you send your document credentials through there as normal. We also then allow various configurations to be pushed through there, so you can select which kind of menuing or toolbar style you want and various pieces around the document. You're just enabling and setting various defaults there. It's just very easy; you don't need to configure servers or fiddle around, it just adapts to the environment it's in. And that's interesting because the same online instance can then serve multiple different integrations with different preferences. So your ownCloud and your Nextcloud, and it can currently be serving a Moodle and a Mattermost and all of these different pieces, and Alfresco: the same server can be embedded in all of those and look different in each case, which is nice. It should appear native. Perhaps more important there are the CSS variables for getting the colors right and the theming, and just matching what's there, not just with pre-canned CSS but dynamically, because many products and integrations have their own theming support. So we need to hook into that and make sure we're getting it right. Here are just some examples of the things you can change and set to your preferred colors and so on, trying to adapt to and incorporate all of those, and then you can see a couple of different themes there and the impact that can make. Every little helps, just making it seem more like a native, beautiful part of your tool; that's going to be very, very helpful. And here's just an example of that for Nextcloud. You see the before, where you can see your sidebar but you can't see the status bar, and some of the colors here; and then afterwards, getting the ruler, getting your sidebar, changing the background, the highlight colors looking nicer, and so on, just looking more like, in this case, a native Nextcloud tool. There's more to be done there, but we're getting there. So what's next? Well, the roadmap we publish is really for our partners and customers, and we love to get input from them; we love to see what customers' pain points are, where the squeaky floorboard is, where the shoe rubs, and make all of that nice. But all of our development is done in public of course, and we now have minutes of our public weekly calls. Do get involved; that goes on our forum. You can ask questions, and it would be great to have people there if they're interested, or if you have problems, or you want help on integrations. That's all minuted, and we track there many of the various things in progress. Some of the bits you'll see there: a canvas-based rewrite of our front end, which previously used a map widget, to make that just slicker and crisper and nicer, with simpler code and more TypeScript so we have more typed code. We're also moving away from dialogs that are server-side rendered; we've done that with the notebook bar, and we're increasingly doing that with individual dialogs to make sure it's all rendered on the client side, so all of that integration works really nicely and it's slicker and more beautiful and easier to design for.
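To make the embedding, post-message configuration and CSS-variable theming described above a bit more concrete, here is a minimal sketch; the message names, values and CSS variable names are placeholders rather than the documented integration API, and how the host's palette actually reaches the frame depends on the integration.

```typescript
// Placeholder message and variable names; not the documented API.
const frame = document.getElementById("collabora-frame") as HTMLIFrameElement | null;
if (!frame) throw new Error("editor iframe not found");

frame.addEventListener("load", () => {
  // Push UI preferences into the editing frame at runtime via post message.
  frame.contentWindow?.postMessage(
    JSON.stringify({
      MessageId: "UI_Defaults",  // hypothetical message id
      Values: { uiMode: "notebookbar", sidebar: false },
    }),
    "*"
  );
});

// The host application exposes its palette as CSS custom properties so the
// embedded editor can be themed to match; the hand-over mechanism (query
// parameters, server configuration) varies per integration.
document.documentElement.style.setProperty("--color-primary", "#0082c9");
document.documentElement.style.setProperty("--color-background", "#ffffff");
```

The same served instance can then look native in each product that embeds it, simply by being handed different defaults and a different palette.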
Also on the roadmap: asynchronous auto-saving, so that auto-saves happen in the background as you're typing, and so that if we have an integration that's throttling or busy or stalled, gone away to play with re-indexing its database or something, we don't then block the user's editing; their auto-save continues in the background, and we're doing some significant work to help with that. What's next? Well, your integration. Here's a Moodle one: you can see the beauty there of your essay assignment happening inside your Moodle, all a very smooth flow, and that's working nicely. Another thing that's happened in the last few months is that the relationship between Collabora and LibreOffice has been changing and evolving. There's this awesome group of people, there are many more than this, but here are some of the core people who could get to our last in-person conference in Almeria. There are some great guys there; we don't always agree, but when it comes to Online, which is the piece that Collabora has really been the spearhead of, we're really doing all of the heavy lifting and all of the work there, and that's great: 50 million pulls from that. But we've recently moved all of our development, and all of the developers there have moved their work, to GitHub and a Collabora Online branded piece of work. Now of course the Collabora Online code is only really 2% of the whole combination of LibreOffice and Collabora Online, so the vast majority of the work and the cool stuff, the interoperability, the core rendering functionality and so on, is still done in LibreOffice and totally shared. But having a clear brand story around driving a return on the investment we put in is really important for us. There's been a long-running conflict around this inside the Document Foundation, and we believe the best way to end that and get rid of it is to simply move that piece to Collabora's stewardship, and so we've done that, and we're removing the warnings and reminders for users, the warnings on limits, so they're no longer there in our development edition. Moving to GitHub gives us access to more developers, and we've seen more developers getting involved and contributing, which is very encouraging. It also gives us an incentive to invest in the community as a positive, wonderful thing that is great to work with, so that's cool. And it allows us of course to continue to support and work on LibreOffice as we realise the lead flow from that, and of course it's very familiar to everyone that's working in this space. It's hard to think of a vendor-neutral non-profit that can still preserve that brand building and lead flow through to actually converting people to paying for services and support and making it clear where that goes. So there's obvious simplicity and elegance to having a very clear flow of users through those projects, an obvious place to go for support and services, a clear credit in the product name, and no complicated structures and agreements and discussions and uncertainty; that just gives a clear future there. When it comes to a FLOSS company, the only assets it really has are the cash in the bank and its brand, and of course its staff and the relationships it has. But return on investment is really important and marketing is key.
So we're marketing in a different way; otherwise the people are the same, the community is the same, and we're doing good stuff both at LibreOffice and at Collabora and in that community. And it's of course not just Collabora: there's allotropia now, I guess, and 1&1 and various other people involved with us too, which is cool, and we want to grow a diverse community there, but we really need to get that right around a return on investment and get that working nicely. So here are some of the team, two-thirds of it now; it's been a while. Making open source rock is our mission at Collabora, and well, there you go, there's a Collabora plug for you. So thank you, thank you for coming to the talk, thanks for listening. I hope there's something interesting in there that piqued your interest and that you perhaps would like to know more about, so I'll take questions in a minute. And otherwise, email me, come and tell me where we're going wrong, come and tell me where we're going right. I'd love to get your feedback and see how we can improve and make things better, so that Collabora Online becomes just the most perfect thing for your users, for your integration, and so we can build a business together. So thank you.
|
The Collabora Online code-base can bring the power of LibreOffice into an iframe inside your web app. Come and hear how we've made that power even prettier and more functional for your user's delectation. Collabora Online is super-easy to integrate with content management applications, come see some pictures of many successful integrations. Understand how the product works, scales and can bring a very easy to deploy bundle of interoperable document beauty to your application. See the internals of Collabora Online, and checkout how you can get involved with building, debugging, and developing it. Hear about our growing community, and all the changes we've done to make life better for our users, integrators and contributors in the last year.
|
10.5446/52396 (DOI)
|
Hi, welcome to my talk on designing a human-centric next generation internet, here at FOSDEM in 2021. My name is Jens Finkhäuser. Before I start, let me just tell you this: I started writing the slides for this talk and I thought, yeah, I don't actually have enough content to fill everything up, and at the end I had too much. So rather than take slides out again, what I'm going to do is upload the slides, but maybe I'll rush through some of them a little bit; I think that's only fair. And the first thing that I will rush through a little bit is the introduction. Basically this slide just tells you I'm old. This slide has a little bit of content: I have experience in video streaming on the web through peer-to-peer networks, sorry, not on the web, on the internet through peer-to-peer networks, and that goes back as far as 2006 when I joined Joost. Ancient history; read up on it if you want. Moving on, fast forward a little bit. This talk is about the Interpeer project. Why did I start this at all? After this experience with video streaming on peer-to-peer networks from a long time ago, I got in contact in early 2019 with a startup that wanted to do basically the same thing, but somehow involving blockchain. And while that didn't work out (from my point of view they were a little bit too invested in the blockchain side of things, and I guess I was too much invested in the peer-to-peer side of things, so we parted ways), it gave me the understanding that there is still a need, and also a desire, for peer-to-peer technologies in the world. And that meant I started looking for funding, which I found from NLnet. Unfortunately, just after the grant was accepted, the pandemic hit, and I didn't really work as much on it in the last year as I wanted to. And because of that, because of the lack of income that resulted from that, I also had to look for a part-time job. So I joined AnyWi Technologies. Now, that's interesting in a way that I will get into later. Funding: the Interpeer project is currently funded by NLnet, which takes the funds from NGI Zero, part of the European Commission's Next Generation Internet initiative. NLnet also collects donations, so if you ever have any money left over that you want to put into financing FOSS here in Europe, give it to NLnet; they will try to put it towards the right projects. And also about AnyWi: AnyWi is involved in making smart routers, wireless routers. Because of this, they got involved with two drone projects that are funded by ECSEL, the ECSEL Joint Undertaking, which in turn is funded partly by Horizon 2020 and partly by industry. So these are two projects that I'm active in that relate very closely to the Interpeer project in some details; that's why I'm doing work for AnyWi. And AnyWi has kindly let me publish some of these things in papers, and there's more coming. So even though they don't actually fund the Interpeer project, they do fund, in their own self-interest if you want, very closely related work that's also making it, to some degree, into the public domain and can be used by the Interpeer project as a result. And that's a very nice arrangement to have. So if you're looking for a job that has to do with networking or embedded or drones, maybe AnyWi is the place for you. You would have to work with me though. Good morning. All right. So let's start with a bombshell. The web is dead. Yeah.
I don't know if that's too small, but it has a small tagline that says it will actually stay around for quite some time. This is a section I want to skip over relatively fast, because it will consume too much time otherwise. But it's very hard to understand why I'm so invested in doing something peer-to-peer if you assume that the web is in good health. So I'm running through a couple of quick points on what's really fundamentally contributing to the problems that we see on the web today, in the web architecture. First of all, there's a timeline. You don't have to read everything here. On the left is basically the development of the HTTP protocol; on the right is the development of security and authentication technologies, roughly ordered. And I made red arrows there to sort of delineate the beginning, the middle, and the end of a middle era of web development. And there's a before and after, of course. So I've divided this web history, and you don't have to follow this, into three big eras. In the early web, the HTTP protocol was not massively specified. It got specified after a while, and got a revision immediately after. And it helped establish HTTP as a decentralized protocol. Security at that time was not the highest issue. The focus was pretty much on publication, on publishing data that's meant to be public. So the idea was you run your own server, you upload documents to the server for the world to see. And that's great, but that's not how we use the web today. The dot-com bubble came, it burst. And then it sort of got restarted with Web 2.0. One of the interesting parts of Web 2.0 is not that there were new technologies or anything; that's not actually necessarily the case. But that was an era where commercial entities tried to pull more and more users to the web. They realized that there was money to be made there, but they also realized that the model where everybody who had something to say would run their own web server and publish stuff on this web server, that model doesn't work. What you have to do is take this technological challenge away from users in order to broaden adoption. So in this era you see a lot of security features being added in the form of TLS, new authentication methods, and also the start of a move towards, I'll put this here, HTML5. I'll get to that a little bit later. And this era pretty much ended when containerization, in the form of Docker, came along. What happened there is that the scale of pulling all the users onto the web became too large to handle, or at least too large to handle by the conventional methods. So data center operators, Google, Amazon, all the big ones, started to look for ways to make handling this complexity easier. And so they came up with tools. I put Docker here; you could just as easily talk about OpenStack or Kubernetes. That's not really the point. The point is these are all technologies that help manage the complexity of scaling up the web. There's a feedback loop here, because it started out with consolidating users on a smaller number of services, which led to a larger scale, which leads to complexity. And now what happens is that with the adoption of these management technologies, you have even more consolidation, because the easiest way to do stuff on the web now is by using the technologies that enable large scale. So we come back to the same problem again in a loop. The web protocol has issues.
One of them I sort of outlined in the timeline and skipped over very quickly: if you look at the timeline closely, every security feature that exists in the web stack has been added as an afterthought, whether it's encryption or authentication. And privacy has never been a topic, because the assumption from the early web was that you would own your own web server. And if you own your own web server, then privacy is guaranteed, right? But that's no longer the case. What we can also see in these eras, if you look closely, and I'll leave that up to you, is that the development of new web technologies after HTTP 1.1 and HTML 4, give or take, has only been driven by commercial interests. And commercial interests rarely involve the users' interests, unless you assume that large corporate entities have the users' best interests at heart. And we now end up in a world where HTTP/2 and HTTP/3 are effectively designed by Google for Google-scale data centers. There aren't really many necessary changes in there for the original model of the web, where you would run your own web server. So it has to do with scale. But there's also a fundamental issue that comes with the assumption that everybody runs their own web server. It's about what HTTP is and isn't quite; I'll skip over the slide a little bit. Just understand that in HTTP everything is a resource, every resource has a unique URL, and all the HTTP methods basically operate on an entire resource. And that's completely fine when you have a document to upload: some text, ten paragraphs, even three pages, it doesn't matter; you can upload a whole new document as a change to the existing one. The update mechanism is where things break, and partial retrieval is where things break. Partial retrieval is a GET with a Range header; you can read up on this, I'm not going to explain everything. The point is only that Range headers are not really easy to support, and therefore they're not very widely supported, especially not if you're doing something like running an HTTP proxy with an application server behind it. It doesn't work particularly well. It can be made to work, but it's hard work. On the update side, POST and PATCH are defined in the specs to have a request body whose meaning is effectively specified by the server. So the server implementer decides how to handle a particular media type in the request body. What these things really mean, the update part in particular, is that it's very hard to make generic web clients. We have something like a browser that does something generic, but if you look at how, for example, an update via the PATCH method is done, there is no generic implementation in the web browser for doing PATCH. What you have to do is write JavaScript that then performs the PATCH, and that JavaScript, well, you deliver it with your web server. So you bundle both the implementation on the server side and the implementation on the client side in one go. A generic PATCH as such doesn't really exist. This really must lead to one thing: the server owns the data, because only the server implementation understands how to handle the data. There is no way for a generic client to do this. Historically we have a similar problem with HTML. Read through this briefly if you want, but the short story is that HTML5 has basically given up on standardization. Yes, there is a standard, but it's a living standard, which means a standard that changes all the time.
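To make the partial-retrieval and partial-update points concrete, here is a small sketch; the URLs are hypothetical, but the Range header, the 206/200 status codes and the PATCH method behave as described in the HTTP specifications.

```typescript
async function demo(): Promise<void> {
  // Partial retrieval: the Range header is only honoured if the server opts in,
  // so a generic client has to handle both outcomes.
  const res = await fetch("https://example.org/video.mp4", {
    headers: { Range: "bytes=0-1048575" },  // ask for the first 1 MiB
  });
  const partial = res.status === 206;       // 206 Partial Content vs 200 full resource
  const bytes = await res.arrayBuffer();
  console.log(partial ? `got ${bytes.byteLength} bytes of the range` : "server sent the whole resource");

  // Partial update: the meaning of a PATCH body is defined by the server for a
  // given media type, so a browser cannot apply it generically; the page has to
  // ship JavaScript that knows this particular server's convention.
  await fetch("https://example.org/documents/42", {
    method: "PATCH",
    headers: { "Content-Type": "application/json-patch+json" },
    body: JSON.stringify([{ op: "replace", path: "/title", value: "New title" }]),
  });
}
```

The asymmetry is the point: retrieval degrades gracefully, but updates only work because client and server code ship together, which is exactly what ties the data to the server.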
Coming back to HTML5: most of the changes to that living standard are driven by what Blink, Google's fork of WebKit, supports, and Blink is controlled by Google. It's open source, we can all use it, but you still have one company here with a very big influence on what actually happens on the client side. Now, when I mention this to people, one person responded, yeah, but React and TypeScript, they also have great influence on this. Maybe, maybe so. They're not making a browser engine, though. So for me, that is not the biggest issue here. And there's also a conceptual problem with HTML, because HTML mixes data and representation, and with JavaScript, processing too. So it's not MVC at all. Now, you don't have to believe in MVC, you can use MVVM or whatever; the point is that by bundling the logic of how to manage data with the data, and a backend that can only deal with whatever it has sent to the client, you really are creating data silos. You're not building a web of data that can be processed elsewhere. The semantic web effort is trying to do this, and it hasn't got very far. Well, there's a lot of development there, don't get me wrong, but most developers don't talk about the semantic web as the solution to their problems; in that sense, it hasn't really gone very far. I'm not trying to tell you that the people working on this aren't doing good work, that's not the case, but the adoption is not so great. And yet, HTTP has strengths. This is why it got adopted, and I will highlight this briefly: it's simplicity, simplicity. It's very easy to get started with. The complexity arises later. And the complexity always leads to this issue of centralization, which is pretty much inevitable given both the scale that we see nowadays and the HTML-plus-JavaScript combo that makes it impossible to do smart things like a PATCH method without this mechanism of also delivering processing instructions to the client. So where do we go from there? Well, we can see some kinds of trends here. Every slide is going to be on a slightly different topic here for a second. This one I found particularly interesting; it's only a couple of days old now. This is a response to Donald Trump being banned on Twitter and Facebook and whatever else. Basically, with centralization we also get censorship. Now, I'm certainly not impressed with Trump's track record, but I don't want to get political; I just need to get that out there. But by having communications platforms centralized in large corporations, we have effectively come to the point where corporations have the power to censor heads of state, irrespective of who they cut off. That's a very, very dangerous precedent, and it's something that we have to be very careful about. Second, centralization has become the default for most developers, right? It's not something that people tend to challenge anymore. I got sent this; it's from a comment by Moxie Marlinspike, which looks like it's on GitHub. It's old, right? It's four years old. I got sent this with some kind of highlight there that seems to say that he's happy with centralization. I don't think that's actually the case if you read the words carefully, but it certainly reads as if doing anything other than centralized approaches is too hard. And if this is coming from a very smart guy, I think we have a problem, right? Whatever else you think of him, he's got a track record of doing some very, very smart things.
He's the inventor of Signal and the protocol around it. And if he thinks that doing something like Signal in a decentralized or distributed fashion is too hard, well then our entire field has an issue, basically. Coming at it from a completely different angle than the technological and, sorry, the censorship angle: we have also defaulted to yielding control, right? Even the nerdiest nerds, with very few exceptions, are happy with this. Most of us have a smartphone, and very few of us run free operating systems on it, because it's too much of a hassle. And I sort of blame this trademark tagline there, which is why I modified it. I think I fixed it, actually, yeah? We're no longer looking at how to handle data, how to deal with the data that we have and give it to other people. We tend to look at the channels, right? Which channel allows us to do what? And I think that's a very broken way of looking at how a worldwide network of people, which is essentially what the internet is in the end, even if it's primarily computers, is supposed to operate. Now, I want to get to this quote that everyone knows: the Net interprets censorship as damage and routes around it. I think this was true when it was written in '93, when everything was very decentralized. And it's also important to highlight that the web and the internet are not the same thing, right? The internet tends to route around damage, whether it's censorship or something else. It tends to route around it because that's how the IP protocol is constructed. That's a good thing. But the web, by the very nature of making it easiest to centralize stuff, is no longer routing around damage. In fact, it actually encourages exploitation and censorship. And that's why this quote is pretty important to put in there. So yeah, that was the gloomy, cloudy section of the talk. Let's move on to something slightly better. But I wanted to go through this briefly so that you guys understand. I mean, I run into people who nod along to every point here, and I pretty much don't have to explain myself. But then there are people that say: but it's good, but it's easy, but look at what it lets us do. And they're, of course, also right, no? The truth lies somewhere in the middle. The question is, how can we get the same benefits without these problems? That's what we're trying to solve here at the Interpeer Project. By the way, I just said "we"; really that's just me. But since I'm working with colleagues at AnyWi on something related, and I talk about this kind of problem with a lot of people when I can, it feels like there are more people contributing. Yeah, unfortunately, right now it's me. So the focus, in my opinion, should be collaboration. And in the title of this talk I've spoken about human-centricness. What I mean by human-centric is that it should focus on people's needs, right? We should think of the Internet as something that serves people's needs. And really, what it comes down to is collaboration, which is why this devroom is a good place for this talk. Because a standalone computer without any Internet connection does serve my needs to write text or program or whatever, right? I don't need the Internet for that. But the moment I want to hear ideas or explore other people's ideas together, then we're talking about collaboration, really. So human-centricness and collaboration, I would say, are almost the same thing, not quite, but almost. I want to look at the collaboration part first.
I tried to write down a list of mechanics, what mechanics exist to make collaboration happen. The first one is communication. You can't collaborate without communicating about what you want to do, right? In fact, communication is already collaboration in itself, in the sense that you exchange ideas, you exchange thoughts, you even just touch base with people. That's already collaboration. The next step is that you want to share stuff. You want to give and receive things. And I put digital assets here because it's the Internet we're talking about, but really, in the offline world, it's like giving somebody a cup of tea, right? That's giving stuff, that's sharing. I would like you to view this more as a trade. It doesn't even have to be a mutual exchange, where you give and receive something. But it shouldn't be thought of as sharing in the sense of social media, where you just take something that somebody else has written and broadcast it to your circles. You can do that as well, but the sharing I'm talking about here is more the giving and receiving of things. The other thing is sharing skills. Once you're working together on something, what you're actually doing is sharing your experience, your skills, your point of view, or it's a shared project. And that can also be commercial, so to speak, selling your services. I'm not saying that's the most important thing, but it's certainly part of how we currently live our lives as a society. I think this is what human-centric means, because humans collaborate, right? It's the nature of people to collaborate. Even when it sometimes seems like the opposite, if you look at the state of the world these days, we do collaborate, we do find groups to do things together with. And if you look at ancient history, that seems to be the greatest strength of people as a rule. So what do these mechanics of collaboration require? What sort of features do you derive from them, what are the requirements that you need to fulfil in order to enable these mechanics? Well, first of all, communication, and also sharing skills: they can be real time, right? We have asynchronous communication like email, but something like a chat, something like video conferences, those are real-time communication. Similarly with working together on stuff: I can take a task and do it on my own, or we can use one of those online editors where both of us can type something. It can be real time, so we should support real time. You also need to deal with access control and ownership. I do not want to communicate everything that I want to communicate with the entire world. And unlike HTTP, which only sort of solves this, I think we have to solve this from the beginning; otherwise we don't really support the mechanics of collaboration. We also have to update parts of a resource. I'll get to that in a moment, on the next slide, in more detail, but the resolution of an entire resource is a little bit difficult when it comes to things like video. Video is huge. If you want to change a frame in a video, you don't really want to upload gigabytes of video file. Now, selling services can mean that you want to do something like server-side processing. I'm not saying that's the highest-priority feature there, but this thing that HTTP allows isn't necessarily bad in itself; it's more that the way it does it is a bit broken.
Well, I'm not talking about hard real time, where things happen simultaneously on several computers, which is hard to do anyway, but assuming it would work. What I'm talking about is that you can consume data while it's still in the process of being produced. And that really means that you're producing and consuming chunks rather than an entire resource; that's what this point about a finer resolution is all about. It also means that at the outset of your collaboration, or your real-time thingy, you don't necessarily know how much you're going to produce, so the size is indeterminate at the beginning. So really, when we're talking about real time, we're talking about streaming data back and forth in both directions. And I want to highlight this because I started out talking briefly about video streaming on the web, on the internet, sorry; I use these terms interchangeably, just like everybody else, it's just not correct, it was on the internet. Video streaming is a great use case, because something like 90% of the bandwidth used on the internet is video traffic. It has immense bandwidth requirements, but it also has very tight latency requirements. There's a study that I read that points out how dropping a frame or two in a video stream is actually preferable, in the perceived quality of the stream, to even a split second of buffering. So the latency requirements are immense, and the bandwidth requirements are immense, so it makes for the perfect use case. If you manage to crack video streaming on the internet, you basically crack all kinds of data streaming. Now, I tried to make a table here, and I'm going to update this table, so we don't have to spend too much time on this; we're going to see it again and again. Basically you can come up with solutions to these requirements, or at least partial solutions. For data streaming, you have to use something other than HTTP. As I tried to point out very quickly, it's not ideally suited to handling anything finer than an entire resource, so data streaming is not really something that HTTP handles very well. There are crutches for it, and they're being used, don't get me wrong, but that's not ideal. Access control and ownership you can probably solve with encryption in some way. There are schemes for this; I don't want to get into them. Just for the sake of this talk, let's assume that these are kind of solved problems, relatively speaking. Server-side processing really comes down to remote APIs. I'm saying APIs because an API is deliberately an interface between the programmers who come up with the server-side processing stuff and the users of the API, which are other programs, right? One of the things that helped me in my career is understanding that an API is basically a contract. An API is not something that you change all the time; APIs are something that you change very carefully, in cooperation with the consumers of the API. That's why I'm saying remote APIs here rather than something like REST, which is not really about APIs even if you use it like that. I don't really care about the technology here; I really want to say that APIs are the thing that enables server-side processing, or processing that's remote, if you want to be a little bit more generic about it. I added a requirement here that comes more from the beginning of the talk about HTTP: that we don't really need a man in the middle. I don't mean this in the encryption sense; I mean this in the practical sense, right?
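As a rough illustration of what "real time" means here, consuming chunks while they are still being produced, with no known total size, consider this small sketch; the producer is a stand-in for a camera or a live editing session, and the names are made up.

```typescript
// Toy producer: yields small chunks over time, total size unknown up front.
async function* produceChunks(): AsyncGenerator<Uint8Array> {
  for (let i = 0; i < 100; i++) {
    yield new Uint8Array([i]);                    // a tiny stand-in "frame"
    await new Promise((r) => setTimeout(r, 40));  // roughly 25 chunks per second
  }
}

// Consumer: starts work on the first chunk immediately instead of waiting for
// the whole resource; dropping a late chunk beats stalling, which is the
// low-latency trade-off mentioned above for video.
async function consume(stream: AsyncIterable<Uint8Array>): Promise<void> {
  for await (const chunk of stream) {
    console.log(`processing ${chunk.byteLength} byte(s)`);
  }
}

consume(produceChunks());
```

Contrast this with a whole-resource model, where the consumer could not start until the producer had finished and the total size was known.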
If I send you a file, there is no human requirement for this file to be sent through a server. There are technical reasons that make that the easiest option, but it's not really embedded in the mechanics of collaboration; it's not something that the mechanics of collaboration require. That also leads me to the conclusion that in order to enable this, we really need to move away from HTTP. It's the wrong medium for modeling collaboration as it happens in the offline world. There's another set of problems that also leads to requirements: data locality and the devices that we tend to use. In the beginning, with HTTP, we had the web client and the web server. Basically everything was shared through the server, right? You had two clients connected to the same server, and if they wanted to access the same resource, it went through the server. That's good, because it allows for this kind of collaboration with a centralized instance. But nowadays we have a different problem. Most of us, at least, tend to have multiple devices, right? There's my mobile phone, there's the laptop that I'm streaming, sorry, recording this from, there are other laptops I have here, I have Raspberry Pis, I have a home server, all these kinds of things. Each of them has its own local storage, and each of these storages contains data from me. Somehow we need to handle the fact that my data, the data that has to do with me and is owned by me, is fragmented across multiple different devices. This problem is only going to get worse, because what we also have now is IoT and smart sensors. Now, I had a chip here that I wanted to show you, but I mislaid it. It's basically as big as my fingernail and it's a full CPU with a USB connection. We're now in a world where it's so easy to add devices, really cheap devices, to everything that we have machines recording our data in the form of smart sensors. While I don't have hundreds of them in my home, it's very easy to foresee a future where that is happening, right? And in some homes it's already happening. Some of these actually have a different type of connectivity than the internet: you have Bluetooth Low Energy, you have LoRa, which may make this a bit harder to deal with. Particularly when you look at Bluetooth, most of the Bluetooth devices that you have, you need to connect to. And they're sort of passive devices, right? They're not active clients that connect to a server; they're passive devices, and when you connect to them, you can use their functionality. So this inverts the client and server roles. So maybe this idea of having client and server as strictly divided as in the HTTP world is no longer serving our needs. This leads to a couple of other requirements. One is, with personal devices, a sort of smooth handover between link technologies. My example there is always: if I'm in a video conference and I leave my home network and I'm only in range of my LTE network, I really wouldn't want my video conference to crap out on me. Sorry for the word. I do swear; I don't want to swear on camera. Just did it now, sorry about that. But the point is, in an ideal world that wouldn't happen, right? Currently, I don't know of any service that doesn't at least interrupt the call for a while while it tries to reconnect. Then the smart sensors and the multiple devices also mean we need to be a bit selective about how we synchronize data, right? I have a home server that serves as a backup for everything, basically, and that's backed up to the cloud somewhere.
And I can't pull all the data that's on there onto my mobile phone, right? I want to be able to selectively say: okay, these documents you always synchronize, this document you synchronize on demand, and these things I don't even care about. And that's sort of a requirement for dealing with this fragmentation of data. And, as I said on the previous slide without the red arrows, these different link technologies may mean that we have to move away from the client and server architecture that we're used to from HTTP. So I just added this as a kind of extra requirement, with the solutions for it. For the smooth handover, we need to somehow be able to abstract these link technologies. There are words for it: multi-homing, multi-pathing, multi-link. It's basically all the same thing on some level. Selective synchronization can be solved if we manage to treat resources as something you can subscribe to, individually or as a collection; we need some kind of pub/sub for resources. The different link technologies might lead to the conclusion that you want some kind of overlay network that abstracts out all these differences between the link technologies in a way that IP currently does not. It's not that IP can't do it; it's that the deployment of IP is not always such that it makes this easy, right? And lastly, the dissolution of the strict client and server roles might lead to the conclusion that we need some kind of peer-to-peer technology. I want to talk briefly about drones now. Oh, wow, now the light is coming in, I'm very bright here. Okay, I can't move away from the sun. Anyway, I'm working on these drone projects, and I want to run briefly through what we're doing there, just to illustrate how related the requirements can be between collaboration and just handling drones. The main problem currently being solved is how to enable drones to fly beyond the visual line of sight. In European regulations, and I'm going to do this really fast, there are basically three classes of drones. There are toy drones, which you always fly within line of sight, and there are military drones, which are not covered by this. And in the middle, in the so-called specific category, there is the possibility of using drones for commercial purposes, which might mean pizza delivery to your door or something like that. And the regulations basically require the command, control and communications link, as they call it, to be reliable. And I linked here in blue a paper that I wrote together with one of my coworkers, summarizing these requirements and the technologies that exist, and how pretty much all of them, while they don't completely fail, aren't exactly good solutions to this kind of problem. And the use case we have is handovers; here is a C3 link and handover illustration from AnyWi. We see a drone; the drone is connected to a satellite or to a ground station there, the yellow lightning bolt goes to a mast, the blue lightning bolt goes to another antenna. And when one of these goes out of line of sight, you still need to have a connection, and you get this connection via the other link. It's exactly the same use case as when I leave my house with the video conference on and move onto the LTE network; it's just different devices. And maybe sometimes different technologies, I don't have a satellite link on my mobile phone, but the basics are almost exactly the same.
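A toy sketch of the multi-link idea behind both use cases, with hypothetical types and ignoring sequencing, retransmission and security binding, might look like this:

```typescript
// One logical session over several physical links (Wi-Fi/LTE, or ground
// station/satellite in the drone case). Hypothetical interfaces.
interface Link {
  name: string;
  isUp(): boolean;
  send(datagram: Uint8Array): void;
}

class MultiLinkSession {
  constructor(private readonly links: Link[]) {}

  send(datagram: Uint8Array): void {
    // Prefer the first link that is currently up, failing over transparently,
    // so the application never notices the handover.
    const link = this.links.find((l) => l.isUp());
    if (!link) throw new Error("no link available");
    link.send(datagram);
  }
}

// Usage: the application holds one session object regardless of which link
// carries the traffic at any given moment.
// const session = new MultiLinkSession([wifiLink, lteLink]);
// session.send(encodedVideoFrame);
```

A real protocol also has to keep the session identity and its security context stable across the switch, which is exactly what makes handover hard.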
So I crossed out a couple of things that weren't so important for the drone use case, but that are sort of in the same realm. The streaming requirement is also there: you want to stream command and control information. Access control as such is not as important as tamper protection, and that's a regulatory requirement: people are not supposed to be able to modify command and control messages for drones, otherwise they could make them crash into buildings or something. We don't want that. Ownership of data is also not as important, but what is important is identification, right? Drones should only accept commands from identified ground stations. Server-side processing is not very important. The smooth handover is just as important; it's actually the use case we're working on. The only thing that really doesn't matter at all is this whole selective synchronization thing, but that's just for now, right? If you assume a future where maybe a smart drone collects information from IoT sensors and relays that to a ground station somewhere, maybe that actually becomes a requirement for drones too, but right now it's not really something to talk about. But the other things are also there. Drones cannot really act as a client or a server in all cases. Drones have these different link technologies that you have to deal with. Really, the drone requirements that I'm working on with AnyWi and the requirements of collaboration are almost exactly the same, even if the focus is slightly different in some aspects. So where do I want to go with this? I spoke about collaboration; that's really the thing I want to solve, right? But there are also some... no, actually, here's the list again. I added ease of adoption at the bottom here. In a way that's the last thing, right? We talked about technological problems and how we might be able to solve them in a way that somehow supports collaboration. But one of the things that HTTP has taught us, and that shouldn't be discarded, is that simplicity is key for ease of adoption. If we come up with a massively complicated stack here that nobody wants to use, it's not going to happen. So that presents a challenge, because this entire thing of multiple links and resources that you subscribe to and access control becomes complex by virtue of fulfilling all of these requirements. So here we have a bit of a conflict, and that's a tough one to crack, but hey, I like challenges. The vision is basically nothing other than what we already do, just better. We have to assume that our data, the data that concerns us, that is owned by us, lives on many different devices. But we also want it safe from malicious access. We must be able to access our data at any time. If we want to access it in parts, it should be fast. If we want to access it in full, then we can maybe accept that it can't be super fast, because it's remote and everything has to be pulled over; but at least the partial access needs to be fast. It would be nice if we can access our data from any device, even one that's not our own. If I use my friend's phone, and I use my private key to identify myself, why shouldn't I be able to pull some data from other devices that I own, or from the cloud, that belongs to me, so I can use the phone just to do my own thing, and then discard it all again? We have to be able to share and collaborate with our data. And that's basically just writing down the requirements from earlier, the mechanics from earlier.
We also want to be able to selectively allow processing nodes access to our data. My example here is printers, right? If I want to print a document from my phone, I don't really want to go through a print server. I don't really want to... sorry, I said from my phone, that's the wrong use case, I mean something else. I have a document somewhere on a server. I don't actually want to have to open the document and download it to my phone in order to print it on the printer next to me. What I really want to be able to do is send an authorization to the printer over the local network so that it may, maybe for a limited time frame, download the document itself and print it. That would be the ideal way to deal with devices that process our data temporarily. So, one of the things that I haven't highlighted enough here is that HTTP, and the server-functionality part of HTTP in particular, is an application-level protocol, but it is being pushed well beyond that role. Our adoption of HTTP as something that approaches infrastructure is really quite broken. What we really need to do, and that's my opinion here, that's why I'm talking to you guys, is focus more on infrastructure for collaboration. We need more infrastructure protocols. That's really what I'm trying to do here with this whole project: build a stack of infrastructure protocols that allow us to then build applications on top, without tying the two together in the way that the web does. Very briefly here, philosophically, we have to distinguish a little bit between peers as hosts and peers as people. People tend to conflate those, and it's not the same thing. The host-oriented part of peer-to-peer is something that enables the human-centric part. That's really all I want to say here; if you want, look at the slides afterwards. I have to hurry up a little bit, there's not so much time left in my slot, and there's a lot here. I want to talk about what I've done so far in this, and where the progress is. There's a small library; I'm skipping most of this. It's just a platform abstraction library that I've been building along the way. You can use it, you can add to it. It's nothing magic, but it's nice and small. There's Packeteer, which is an event-based socket library. There are plenty of similar projects out there; their focus is a little bit different, and when I started this, which is a long time ago, they didn't really exist in quite this way. That's why I stuck with it. I'm not saying you need to pick this for your projects, but that's what I'm going to use going forward; check it out. There are some problems with it. The Windows port works, but it's a little bit buggy; I need to rewrite that a little when I have time. The POSIX part is fine, but it could use some cleaning up. I would like to add scatter/gather I/O as much as possible; it's really not very important right now, but it would enable slightly higher-performing servers. There are tweaks and extensions that I want to add to it, but it's largely finished; if you want to play around with a packet-oriented network library, you can use this already. Channeler is the library I'm currently working on, and it's very much work in progress. It's really trying to do a multi-channel protocol, much like the channels in HTTP/2 and HTTP/3. The point is, with P2P you may have to penetrate NATs and firewalls.
Once you've done that, you really want to stick with this one address-and-port combination to talk to. In order to then multiplex different data streams through this one connection, maybe it's best to have provisions for some kind of channels in the protocol, so that it gets easier. That's what I'm working on there. It's also a bit of an abstract protocol that is very ready for extensions. One of the extensions I'm already thinking about is how to add encryption to this, as part of the requirements. I'm looking at doing it in more or less the same way as WireGuard does. There are problems with WireGuard in this context, but that would really derail the discussion, so I'll skip over this. There are also provisions there for adding the multi-link stuff that I'm doing with AnyWi, basically. I want to add some channel-specific settings for how to deal with resends and reliability. It's currently UDP-based because I want to be able to ship it in apps, basically; I want people to be able to build applications with it right now. The way I'm working on it, it should also be possible to run it directly on IP or on Ethernet. That is not really necessary at the moment, but I'm sort of looking at a future where maybe that becomes interesting. Maybe, hopefully; that's the whole point of the Next Generation Internet Initiative. I'm staying focused on this a little bit. I spoke about multi-path, multi-link: with AnyWi, we work on this as part of the ADACORSA project. I've prepared a paper that I need to finish in the next few days, actually, focusing on the abstract protocol, the messages, the state machines. It should be finished in draft this month; I'll probably make a pre-print available, and then it should be published sometime in the middle of this year. That's really the model that I want to use for the multi-link support. That's pretty much where we are. I think, for a year where I barely could work on it because of the pandemic, it's not actually terribly bad; at least that's what I try to tell myself to salvage things. The future: where do we go from here? Let's assume that this problem of efficiently streaming data between peers is solved, which will still take some time, including the multi-link part and the multi-channel part. We need something like a distributed hash table in order to find peers in the network. I didn't put this on the slide, but we also need the whole NAT traversal thing, STUN, TURN; there are implementations out there, but maybe it's better to add them to the basic protocol so that it's not too much of a collection of different things. The next part that can be layered on top of the bare data streaming, if you want, is a streaming protocol much like PPSPP. That has two modes. In order to access a resource that already exists and is finished, you can use a merkle tree to identify specific parts of the resource and download them in order to complete the picture that you have locally. But there's also a streaming mode. While some of the things in the specification don't really fit anything other than TCP at the base, it has plenty of interesting ideas that can be adopted. Lastly, I put a distributed file system here. This goes back to simplicity. I suspect that the API most programmers will be most comfortable with is file I/O. We have things like local sockets, which are file-like in how they're presented but actually are streaming connections, if you want. A file-system-like API shouldn't be exactly like a file system.
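Purely as a thought experiment, and with interfaces invented for this write-up rather than any existing Interpeer API, a file-system-like API over remote, subscribable resources might look roughly like this:

```typescript
// Invented interfaces in the spirit of Plan 9's "everything is a file";
// not an existing Interpeer API.
interface ResourceHandle {
  read(offset: number, length: number): Promise<Uint8Array>;   // partial retrieval
  write(offset: number, data: Uint8Array): Promise<void>;      // partial update
  subscribe(onChunk: (chunk: Uint8Array) => void): () => void; // live updates; returns an unsubscribe function
  close(): Promise<void>;
}

interface PeerFs {
  // Paths name resources owned by a peer, wherever that peer's data happens to live.
  open(path: string, mode: "r" | "rw"): Promise<ResourceHandle>;
}

// Usage sketch, tying back to the earlier printer example: a device with a
// short-lived authorization could fetch just what it needs, with no server in
// the middle.
async function printRemote(fs: PeerFs): Promise<void> {
  const doc = await fs.open("/peers/alice/documents/report.odt", "r");
  const firstChunk = await doc.read(0, 64 * 1024);
  console.log(`spooling ${firstChunk.byteLength} bytes to the printer`);
  await doc.close();
}
```

The appeal of this shape is that it keeps the programming model familiar while hiding the multi-link, multi-channel and discovery machinery underneath.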
Maybe that's the solution for modeling how we access resources that belong to us: resources that are on other machines, streams to other machines or to the people who use them. There is a precedent for this. I'm very glad to know that it's gaining in popularity, and that's the Plan 9 operating system. It actually has a 9P protocol that is a remote file protocol. It does nifty things like exposing the screen of your machine, or a window, or a mouse pointer as a device that you can read from. It makes using what the hardware provides very, very easy. Maybe that's the direction we have to go. It's still a bit fuzzy in my mind, it's something I have to think about a bit more, but I have some ideas in that direction. I'm already over time, which means the Q&A section is going to be a bit shorter, but let's stick with this. Quick note on how you can contribute. Very easy. I'm not very used to something like presenting this project, this talk is the first, so help with design and design or identity guides is going to be appreciated if you are not technical. Website and documentation templates would be appreciated, because that's just not something I'm very good at. Any feedback you can give on the protocol work, and that means reading the papers, looking at it, or just talking to me, that's going to be appreciated. The project is reasonably financed. I'm not standing here asking for donations. In fact, I'm not really set up to receive donations; that's all taxable income for me. But I would like to start a foundation or a nonprofit or a public interest company, whatever works, so that we can accept donations and scale this up, because one of the things that would be very useful would be to have a handful of developers working on this full time. I'm not even managing to do full time myself right now, but that's sort of the thing I would like to work towards in the next year. If you have fundraising experience for this kind of thing, then yes, please. That's the contribution I'm looking for the most in a way, because that would allow me to scale this thing up in the way that I think it should be. And that's it before the Q&A section. I just want to point out there's a website. It's basically just a placeholder. There's my email address. You can send me an email. I will try to respond at least. The last URL is a blog where I talk about protocol design a little bit. It's not massively large yet and it's very, very rare that I post something new, but it's there. And now we're at 53 minutes and I have to stop. Thank you. Thank you.
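Because the talk ends on the idea of a 9P-like, file-system-flavoured API for remote resources, here is a rough TypeScript interface sketch of what such an API could look like. Every name in it is hypothetical and does not come from the Interpeer codebase; it is only meant to show how open/read/write semantics could hide whether a resource is local, on a peer, or a live stream.

```typescript
// Hypothetical file-like facade over peer resources, loosely inspired by 9P.
// None of these names exist in the Interpeer projects; this is a thought sketch.
interface PeerResourceHandle {
  readonly path: string;
}

interface PeerFs {
  // Resolve a path that may live on another peer, e.g. "/peers/alice/docs/report.pdf".
  open(path: string, mode: "r" | "rw"): Promise<PeerResourceHandle>;
  // Reads work the same whether the backing store is a local file,
  // a remote blob, or an ongoing stream (a screen, a sensor, ...).
  read(handle: PeerResourceHandle, offset: number, length: number): Promise<Uint8Array>;
  write(handle: PeerResourceHandle, offset: number, data: Uint8Array): Promise<number>;
  close(handle: PeerResourceHandle): Promise<void>;
}

// Usage idea matching the printer example earlier: grant a device time-limited
// read access to a document by path instead of downloading it to the phone first.
async function printRemote(fs: PeerFs, docPath: string): Promise<void> {
  const doc = await fs.open(docPath, "r");
  const firstChunk = await fs.read(doc, 0, 64 * 1024);
  console.log(`would stream ${firstChunk.byteLength} bytes to the printer`);
  await fs.close(doc);
}
```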
|
The Interpeer Project attempts to provide the technical underpinnings for a human centric next generation internet. As sensors and compute nodes are now (close to) ubiquitous, it follows that there is no longer a static or traceable relationship between ownership of a physical processing unit and the personal identifiable data it processes. A future internet architecture must take this into account, whilst respecting and protecting user's privacy and data protection concerns, also from a regulatory point of view. At the same time, sharing data in this proliferation of processing units also favours distributed approaches over the web's decentralised architecture. This session outlines the future the Interpeer Project envisions, and reports on achieved outcomes to date. The Web was designed with sharing of textual information in mind, and has by now outgrown this purpose. In the academic context in which it was conceived, and considering the technical constraints of the time, it made sense to design a centralised protocol for up- and downloading documents to a server managed by an institution or company. The Web has long evolved away from this, and added authentication, authorization and encryption as natural afterthoughts to the original design. Nowadays, we share more than documents. Our concept of sharing has evolved (for better and worse) from only adding to the public domain to selectively trusting groups or individuals with specific pieces of information. While services built on web protocols can and have modelled this new concept, it remains difficult to do well. This and financial incentives combined push developers to instead adopt half measures, whereby a central instance - the webserver and its legal owners - acts as intermediary to the process, weakening the sharing model and exposing it to commercial, state or criminal exploitation. There are excellent organisations fighting to amend legislation to close such loopholes. The Interpeer Project recognizes that aside from the legal struggles, the practical consideration remains that it is much simpler for developers to build centralised, vulnerable products than those safe by design. It aims at making safe data sharing applications as easy or easier to build by starting from the ground up, and embedding the security and synchronization concepts for a highly distributed network directly into a new and open protocol stack. Empowering users in the context of this project means enabling them to do everything they're currently used to and more, but safely and without the strict need for centralised infrastructure.
|
10.5446/52397 (DOI)
|
Hi, I'm Sylvia Makovey from XWiki and today I'm sharing some of the lessons we learned while collaborating through global pandemic, the challenges we've been facing and the changes and the tools we implemented to address them. But I'm really aware there's no one size fits all. I just hope that sharing our experience will prove useful for teams navigating similar challenges. I had contemplated looking into this subject since mid-December as I was thinking about the impact that the current health crisis has had on collaboration. The irony is that a couple of weeks later and a few hours after I had submitted this topic, my COVID test also came back positive. I had a fairly easy form but as I'm still recovering, it's put things further into perspective. When we look at crises like the COVID pandemic, there are a big reminder of the importance of collaboration in all walks of life. For many of us, being able to collaborate remotely has been in many ways a privilege. Prior to XWiki, we've been working distributively, building open source software for 16 years. More specifically, we've created a tool that helps team to capture and organize their knowledge. Some of us were working remotely full time, other colleagues opted to go to the office. The majority of the team chose to blend the two, depending on our preferences and needs. Almost a year on, we see that collaborating remotely during a pandemic is not business as usual. Teams need to work well together and adapt to the new circumstances. However, a crisis of this magnitude has many effects, including anxiety which can take a big toll on our emotional and psychological health, potentially snowballing into burnout. So what have we learned so far? Number one, the importance of creating safe spaces. Feeling like you need to have your guard up can be exhausting and take an emotional toll, which is why it's so important to create a safe space for everyone. A safe space is an environment where people feel supported and respected. It's not about limiting free speech and opinions that may not align with the general consensus. Quite the contrary, it's a medium where people may freely ask questions, challenge assumptions, and be vulnerable without the fear that they might be judged or look foolish. Number two, communicate and reinforce goals. If you want people to work collaboratively, it's so important to make the organization's goals clear and involve the team in defining them as much as possible. We also need to know how our personal work contributes towards achieving those goals. Even if things look crystal clear on your end, they might not be as obvious to the team or the community. Assuming the objectives were clear to begin with, though, people may still wonder whether the changing circumstances have had an impact. In our case, the company newsletter is a monthly blog post on our wiki where we share updates coming from all teams and the community, and it really helps us stay up to date on the company life, particularly across team, since there are less of those serendipitous run-ins that we had in the office. Because we can't know what we don't know, it's important to also ask the team regularly whether they have questions and try to address them. At Xwiki, we implemented a form that people can fill in to send their questions and feedback, be it anonymously or not. Then we try our best to answer them through our own hands and through the newsletter. The third lesson was being mindful of the common issues. 
There have been particular challenges that have affected multiple people inside the team. Caregiving, homeschooling, and the general increasing needs of home life have been particularly difficult. On-boarding interns in a fully virtual environment is also something that we hadn't done before and we all needed to adapt to. Number four, staying connected with one-on-ones. Now, the above being said, when we don't see people in the office, it's so easy to make assumptions and fill in the gaps about their work and personal circumstances. However, everyone's personal circumstances are unique. If you take a moment to think about how this crisis has affected you, you know that people can't begin to understand unless they're explaining some of the details. So taking time for one-on-one discussions is essentially if you want to make sure that each team member is getting the tailored help and the attention that they need. It also avoids the trap of oversimplification and putting people on in fixed boxes like parents or students. Number five, be flexible but avoid the trap of always-on. Demands for flexibility have been around for decades and according to the flexible work report published a couple of years ago by Zenithit, 77% of employees consider that flexible work is essential for their work. So we can assume the number could have only gone up in the current climate. And this is not an easy one, since the biggest challenge around flexibility has been finding balance and it's not a marginal issue. Finding balance is the aim of flexibility in the first place. So with technology, it means we can work from home and vary our hours to the point where we can end up living in a 24-hour workplace where there's a continuous pressure to make yourself available. At Xwiki, we've been working flexibly for 16 years and some of the things that have worked for us have been having a set of core hours that people should generally be around for, although we're also flexible about that, depending on the circumstances, using a calendar so colleagues know who and when they can reasonably expect you to be around in case they need you, using our own tool to document things and to stay organized so people can find information when they need it, especially if some of us are offline. And last but not least, encouraging the team to communicate and find the way that best aligns with their work needs so that could be both in sync or async communications. The teams do need to prioritize asynchronous work, not only because it gives everyone more flexibility and time off, but also because it makes long stretches of uninterrupted work possible to get in the state of flow. Number six is pretty obvious. Tools are important. Doing a great work remotely on the long run requires some essential tools like a good computer with a working internet connection, home office equipment and software solutions that will allow the teams to store and organize knowledge, plan their work and to communicate. At Xwiki, we've always preferred to work on laptops to give us more flexibility. So on this front we were covered. When it came to office space, some of us already had one at home while others had to adapt. Since we're not planning on going back to the office in the next months, some of us also took our equipment back home, like this beautiful chair behind me. Software wise, we were pretty lucky since we like to eat our own dog food and use Xwiki, but also because we'd always been in favor of a remote first approach. 
One particular challenge we encountered during the period though were the skyrocketing costs of the utilities. Working from home is not cheap. Better internet, electricity, heating, even coffee and snacks can easily add up. These came up in our feedback forms, but we also saw them on our own bills. So together we decided to introduce a monthly work from home stipend. Number seven, being respectful of privacy. The results of a recent poll conducted by the Fenwick Privacy and Cybersecurity Group suggests that companies are struggling with remote work security and data protection practices. Close to 90% of employees are now handling intellectual property, confidential and personal information inside the home. So it's no surprise that organizations are making effort to improve on these points. What's concerning though is that some companies actions push the boundaries of people's privacy. Like in the offline world, I've discussed with plenty of decision makers who said they would love to do remote work in their organization if only they could record everything that they team did. Well, there has to be a better way to do things. Monitoring can take various forms. It could include reading emails, tracking online behavior, taking screenshots, physical location tracking, webcam surveillance, or even taking photos of employees at their desks regularly. Monitoring is specifically concerning when people work from home as there is a naturally increased expectation of privacy. The home is particularly protected and it also serves other functions and cannot therefore become solely a workplace. Knowledge of being tracked can also be stressful and demoralizing and making people agree to being tracked does not constitute valid concerns. When the home is being shared, the potential harm can also extend to family members. Research on productivity while working for a home suggests that people tend to be more, not less productive. The bigger challenge actually being the inability to disconnect. So it seems like common sense that people should be given objectives, trusted to do their best work, and evaluated based on those results. Even so, doing performance reviews is very hard and that's number eight. How does one even begin to evaluate performance in such times? How do you stay fair when everyone's circumstances are so different? I've previously stated that the objectives of any company could have changed during this time or any communities. Shouldn't that also apply to the individual objectives? Can we reasonably ask people to meet targets that were defined and assigned before any of this began? These were all questions that we struggled with. We knew some companies had decided to cancel evaluations altogether, but we chose not to pursue this path. We saw these discussions as a good opportunity to exchange feedback on how the team felt about the last year, but also the work they had performed and how they saw things progressing. Number nine, getting together and celebrating. We really enjoy getting together and doing team buildings. Celebrating important events or just hanging out. So for us, it was really important to continue these traditions as we moved fully online. Our Rosemate HR team organized virtual egg hunt, our team activities, Halloween challenges, quizzes, and even managed to get Santa and Mary close to go live and meet our families. One thing that we really missed though was getting coffee together in the morning or staying late for drinks and chats. 
And for that, we now have Tuesday virtual coffees and Thursday "thirst-day" drinks, pun intended. These events are opt-in, so people should only join if they feel like it. Secondly, it was important for some of the activities to be async so the team could join at their convenience. Number 10, this one is essential. We need to take care of our health, as I've seen first-hand. COVID brings many concerns like the fear of falling ill, being socially excluded or quarantined, fear of losing our jobs, a concern for our loved ones, being pushed in just too many directions. And responding to the demands of lockdown life can be physically and mentally taxing. And symptoms like anxiety and depression can also be common. At the end of the day, you may feel like you don't have the emotional bandwidth or the mental health space to take on more projects or to do creative work. Really, health does come first. Now, all these points might seem obvious. And that's because they are obvious. We can't reinvent the wheel in 20 minutes. But it's often the evident things that get overlooked when you strive to work well together. And my final point is that we should also remember and keep honoring open values. Open source communities are organized around the values of trust, of transparency, of open dialogue and feedback, enabling growth, learning and resilience. At the moment, it could be increasingly tempting to do things on our own. But it's by honoring these open values, and collaboration, that we'll get the best results. Thank you for watching this. And although this is prerecorded, I should normally be around if there are any questions starting now.
|
With the global pandemic, many teams have turned to remote work on a full-time basis. Prior, the XWiki team had been working distributively, building open-source software, for over 15 years. Some of us were working remotely full time. Other colleagues opted to go to the office. The majority of the team chose to blend the two, depending on their preferences and needs. Almost a year on, what seemed a temporary change feels like a permanent arrangement. Being able to work entirely remotely has been, in many ways, a privilege. But collaborating during a pandemic is not business as usual. This talk reflects on the specific challenges we faced, the changes and tools we implemented to address them, and the ways we tried to support each other through this uncertain period. While there’s no one size fits all, we hope sharing our experience will prove useful for teams navigating similar challenges. We’re in this together.
|
10.5446/52399 (DOI)
|
Hello everyone, welcome to the talk about how to make a public website with Xwiki in 20 minutes. I'm Anca Luca, I'm a web developer and I have been working in open source with Xwiki for about 13 years. For those of you that don't know Xwiki, it is a wiki platform, an enterprise wiki platform with all sorts of features for use in the enterprise, to make knowledge bases for people to work better together, and it has some extra structuring and some extra customization capabilities that make it very easy to build all sorts of stuff on top of it, like small applications or various customizations. Today we're going to look at the presentation features that allow us to make an easy public website, but if you're interested in more, you can check out my talks from the previous years, where I showed how to make an intranet quickly and also explained how to use it as a development platform. This is the xwiki.org website; if you want to download Xwiki, you just go to the download screen here and you have multiple choices, you can install it as a Debian package or a Docker image, so it's very easy to set up Xwiki, but you can also download the zip, unzip it and then start it from your own computer. This is what I have done, so this is a standard Xwiki, this is how it looks when you take it out of the box. Now the use case we're going to look at is the use case of a public website for documentation of a software project, yeah, it's kind of the easy example. So first we're going to create, for example, a couple of pages, let's say I want to create a guide here, this is my page with content, and I want to also create some sub-pages here, like admin guide, for example, and developer guide, right under guide as well, I'm going to create a developer guide. So this is how we create content, it's really easy, you create pages, you fill them in with text, all this is really normal and really regular for every usage. And I will have here the tree of pages that I have created, but I'm not happy with this UI because it's too much oriented towards applications, towards collaboration, towards pages. I want to turn this into something nicer, so maybe I would like to keep the navigation bar, but then maybe I want a banner here with some advertisement for some features and the menu underneath it, and I want to clean up all this stuff here on the right and on the left so that it's a lot friendlier and nicer and sleeker. At this point it's a little bit full of stuff. So I'm going to start with the simple things, I'm going to completely get rid of the panels, I'm going to miss this navigation here because I'm going to completely lose it. So getting rid of the panels, then we're also going to get rid of the stuff on the bottom, we're going to keep the comments and we're going to keep the history because these are interesting for everyone, but we're going to hide page attachments and we are going to hide the information panel. So at this point, I have a better and cleaner screen. However, I would still like my stuff on top of the page. So what I'm going to do, I'm going to go into administration and we're going to set up a menu. We're going to call it top menu, for example, a top menu, and actually this is really easy, the creation of a menu, because it's just a WYSIWYG editor with a list and with links inside this list. So I'm going to remove this here and I'm going to add a link to the page with the guide that I just created before.
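Everything in this walkthrough is done through the UI, but the same page creation can also be scripted. If I recall the XWiki REST API correctly, a page is created or updated with a PUT on /rest/wikis/{wiki}/spaces/{space}/pages/{page}; the TypeScript sketch below assumes a local instance at http://localhost:8080/xwiki and admin credentials, so treat the exact URL, credentials and payload as something to double-check against the REST API documentation of your version.

```typescript
// Sketch: create the "Guide" page over XWiki's REST API instead of the UI.
// URL layout and payload follow my reading of the XWiki REST docs; verify
// against your own instance (assumed here at http://localhost:8080/xwiki).
async function createPage(space: string, page: string, title: string, content: string) {
  const base = "http://localhost:8080/xwiki/rest";
  const url = `${base}/wikis/xwiki/spaces/${encodeURIComponent(space)}/pages/${encodeURIComponent(page)}`;
  const body =
    `<page xmlns="http://www.xwiki.org">` +
    `<title>${title}</title>` +
    `<syntax>xwiki/2.1</syntax>` +
    `<content>${content}</content>` +
    `</page>`;
  const response = await fetch(url, {
    method: "PUT",
    headers: {
      "Content-Type": "application/xml",
      // Placeholder credentials; use an account that has edit rights.
      Authorization: "Basic " + Buffer.from("Admin:admin").toString("base64"),
    },
    body,
  });
  console.log(`PUT ${url} -> ${response.status}`); // created vs. updated
}

createPage("Guide", "WebHome", "Guide", "This is my page with content.").catch(console.error);
```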
So I'm putting here the guide and if I want to create sub menus, I just create sub list with the same kind of stuff, admin guide and the pages, I can choose it like this, for example, admin guide. Done. Well, done. I could also put dev guide, but that's about it. It's not done yet. I also have to choose where this menu should be displayed under the page header and it should be visible on the whole wiki. So I have my menu that is displayed here. So make a quick refresh because it's not really the proper color. So I have my menu that is displayed here. Now what I said is that I would like a banner with some advertisement also on the top part of the screen. So for that, I will install a new extension which is called xcarousel. There's a lot of extensions in xwiki that are already built that you can actually install and use to solve various problems from content to presentation. This one is a presentation one. I will show you later some content extensions. So I installed a carousel application. If I go here to the application index, I will see my carousels and I'm going to create a new carousel that is called top carousel. Yeah, all is top and all is quite straightforward in terms of naming. So I'm going to configure a slide. Let me put an image that I have somewhere here like this one for example. Done. We're going to put a title to it, some size of the title and the color. So I can configure lots of nice options. It's quite cold and dark. Lots of nice options that allow me to actually customize lots of things related to how it should be displayed so that I can do it the way I feel it. I can also put a link. I can also make this carousel entry link and I can also add additional slides. For now, I'm just adding one slide. I have activated autosliding, so this will happen as it should. Let me delete actually these two. So I have a beautiful carousel, but it's not yet on the top here between the navigation and the menu. Now, normally I should be benefitting. I should be using a functionality of X-Wiki which allows you to have to insert various things in various places of the UI. But this menu has already done it. It's already using the UI extensions that insert stuff in the UI, so I will just put my carousel in the menu. For that, it's actually a hack. Normally, we do it a lot cleaner. I would go to my menu page and I would edit it in object mode. I see the famous thing that I told you about the UI extension stuff that allows me to add new functionalities. And I'm just going to hook it here. Top carousel. That's how you use a carousel. You make it, you configure it as I showed you, and then you use this wiki macro. This is a wiki macro in order to configure it where you want. So it's done. It's on. My home page so far, it's almost nice. What I don't like yet, it's not yet perfect, is that the colors are a bit too blue. I would like to make it a lot whiter and cleaner like a public website. So in order to configure that part, I'm going to go to the look and feel, to the themes, and I will customize this theme, the standard iceberg theme. First of all, I will choose a logo that is blue. This one. So I'm going to choose a logo that is blue. Then I'm going to change some colors here in the navigation bar. So what I'm trying to do is actually I'm going to reverse the colors here. What was blue, I'm going to set it white, and what was white, I'm going to set it blue. Done. And this one also needs to become blue. Okay. Save and view. Let's see how it looks. 
I need to refresh it hard in order to re-take all the CSS. It's nice. I have some shadows here that are quite ugly, and I'm going to take advantage of one feature of the color themes of Xwiki, which is that I can do my own CSS here. So this correct CSS would be something like, actually, you discover this by looking at the markup and the styles and everything that is needed, but I have already done that. So I know that this is the class that I need to modify, and I'm doing it now. I'm setting both because they are both set, and if I don't set both of them, one is going to overwrite the other, and it's still not going to work. Okay. And I'm going to hard refresh in order to get the new CSS, and now it's all clean. It's all white. This is quite beautiful and quite nice. So far, this is quite nice. So what I would like to do next is to, I would like to change this homepage because I would like to put my own stuff on it. For example, I'd like to put this video in the middle with a title, get rid of all this navigation and the title on the homepage because this is quite redundant at this point, but I'm going to leave these parts here on the bottom. So in order to do that, I will do, I will edit this in wizarding mode because it's the simplest. I'm going to get rid of this content. This is probably two. So if I want to go through the wizarding, I can go through the wizarding, but I can also go in the source, in the wiki code. This is wiki syntax. Basically, if you don't know it, you will have an opportunity to learn it. It's not very difficult. Actually, so I'm going to do a bootstrap row with a column of 12. Xwiki is built on top of bootstrap. So whatever markup you like from bootstrap or whatever components you like, you can reuse them and you can actually make lots of cool things without having to write too much CSS. So what I did, I added a row, a column of 12 positions, a title, the video here as it was before, and then I'm leaving on the last row that had two columns. Let's see how it looks. It looks not that good. Actually, apparently I messed up the markup a little bit. Let me check really quickly because I forgot about the velocity macro that is supposed to be closed here. Done. Now it looks quite nice. So next, what I want to do is I want to get rid of these things. As we saw just before, I can put some CSS in the color theme, but what's even nicer is that I can actually put CSS in the page itself using this mechanism of objects of type, style sheet extension. This page already has one, but if it's not there, you can add your own. So what I'm going to do is that I'm going to add some CSS for the making the hierarchy document title and the line after the document title completely disappear. I'm going to refresh this. And so this is my website, my public website, but it's quite nice so far. So I have a home. I have the navigation that I guide completely. I'm controlling fully the navigation. If I want to update this menu, I can go in the administration and update this menu. Now, what I would like to do next is to add some more data, some more information to my public website. I would like to add some tutorials. But since tutorials address to a certain type of public or people with a certain experience, I would like to add this structured metadata to that, to those pages so that people can find and filter the pages easier. So I'm going to use the app within minutes capability of X-Wiki. 
It's a functionality that allows to create a structure and to have pages that are structured, not just regular pages. So I'm going to create my application. I'm going to say that each tutorial will have a title just like a normal page will have a content just like a normal page. And in addition, it will have some metadata that would characterize this page. I can add lots of stuff, lots of different stuff. But for this example, I will only add a static list with the level of the public. This address is to, for example, we're going to call it a level or more like an audience. Let's call it an audience. And we're going to say we have an options that are devs. We have another options that are users, like developers or regular users. This is validated like this. On the next step, we can configure how this content is created. And on the last step of this wizard that allows to create an application, we will configure the homepage of this application. That's it. We don't have other metadata. So I added the column for the audience. Now I have my application. It looks like this when I'm a content creator. I'm going to show you right away how it looks to people that are not content creators. And I can add a new tutorial, how to create a public website. As you can see, I have the title just like I had before. I have the content where I can type the content of my tutorial. But in addition, I have this metadata that I can use in order to qualify better the content that I'm adding. So now if I go to the homepage of the tutorials, I have this here and it allows me to see the data easier. If we're going to check how this looks for a regular user, I opened the private navigation window. So this is how it looks for a regular user. We haven't put it in the menu. So let's put the tutorials in the menu as well next to all the other stuff. Edit here and let's put the tutorials so that they are accessible to people that only have the navigation menu to go through. So if we go back to the guest, they have the tutorials link here and they can see the tutorials. And besides the fact that they can access it like this through the table, they can also filter it through the metadata. And this can be very interesting and something that you cannot really do with just content. The search is still here. So if I type in the search public website, I can still find the content. So the fact that it's a structured content does not prevent me to find it properly through the search and everything. So now this feature of structured data was already used multiple times by the community to create all sorts of interesting applications that already exist and that you can install. So if we look for the blog here, we find a blog application that already exists and that we will want to use for our public website because the blog is very interesting for a public website. So let's install this. It's done. It's installed. While we're at it, let's also add it in the menu here. Let's put it here. As you can see, this is really easy. We just create links and we put them in the menu in order to guide the navigation as we want it to. So now we can go in the blog. We already have a blog post, but we can create a second one, a second post, for example. And this is a structured application like the one I already created that has some content and some metadata. I'm putting just random content here, random content, and let's also publish the blog post so that we have some nice stuff. 
Now we have a blog that is accessible from the menu, but what I would like is to also add this blog or the list of articles on the homepage. So what I can do is that I can edit this homepage. I'm going to make three columns, not two, and I'm going to add a column for my blog on top of these two. So it's going to be to the left. Let's call this blog. Sorry, blog. Let's make it a link. So this is WikisynTax. You can learn it from the manual or just get used to it by using it. And if I remember correctly, the macro is something like this, blog post list, and layout cards. Something like this. Let's check. So we have our third column with the blog and dynamically the blog posts will show up here shown on the homepage. Now, one more thing that we may want to do if we want this site to be public is to set up the rights. So if we want to check the rights, we can go in the rights section of the administration. I'm going to say that if we want unregistered users to post comments, we're going to require a capture because that's best. If and we will also say that unregistered users cannot edit the page regardless of how the rights were set. We're also going to check the rights here. Unregistered users don't have view, but then again, nobody has views. So this is by default the behavior of Xwiki when nobody has view. Everybody has view. And we're also going to set comment to unregistered users here. And I think that nobody has a register rights. So guests also have the register right by default. So let's see what this gives for a guest user. As a guest user, I can add a comment here. So because I just gave the right to comment to a guest users. So I can add comments and I can also create an account from here. So I'm able to create an account and contribute to the wiki. That is about it, I think. There's lots of stuff that still can be done. We can do lots of new configurations. We can set up a footer. We can set up lots of beautiful and nice stuff. But I think this should give a good idea of how to start. And also that's about it for the 20 minutes, I think. Thank you for watching. And I will be available for questions right after. See you at next fall then. Hopefully we'll be able to see each other face to face so that our speeches are more clear than this. Thank you.
|
Two years ago I showed how to use XWiki as a development platform to build collaborative content centric applications while last year I did a short demo about how to use XWiki to setup a collaborative intranet in just 20 minutes. This year I propose a demo about how to create a public website with XWiki, and use XWiki like a content management system (but a collaborative one). The appeal of XWiki for such a usage is the possibility to integrate all usages in a single tool (intranet - see last year's talk - or any other content centric collaborative platform - see the talk from 2 years ago), while not completely missing the presentation features of "classical" content management systems.
|
10.5446/52400 (DOI)
|
Hello everyone. First of all, thank you to the FOSDEM organizers for selecting our project and thank you also to you for being here for this presentation of CEM-Apps, which will last 50 minutes. My name is Adrian, I'm an agronomist and a friend of the Virtual Assembly. I'm a user of CEM-Apps through a project which aims at interconnect agricultural actors in Normandy, France. So, what is CEM-Apps? CEM-Apps is an open source software project. Its development is coordinated by the Virtual Assembly, a French-based non-profit organization bringing together about 50 developers and activists. In a context of strong fragmentation of dynamics in the field of transition, VA aims to develop commons such as digital tools, methodologies and projects to promote the interconnection of transition movements. CEM-Apps is one of the Virtual Assembly's core projects. Its mission is to foster interconnection between communities by creating synergies between their platforms. The project is starting. It should reach maturity during the year 2021 with around 10 first implementations. Its code is released under the Apache 2.03 license. On this image, different actors, a company, a university, a community and two people use CEM-Apps and a pair of ontology that we are going to present you. Each one of them are autonomous. By sharing same standards in a common language, they can however exchange data in a decentralized way. The core team of CEM-Apps is composed of Simon Louvet, Sebastian Rosette, Jeremy Dufres, Yannick Duté, Matilde Savage, Thomas Frankart, Pierre Bouvière Moulere and the one I'm replacing actually, Guillaume Rouillere. Let's talk a bit about the context. The hypothesis that the Virtual Assembly is putting forward is that the web and the social structures of our societies suffer from their centralized and siloed architecture. They prevent the logics of interoperability, communication, mutualization, collaboration, which leads to a lack of efficiency of our technical and social systems. We identified six main problems that we are going to present. The first one is about online privacy, security and trust. The cause. Social networks centralize all their user data into one platform. The effect that our ownership issues lead to mistrust and fear to engage. The social network operator can see all the data and in order to share it has to own it too. The need, distributed identity, authentication, reputation, trust and search mechanism allowing each actor to keep ownership of the data and to share it with flexible access control rules to any other actor or group of actor in the world. The second problem is the fragmentation. The cause. Each social network operates as a silo, specializing in specific data tips. The effect. Data, interactions, identities, profile and virtual meeting spaces are fragmented. As people need to work with different actors, each using a preferred silo network, each actor's data becomes fragmented over different networks. Each user also needs to register on each application to have many passwords, to fragment its identity, to duplicate its data, thereby losing information, efficiency and time. The need, data and identities should easily be linked across the world. Context bound information. The cause. The meaning of mostly JSON data published by current web services are still context bound to the services provider and make poor use of linkage, which prevents any materialized development of application and services. The effect. 
Each web project develops one application with one database leading to strong inefficiencies. The need. Use semantic web standards in order to decontextualize information, making it linkable across website contexts. Bore. Lake of interoperability. Cause. Each web's 2.0 API is different. The effect. A user must register in the platforms of the different communities is involved in if he wants to exchange something across them. While a programmer must write different code to interact with each website. Even when the best types of objects are working with is the same. The hypothesis that the virtual assembly is putting forward is that the web and the social structures protocol of our social DP and standard ontology will generate interoperability and therefore communication, collaboration and true mutualization. 5 suboptimal technologies for the development of web applications. The cause. Each web application requires strongly entangled code between client and server. The effect. The cost of building web applications cannot be amortized easily with a lot of duplication of functionalities across platforms. The need. Using linked data platform enables generalized accessible web components that could work with any web server across the world, dividing the needed time for developing web application onto many stakeholders. 6 clash between economical and ethical motivations. The cause. In the current web of massively sealed centralized platform, the user get free access to a service in exchange for his data. The effect. Value created by the crowd is captured by very few actors. The need. Enable people to take back data ownership and the attached value. Our objective is to use open protocols as a way to foster the development of P2P social networks on the web and in real life. This approach should allow us to build much more ethical and efficient systems. At the center of this diagram, we have open protocols, W3C standards and solid specifications. Not a company, not a platform, not a technology. We can read this diagram as follows. If the first platform uses these protocols to empower its community and if the second platform uses these same protocols to empower its own community, then platform 1 and 2 will be interoperable and communities 1 and 2 will be able to connect, communicate, cooperate. Thus generating a network effect and so on. If the third platform uses these same protocols to empower its own community, then it will be able to interoperate with platforms 1 and 2 and with communities 1 and 2. And the network effect will strengthen and as the ecosystem grows, as the platform pulls their code, their resources and their user communities. At some point, as with the open source movement in its fight against the proprietary software, one can imagine that interoperable and cooperative systems will be more efficient than siloed and competitive systems. Technically, CEMAPS is based on the solid specifications. So let's discover the solid project itself. Solid is a proposed set of conversions and tools for building the centralized social application based on linked data principles. Solid is modular and extensible. It relies as much as possible on existing W3C standards and protocols. This is an international project led by the inventor of the web, Tim Berners-Lee. Solid aims to complete the stake of web standards to enable the development of a distributed, secure and social web. 
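To make the "linked data principles" above a bit more tangible before diving into the specification list, here is a small TypeScript sketch of how a client talks to a Solid-style Linked Data Platform server: every resource is a plain HTTP URL, and the representation is chosen by content negotiation. The pod URLs are placeholders and authentication (WebID-OIDC, access control) is left out.

```typescript
// Sketch: read and write a resource on an LDP-style (Solid-like) server.
// The pod URLs are hypothetical; only the HTTP/content-negotiation shape matters.
const resource = "https://alice.example.org/profile/card";

// Ask for the same resource as two serializations of the same RDF graph.
async function readProfile(): Promise<void> {
  const turtle = await fetch(resource, { headers: { Accept: "text/turtle" } });
  console.log(await turtle.text());

  const jsonld = await fetch(resource, { headers: { Accept: "application/ld+json" } });
  console.log(await jsonld.json());
}

// Writing is just another HTTP verb on a container URL (POST/PUT/PATCH
// depending on the server); authentication is omitted in this sketch.
async function addNote(): Promise<void> {
  await fetch("https://alice.example.org/notes/", {
    method: "POST",
    headers: { "Content-Type": "text/turtle" },
    body: `@prefix as: <https://www.w3.org/ns/activitystreams#> .
<> a as:Note ; as:content "Hello linked data world." .`,
  });
}

readProfile().then(addNote).catch(console.error);
```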
Solid is a set of specifications related to identity, profiles, authentication, authorization and access control, content representation, reading and writing resources, social web app protocols, recommendations for server implementations, recommendations for client app implementations. These specifications must allow the development of a distributed, secure and social web. Hey, I'm Pierre and together we will dig into the next standard SemApps is using: ActivityPub. If you're keen on alternatives to the GAFAM social networks, you might know Mastodon, PeerTube, Funkwhale. So you probably have heard about the Fediverse and ActivityPub. So, a bit of concept first. ActivityPub is a protocol standard for decentralized social networks, which can also be extended to create various kinds of federated apps. So it's quite simple. The standard comes with two different parts, a client-to-server API and a server-to-server API. So the service, the application, can decide if it wants to implement only one or both of them. It's not mandatory to implement both, and the decision will just depend on the needs of your project. Also, if you're a front-end developer, you might use the client-to-server API, and if you're focused on the federation part, you will focus on the server-to-server one. So what are the ingredients it will require? The message itself will be formatted according to ActivityPub and it must be attributed to an ActivityPub actor, on the left. The actor must be discoverable via WebFinger and the delivery itself must be cryptographically signed by the actor. So it's quite complex. Let's see a precise example through Mastodon. Mastodon is a part of the Fediverse, for which you have an overview here, and it can talk to PeerTube, GNU Social, OpenQL, etc. So the first part of the standard is the actor. The actor is a publicly accessible JSON-LD document answering the question who. JSON-LD can be a bit tricky, complicated, but it's also possible to use a simple JSON with the @context attribute, as you see here. So here is what the actor document could look like. The ID must be the URL of the document. And all the URLs have to use HTTPS; it's mandatory for ActivityPub. You need to include an inbox even if you don't plan on receiving messages; this is a Mastodon example, it's a legacy of the proposal, so Mastodon requires it. The most complicated part of this document is the public key, because it involves the cryptography. The key ID will in this case refer to the actor itself with a fragment, the part after the hash, to identify it. This is because we are not going to host the key in a separate document, even though we could. WebFinger. WebFinger allows us to ask a website, do you have a user with this username, and receive resource links in the response. So the subject property here consists of the username and the domain you are hosting on. This is how the actor will be stored on the Mastodon server and how people will be able to mention it in their posts, in their publications. Only one link is required in the WebFinger response, and it links to the actor document we've previously seen. So, the message. The message in ActivityPub consists of two parts: the message itself, the object, and a wrapper that will communicate what's happening with the message, the activity. In our case it's going to be a Create activity. Let's say a 'Hello world' response to a publication about writing a blog post. With the inReplyTo property we are chaining our message to a parent. The content property may contain HTML.
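Since the slides with the actual documents are not visible in this recording, here is roughly what the three JSON documents described above could look like, written as TypeScript constants. Domain, username, key and IDs are placeholders; the field names follow the ActivityPub/ActivityStreams and WebFinger conventions the speaker refers to, not any particular SemApps or Mastodon instance.

```typescript
// Roughly what the actor document described above looks like (placeholder domain).
const actor = {
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    "https://w3id.org/security/v1",
  ],
  id: "https://social.example.org/actor", // URL of this very document
  type: "Person",
  preferredUsername: "alice",
  inbox: "https://social.example.org/inbox", // required even if unused
  publicKey: {
    id: "https://social.example.org/actor#main-key", // fragment points back at the actor
    owner: "https://social.example.org/actor",
    publicKeyPem: "-----BEGIN PUBLIC KEY-----\n...placeholder...\n-----END PUBLIC KEY-----",
  },
};

// WebFinger response, served at:
// https://social.example.org/.well-known/webfinger?resource=acct:alice@social.example.org
const webfinger = {
  subject: "acct:alice@social.example.org",
  links: [
    {
      rel: "self",
      type: "application/activity+json",
      href: "https://social.example.org/actor", // the one required link: the actor
    },
  ],
};

// And the wrapped message: a Create activity around a Note object.
const createActivity = {
  "@context": "https://www.w3.org/ns/activitystreams",
  id: "https://social.example.org/create-hello-world",
  type: "Create",
  actor: actor.id,
  object: {
    id: "https://social.example.org/hello-world",
    type: "Note",
    attributedTo: actor.id,
    inReplyTo: "https://mastodon.example/@someone/statuses/123", // parent post (placeholder)
    content: "<p>Hello world</p>",
    to: ["https://www.w3.org/ns/activitystreams#Public"],
  },
};
```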
Of course it will be sanitized by the receiving servers according to their needs. Different implementations may allow a different set of markup, and Mastodon will only keep p, br, a and span tags, which is specific to Mastodon, obviously. So the next question is, how do we send this document over? Where do we send it? And how will Mastodon be able to trust it? We use HTTP signatures to deliver our message, and we will use a POST to the inbox of the person we are replying to. Here the key ID refers to the public key of our actor. The headers field lists the headers that are used for building the signature. And then finally the signature string itself. The order of the headers must be the same in plain text and within the to-be-signed string, and header names are always lowercase. Yeah, and that's it for ActivityPub. SemApps, Solid and ActivityPub are based on the semantic web, which can also be called linked data. The semantic web is based on a standardized data format called RDF, the Resource Description Framework. In the RDF world all data is expressed as triples composed of a subject, a predicate and an object. For example: Alice is a member of Virtual Assembly. Let's make the analogy with human languages. They are structured by grammar and vocabulary. RDF can be associated with the grammar of the semantic web. Then, on the basis of RDF, we can define vocabularies called ontologies. Sharing a common language enables communication. The same is true for platforms on the web. By sharing common ontologies, different platforms can speak the same language and therefore communicate. Let's focus now on the PAIR ontology, whose objective is to describe communities in a common way in order to facilitate interoperability and collaboration across communities. The acronym PAIR stands for Project, Actor, Idea and Resource. The objective of the PAIR ontology is to foster the development of peer-to-peer networks, what could be called a 'pair web'. Let's focus on the actors category. The concepts of person and organization are linked by a semantic link meaning 'is affiliated to'. The PAIR ontology allows us to say: the person named Alice is affiliated to the organization named Virtual Assembly. Then we can see that actors can be involved in activities. These activities can be tasks, projects, events. We will therefore be able to say: the organization named Virtual Assembly is involved in the project named SemApps. The PAIR ontology manipulates a number of concepts useful to describe the diversity of human organizations, such as people, organizations, projects, tasks, events, skills, resources, places, documents, finalities, challenges, time, etc. These concepts are logically and semantically linked all together. We will implement PAIR into SemApps in order to allow our user communities to interoperate their data and collaborate through the sharing of a common language, a common ontology. The PAIR ontology is published on the web and freely accessible to those who would like to implement it. It was very important to share this context so that you could understand the challenges of SemApps. Now we will talk more concretely about the technical specificities of the software and its main use cases.
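The delivery step described above is the only cryptographic part of the exchange, so here is a hedged TypeScript sketch of it using Node's crypto module, following the Mastodon-style "draft-cavage" HTTP signature scheme: sign (request-target), host, date and digest with the actor's RSA key and put the result in a Signature header. URLs, the key and the exact header list are placeholders to adapt and verify against the implementation you federate with.

```typescript
// Sketch of a signed delivery to a remote inbox (Mastodon-style HTTP signatures).
// URLs and the private key are placeholders; check header details against the
// server you are federating with.
import { createHash, createSign } from "node:crypto";

async function deliver(activity: object, inbox: string, keyId: string, privateKeyPem: string) {
  const body = JSON.stringify(activity);
  const url = new URL(inbox);
  const date = new Date().toUTCString();
  const digest = "SHA-256=" + createHash("sha256").update(body).digest("base64");

  // The signed string: same header names, same order, lowercase names.
  const toSign = [
    `(request-target): post ${url.pathname}`,
    `host: ${url.host}`,
    `date: ${date}`,
    `digest: ${digest}`,
  ].join("\n");

  const signature = createSign("sha256").update(toSign).sign(privateKeyPem, "base64");
  const signatureHeader =
    `keyId="${keyId}",headers="(request-target) host date digest",signature="${signature}"`;

  // The Host header is filled in automatically by fetch from the URL.
  return fetch(inbox, {
    method: "POST",
    headers: {
      Date: date,
      Digest: digest,
      Signature: signatureHeader,
      "Content-Type": "application/activity+json",
    },
    body,
  });
}
```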
CIMAPS will allow to deploy services such as semantic mapping, collaborative knowledge bases, territorial and thematic information systems, decentralized social network, business applications. CIMAPS is ontology driven, interoperable, it allows complex requests, collaborative editing, authentication and access control. It is highly extensible. The first features we are developing are knowledge base, directory, agenda, project management, marketplace, social networks. CIMAPS is composed of three layers, the triple store, the middleware, the app. Middlewares are interoperable and apps can interact with different middlewares. CIMAPS middleware is based on the following protocols, link data platform, Sparkle, OWL, WebID, WebICL, Shaco, ActivityPub. CIMAPS middleware is based with Node.js. It is based on a microservices architecture we use for that molecular.js. This allows you to use only the services you need. For instance, the triple store, the LDP, the Sparkle endpoint and the WebID services. You can also add specific services. For instance, the ActivityPub services or other like a mailer, webhook or push notification services. We also develop connectors to facilitate the user of LDP, Sparkle and ActivityPub protocols on the interface side. We started developing connectors for starting blocks, LDFlex, RedX and React admin. We try to develop CIMAPS according to eight conception principles. Interoperability, modularity, generosity, adaptability, scalability, accessibility, opening and user friendliness. Going back to the beginning of the presentation, we hope you do now understand this CIMAPS, which are one of the results of the implementation of our work. We will now move on to the demos. The first one is about a project called Alcupelago. It is a CIMAPS use case for which we have implemented the pair ontology. I'm going to present you some demos about CIMAPS. My name is Guillaume. I'm going to first show you this instance of CIMAPS that we deployed for an organization that we are part of with the team of CIMAPS. This organization is called Geo-tural Assembly. As you can see, this is my profile on these instance of CIMAPS. I'm also part of another organization that is called Transition Pathways. It's an organization for which we are going to implement another instance of CIMAPS. As you can see, CIMAPS allows to describe organizations and their context. For example, here we can see that Transition Pathways has members. Among these members, there is myself. This interface is based on React admin framework that is developed by Marmalab, a French organization. We use it for different implementations like another one, which is called Pass-Ret Normandie. I'm currently in Normandie. This is a place where Leaves also had Rien that presented the first part of this presentation. We developed also another instance for an ecosystem based in Paris, which is called Grand Voison that gathers around 80 organizations. This interface is one interface, but we could use CIMAPS with other interfaces. For example, you can deploy interfaces in graph like Flowview that Yannick will present you a bit more just after. But we could also use other interfaces like the interfaces based on the web components that are developing our friends from starting blocks. For this demo, we are going to use an instance of CIMAPS that we are developing for the Troy University of Technology. This instance aims to make semantic mapping of the organizations which are involved in the renewable energies domain. 
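Stepping back for a second from the demo to the middleware architecture described just above: the middleware is built on Moleculer's microservices, so that you only run the services you need. Below is a minimal, hypothetical Moleculer sketch in TypeScript showing the general shape of composing such services; the service and action names are invented for illustration and do not claim to match the actual SemApps services.

```typescript
// Minimal Moleculer sketch of a microservice-style middleware, in the spirit of
// the architecture described above. Service/action names are hypothetical.
import { ServiceBroker, Context } from "moleculer";

const broker = new ServiceBroker({ logger: false });

// A toy "triplestore" service other services can call.
broker.createService({
  name: "triplestore",
  actions: {
    async query(ctx: Context<{ sparql: string }>) {
      // A real service would forward this to an actual SPARQL endpoint.
      return { sparql: ctx.params.sparql, results: [] };
    },
  },
});

// A toy "ldp" service that depends on the triplestore service.
broker.createService({
  name: "ldp",
  actions: {
    async getResource(ctx: Context<{ uri: string }>) {
      return broker.call("triplestore.query", {
        sparql: `DESCRIBE <${ctx.params.uri}>`,
      });
    },
  },
});

async function main() {
  await broker.start();
  const res = await broker.call("ldp.getResource", {
    uri: "https://example.org/actors/alice",
  });
  console.log(res);
  await broker.stop();
}

main().catch(console.error);
```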
We have for the moment five concepts. The first one is organizations. Then we have types of organizations, event, place and scene. If we click on each one of them, we have a list of each kind of concept. Okay, which is interesting with CIMAPS is that we are going to follow some semantic links. For example, if we click on virtual assembly, we are going to be able to follow a link that is normally named as partner. So for example, we are going to read this interface as follows virtual assembly has partners, data player, assembly virtual has partner, fabric, disenergy and so on. So it's not just an HTTP link, it's a semantic link. And so we can explore the graph of concepts that are semantically linked by semantic relations. For example, if we click on Pratsenair, we can see a description of Pratsenair and also identify some links that contextualize this Pratsenair node. For example, we can see that Pratsenair will participate, participates to a residence seminar, which is called Residence de la Fabrique des Energies à Prats-des-Molieux. And we can also identify that other organizations will participate to this meeting. Okay, this is the basis of this interface. We could also search some steps, but there is a little bug right now. And we can finally create some actors. We could create an organization like Inrupt. We will present Inrupt just after and add some data, for example, a shorter description, longer description. We could add a place, a kind of organization, for example, entreprise, business, company in French, some partners, for example, starting blocks, and some themes, system de formation or solid. Okay, and like that, we can create new data. CEMAPS is a collaborative link data management system. So for the moment, on this instance, there is no authentication or access control. But on other instances, for example, on Passred Normandie, we have here an authentication, also on Archipelago. And the objective will be to allow everybody to contribute together to this collaborative knowledge space. Okay, maybe it's sufficient. If you want to go further, we have a website which is called cemaps.org. Below, you will find our GitHub here, our matrix chat here. And you can also see the team, the governance, and some documentation around our project. And here, you will have other tutorials to discover and maybe install or contribute to CEMAPS. Thank you all and see you soon. I'm really happy to give you a quick overview of SparNatural. SparNatural is a tool which has built our friend Thomas Franca, he lives also in France, he's a friend of Assembly Virtuel. And he has developed this little application, which is a natural way, a visual way to build Sparkle queries. Those Sparkle requests, as you can see them on the right of the screen. I have a bigger window here. On the left is the concept and the information you feed, which will build the Sparkle request. So here, you see that I'm looking for an artwork. It's going to start again. I know it's in a museum in France. It's Le Louvre. It has been painted by a person, Leonardo Vinci, and also Michelangelo. And I know approximately the dates it has been painted after 15,000. On the below the screen, I see, we see the result and I can directly click to the Wikipedia resource. Here, SparNatural is connected to Wikipedia. We already connected to CEMAPS. Unfortunately, the website is KO today. But hopefully, in a couple of days, it will be back again. You can see it by yourself. Hi. Welcome to Flodio project. 
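Before moving on to the Flodio demo, here is roughly the kind of SPARQL query that a visual configuration like the Sparnatural example above ends up generating, sent to a SPARQL endpoint over the standard protocol. The endpoint URL and the ex: vocabulary are placeholders, not the real demo configuration; an actual setup would use the knowledge graph's own ontology (Wikidata, PAIR, etc.).

```typescript
// Sketch: the sort of query a visual builder like Sparnatural produces.
// Endpoint and vocabulary are placeholders for illustration only.
const endpoint = "https://example.org/sparql";

const query = `
PREFIX ex: <https://example.org/ontology#>
SELECT ?artwork ?title WHERE {
  ?artwork a ex:Artwork ;
           ex:title ?title ;
           ex:displayedAt ex:Louvre ;
           ex:creator ?creator ;
           ex:creationDate ?date .
  VALUES ?creator { ex:LeonardoDaVinci ex:Michelangelo }
  FILTER (?date >= "1500-01-01"^^<http://www.w3.org/2001/XMLSchema#date>)
}
LIMIT 50
`;

async function runQuery(): Promise<void> {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/sparql-query",
      Accept: "application/sparql-results+json",
    },
    body: query,
  });
  const json = await response.json();
  // Standard SPARQL JSON results: one binding object per row.
  for (const row of json.results.bindings) {
    console.log(row.artwork.value, row.title.value);
  }
}

runQuery().catch(console.error);
```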
The aim of Flodio is first, to display semantic data in a network graph. Secondly, to help users to navigate easily into the graph using the flood concept. But what is a flood? Flood means fluid linked open data. It allows you to visualize data in a graph mode and in a page mode. For example, this interface is a cartography of CEMAPS data for the virtual assembly network. We can associate all the class of parentology with colors and easily display actor, project, organization, whatever. When you look at the network, it's really hard to analyze which node are linked with others. We are lost. Thanks to Flood concept, you will be able to concentrate yourself on a node and its neighbor. Let's see how does it work. When you click on the flood, the node opens and displays all its concept information. So you can easily read in a page mode all neighbors grouped by class and colors. When you click on the neighbors, you can navigate into the graph, jumping from flood to flood. Each time, the new open node takes the center of the screen. You can also search a node into the graph. You can pan or zoom. You can make a focus on a specific node. You can change the focus on another node. And return to the global network. You can also filter using class or properties. Example, if you don't want to see people, you can unselect person class. Hello, thanks very much to Semapps team for developing this open source, library and distributed Semantics server. I'm very happy to contribute to this project. And I like parentology. I hope the demos illustrated completely our work. As you can see, Semapps is a young project. We realized the proof of concepts in 2016. The development of the new version started just over a year ago. So far, we have laid the foundation for the project. We have several subprojects in progress, among which, at the core level, setup ACL rights, be able to link to Semapps instance, view user activities with activity pub, view the history of a resource, allow to create solid pods with Semapps, advanced search on Spark Natural. I have a completion on wiki data. For Aki Pelago, fully integrate the new version of pair register for an event, reserve a location integrating obel, solid plus XMPP chat, which Alice will present later on, interoperate Semapps with Mastodon, Piotr and Fediverse. At the interface level, view data in graphs, view data on a map, view data on a calendar, display personalized new feeds with activity pub. As Semapps matures, we are beginning to structure its business development. In this perspective, we are partnering with data players, a company close to the virtual assembly, which is the first one to bring Semapps to market. Our first clients are municipalities like Prats de Molo, universities like Technical University of Troyes, institutions like ADEM, the French Agency for Ecological Transition and the FABMOP, cooperative like Lacobes des Territoires, which I belong to, associations such as chemin de la transition, local low-tech or data food consortium. Hi, I'm Alice, community animator for the virtual assembly and starting blocks, and I'm glad to contribute to this video by introducing you the solid ecosystem. As we said before, solid is a project to re-appropriate the web by standardizing APAs. It's a set of specifications that allow enteroperability and give power back to the user for a more efficient and democratic web. The solid community is worldwide. The presentation of the ecosystem that we will try here is by no means exhaustive. 
We will only focus on few actors in order to give you a sample of the diversity of actors interested in solid and their use case. Inrupt is the American company that is seamlessly set up to develop the economic system of the solid projects. Inrupt is interested in the medical files, banking and more. The first use case interesting to share is the adoption of solid as a BBC. The goal is to develop user-stone-trink and constant-based content recombination with solid. Imagine what your BBC news feed would look like if it could be customized with historical data from other platforms such as Netflix without owning it. Another example is NatWest, a UK bank that also works with Inrupt to streamline digital interaction associated with customer life moment, such as name change at a wedding or restoration of a new business through anti-operability. Inrupt is the largest and most active company on the solid project. Starting Blocks is a cooperative that develops an open-source technology in order to quickly build solid applications at lower costs and with the least possible skills. As a cooperative, we strongly believe in community logic and the decentralization of power through the reappropriation of our digital tools. For us, open-source is central and radical because we want to make the development and the use of solid specification as easy as possible. This is the reason why all our production are under MIT license. We believe that this fight for the reappropriation of the web is a major societal issue. That's why our goal is to drastically reduce the development cost of application to all the greatest number of people to regain control of their digital tools. Our secret source is the marriage of web components with solid. We believe that these two standards have for us the vocation to work together. Indeed, solid standards allow us to reuse a component without having to re-adapt it to a new data format. Our first use case has been to equip the freelance community we come from, Happy Dev. Happy Dev is a freelance collective that actually composed of several small collectives that wanted to work together and share the networked effect without losing their autonomy. It's for Happy Dev that Starting Blocks developed Hubbell, a federated application that includes a chat, a job board and a skill directory. Hubbell's authority collective are already federated with the other instance of Starting Blocks. Contact us if you want to visit us there. Another use case we have is a European trade union. Trade union are independent entities that have difficulties to federate and to circulate information in a fluid way. We have developed a federated application for them that includes a collaborative intelligence tools, decision-making tools, even sharing and we have taken over the Hubbell directory as well. The objective is to improve cooperation between union to enable them to unit their string while maintaining their autonomy. We would be very happy to introduce you to our techno and to share our thoughts with you. Do not hesitate to send us a mail at salutatstartingblocks.com. Hello everyone, I'm Quentin, co-founder of Data Village, a personal data science company based in Belgium and our mission is to unlock the value of personal data for both the organizations and their consumers according to three immutable values. The first is people control, the second is privacy and the last but not least is full transparency. 
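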
Most of us daily use digital services which means that every day we create a stream of online activities. These are all data and this is a never-growing database of personal information. However, the data economy model of today is wrong, the data is locked and the market is also locked and organizations are pushed to collect more and more data. Personal data is spread and duplicated making it possible for people to keep control. On the other hand, the interest of organizations to use personal data is growing especially when it comes to hyper personalization, augmented experiences, insights, surveys and so on. Now imagine we shift this model, a new model in which consumers could get a view of all their data and control them. Imagine that as an organization you could just pick and choose this data and extract insights in a minute with full privacy and without the burden of managing that data. We make it a reality by allowing the consumers to create and control their own digital twins, meaning a virtual representation of themselves based on all the data they create on a daily basis. Then organizations can use our so-called data passports to query their consumers' digital twins and acquire the insights they need in order to improve the digital user experiences and be able to reach the level of personalization of the internet giants. Along with the data passport, organizations can use our personal AI data cage, which is a confidential computing environment to perform AI data enrichment and matchmaking between their products and any personal data to extract derived data without breaking privacy. Our decentralized technology can serve many different sectors and help solve challenges such as better access to information, better health, better finance management or mobility among many others. We are Data Village. Thank you for your attention and don't hesitate to reach out at contact at datavillage.me. Thank you. Thank you. Hey everyone, my name is Jumein Itzma and I'm the CEO of Antalla. Antalla is a software company located in the Netherlands that focuses on building stuff with linked data. We've got an eDemocracy platform called ARGU, which is all about civic engagement, and the next year we'll be working a lot on solid projects. The first one will be the DAXPOD. This is a solid server built with Ruby on Rails. It's currently in development, but it will be open sourced in March 2021. Solid Search is our next project. This is going to be a search engine plus UI, which is modular and designed to be compatible with various solid implementations, including the community server. This is expected to be released in the last quarter of this year. And the third project is data deals and payments, which is all about making machine readable agreements between people who share data and who use data and enable that process to be monetized, powered by solid. This is also expected to be released in the final quarter of this year. If you've got any questions or want to just talk about solid stuff, get in touch. Bye. Thank you for your attention. As you see, we need some contributors to popularize our subjects. Hope you've liked this video and our authenticity. Let's get in touch.
|
SemApps is a collaborative, interoperable, generic and modular knowledge management system: based on linked data & semantic web technologies and the SOLID specification, it allows the co-production of knowledge graphs; built on open standards, it enables the development of interoperable information systems; designed on a modular architecture, it gives everyone the opportunity to build and customize platforms on demand. SemApps is an open-source software project whose development is coordinated by the Virtual Assembly (VA), a French-based non-profit organization bringing together about fifty developers and activists. In a context of strong fragmentation of dynamics in the field of transition, VA aims to develop commons (digital tools, methodologies and projects) to promote the interconnection of transition movements. SemApps is one of the Virtual Assembly's core projects. Its mission is to foster interconnections between communities by creating synergies between their information systems. Its code is released under the Apache 2.0 free license.
|
10.5446/52244 (DOI)
|
you Hi everyone, I'm Nicolae Gay. I work at Protocol Labs and this talk is about runamess and specifically DRAN and the legal photography which I talk to you about. So first I want to talk about why do we need runamess. So there's plenty of places in the real life and in the digital life where we need runamess. For example, you realize it could be in lullaries, in jury selection, even in election events audits. If you don't know if you recall some recounting events that happened in the last elections. In protocols and cryptography we need that a lot. For example, I'm going to talk about more about blockchains. Why do we need runamess and blockchain in the next slide. About parameters for cryptography, for example, we need to generate runamess to generate different parameters for the prime number for the protocol and things like this. Selecting a field for RSA. When you do signatures, you need runamess for DfM exchange, for statistics as well. There's runamess in many, many applications out there. So what kind of runamess do we need and why do we need good runamess. So why do I need runamess? I'm going to define this in the next later on. Just want to highlight here the problem with getting runamess here is that it's very easy to get runamess. Funnple in the history was rigged lotteries in the US where an insider got access to the code and were able to rig more than 14 million dollars. Just because runam number generation was not completely unique. For another example, if you heard about the DRVG events where there was a high suspicion of crypto parameters for a new set of electric cures being suspicious and we didn't know in the community, the community didn't know whether it could be used, these numbers could be used in certain ways to break the scheme. And there was a lot of buzz around this thing and it happened but with some Snowden release and some New York Times article that the suspicion got confirmed. There are plenty of examples of bias runamess and non-uniform runamess in the literature which you can find. So as I said before, what kind of runamess do we need? So in this talk I'm going to talk about mostly a specific kind of runamess which is publicly verifiable. So what is publicly verifiable means is that anybody, any third party can take the runamess and verify that it has been generated correctly. I also want the runamess to be unpredictable, of course that makes sense. I want it to be biosalistic so basically it should follow the uniform distribution. I want it to be available. I want to be able to fetch runamess at all times. If not my application can work reliable and I want it to be generated in a decentralized way. I prefer that the goal of the runamess generation is not generated in the central point. So what exists out there? Currently we have some runamess generators out there that fulfill some of the criterion but not all. For example the most well known is a NIST runamess beacon which is based on quantum entanglement so it's completely random. It's unpredictable, it's consistent but the thing is we still need to trust NIST to deliver runamess to us and there is no way we can verify the runamess has been generated correctly. So that's a problem so it's difficult to trust NIST obviously. There has been some attempt to generate runamess from the Bitcoin blockchain. 
It's a promising attempt but it's difficult to say there's no formal proof of security depending on the framework which you want to and Bitcoin is also, some could say it's quite centralized and quite, there's a lot of ad-hacks on Bitcoin as well. A few years ago there was a paper written by Philippia Van Amic at DADIS in the DFL called Van Houdt which was scalable, was bi-assistant, unpredictable, publicly verifiable and decentralized so kind of feeling everything. But it was a very complex beast, it required multiple rounds with the client and the servers. It requires a tree formation of nodes and trees really difficult to maintain so I'm going to talk here about the solution but doesn't require that is simpler and faster than Van Houdt. One last slide about runamess in blockchain. Where do you use runamess in blockchain? We use it for leader elections so at each epoch there's one miner which is going to create a block, put all transactions inside and deliver to the network. How do we know which miner creates a block? Well in Bitcoin it's proof of work so we run a very heavy computation so this is like very reliable, it's been running for more than 10 years now but it's quite expensive and very inefficient computationally. It's quite centralized actually, more than we think. So the next generation of blockchain now trying to use over technologies, one of the most popular one in this so let's say we run what we call a verifiable runam function where we just kind of a signature from which we can derive runamess but the thing is it's quite recent construction like this and many construction are actually biasable there is like some blockchain without using this construction but they have a very very high finality like two or three days and it's also runamess which is very tied to the application itself so if you need runamess you need to connect to this blockchain and get everything that block headers and PNAB so it's quite heavy for application to use. So in the future of runamess also a lot of people believe that it lies in verifiable data functions which basically you make a computation but takes a lot of time to derive so if you're not happy with a route and you want to generate a new one then you won't have enough time so your result is runamess actually but it's still not deployed in practice and we still are a lot in the research area on this thing and this is actually a really huge topic runamess in blockchain because this is what one of the reasons why it is to take too long time with verifiable runamess. So there is another solution to generating runamess which is for short capability for those who don't know what it is for short capability allows to decentralize many applications that we currently use for signing and encryption. So the main idea in this line of thought is that any TRN participants are required to create a signature or encrypt a message or decrypt a message so take a subset of nodes and you need at least any T out of them. So for example in this example you can see that we are in the scheme 3 out of 5 are required and we have the node 1 and 3 and 5 which are participating to generate runamess. So I'm going to explain in this talk a little bit in the math or it works it's not going to be hard it's going to be quite easy math but just interesting to see or this works. 
This is a threshold cryptography usually works in two kind of step I like to divide them in two steps there is a setup phase there's always a true setup phase you can figure it as a trusted setup and the complexity is square. So this is like you do it once and then you forget about it and then there is a actual signing or runamess generation where it's very lightweight. So we're going to see how these two phases work a little bit more in detail. First I need to recall some basic math and don't worry it's quite easy so I think everybody remember what a polynomial is. We have coefficients so here's a freak coefficient is a secret s and then we have a1 a2 a3 etc until t. So the degree of this polynomial is t and then a share in the Shamier secret sharing scheme is simply the evaluation of this polynomial at the point one for the share one for the node which is indexed by the index one f2 so the node is equal to the second share and etc etc and then from all these shares we can take any t out of them okay and we run through a think of it as a black box that aggregates them together and refines the secret s that we had at the beginning okay. All these shares are individually at download, we will any information but when you put them together they are like origin interpolation then you can find again the secret s okay. So this is like the basic Shamier secret sharing which is used already in many for example wallet application when you can you have your wallet key where that holds many bitcoins or a fear and you want to secret share to friends or family so even if you lose your wallet you can still be created later on this is what gets used here. But let's go one step higher now I want to talk about distributed key generation in the previous slide the problem is that there is a dealer that needs to create this polynomial and needs to create this s here so this secret is known by somebody and this is a problem in our application because I don't want anybody to know the secret key I want it to be decentralized nobody knows the secret key everybody has a share okay. So how the video is it's quite simple actually we run this secret Shamier scheme n times so we have n nodes so each node creates their own polynomial so we have the first node that creates its own polynomial f1 which is its own secret and its own coefficients the second node which creates the second polynomial which is its own secret and also the addition etc until the end nodes. 
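To make the secret-sharing step concrete, here is a minimal, toy-sized TypeScript sketch of Shamir's scheme over a small prime field: a polynomial whose constant term is the secret, shares f(1)…f(n), and Lagrange interpolation at x = 0 to recover the secret from any threshold-sized subset. It is for intuition only (tiny prime, toy randomness), not the code drand uses; the same Lagrange interpolation is what later recombines the partial threshold signatures.

```typescript
// Toy Shamir secret sharing over GF(p); for intuition only, not production code.
const P = 2089n; // small prime modulus (real schemes use ~256-bit fields)

const mod = (a: bigint): bigint => ((a % P) + P) % P;

// Modular exponentiation, used for the inverse via Fermat's little theorem.
function powMod(base: bigint, exp: bigint): bigint {
  let result = 1n;
  base = mod(base);
  while (exp > 0n) {
    if (exp & 1n) result = mod(result * base);
    base = mod(base * base);
    exp >>= 1n;
  }
  return result;
}
const inv = (a: bigint): bigint => powMod(a, P - 2n);

// Evaluate f(x) = secret + a1*x + ... + a_{t-1}*x^(t-1) at x.
function evalPoly(coeffs: bigint[], x: bigint): bigint {
  let y = 0n;
  let xPow = 1n;
  for (const c of coeffs) {
    y = mod(y + c * xPow);
    xPow = mod(xPow * x);
  }
  return y;
}

// Create n shares (i, f(i)) such that any t of them recover the secret.
function split(secret: bigint, n: number, t: number): Array<[bigint, bigint]> {
  const coeffs = [secret];
  for (let i = 1; i < t; i++) {
    coeffs.push(BigInt(Math.floor(Math.random() * Number(P)))); // toy randomness
  }
  return Array.from({ length: n }, (_, i) => {
    const x = BigInt(i + 1);
    return [x, evalPoly(coeffs, x)] as [bigint, bigint];
  });
}

// Lagrange interpolation at x = 0 recovers the secret from any t shares.
function recover(shares: Array<[bigint, bigint]>): bigint {
  let secret = 0n;
  for (const [xi, yi] of shares) {
    let num = 1n;
    let den = 1n;
    for (const [xj] of shares) {
      if (xj === xi) continue;
      num = mod(num * -xj);       // (0 - xj)
      den = mod(den * (xi - xj)); // (xi - xj)
    }
    secret = mod(secret + yi * num * inv(den));
  }
  return secret;
}

const shares = split(42n, 5, 3);
console.log(recover(shares.slice(0, 3))); // 42n — any 3 of the 5 shares work
```

In the distributed key generation described in the talk, no single dealer runs `split`: every node deals its own sharing and each participant sums the shares it receives, so the combined secret exists only implicitly.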
So each node are going to create also n shares as before as a secret sharing scheme and all the nodes are going to do the same in the end simply enough one node from the polyfine the first node I'm just going to add all the shares I received for myself okay so f1 1 f2 1 fn and 1 okay and this is going to be my share of a secret key which nobody knows second node is going to do the same it's all going to add all the shares etc etc and the secret key is the addition of all these secret shares here but nobody knows it nobody computed it thanks to the threat model because we assume there is at least t on s nodes which are not going to color it and and share which are there so thanks to this we know of each node that have their own personal share which when we combine them using like washington interpolation similar as here then we can recreate the final secret but we don't want to recreate the final secret it's no use to us what we want is just generating run on this okay so how do we once we get the secret key shared how do we generate run on this with it so here I need to pass to the run on this generation phase into a signature scheme so basically we are using plc nature so again this is a bit more involved but i'm just going to highlight how it works basically if you have a secret key you can generate a plc nature very easily over a message so you hash the message it takes chat 256 for the ball and then you explain it to your secret key okay and then this is your signature and then to verify you need the public key okay and you need the signature and then you just run this equation and if this equation verifies then you are sure that the signature has been made by a person that has a secret key over the same message and this bs nature is actually quite very powerful construction because it can work in the fresh work setting so instead of having one signature with a secret key here we can put a share here so s i becomes a share the same share that we had generated before in our dkj so everybody create these signatures with their shares okay and it's what i call partial signatures when we have these partial signatures when we have t r of them then again we put them through like large interpolation the same way as before and then now we have this final signature exactly i fit it was one person that generated the the signatures so how does this work how does this help us for generation run as well in fact bs signatures um it turns out if you hash the output of the signature so if you actually signature here then you get random output and which is unpredictable which is by interest sense which is decentralized which get older the properties that we need and this is basically in the math uh land how the partial cryptography uh applied to the run the generation uh works and this is now what i want to present to you d-rand which is exactly what is uh which is a software written in go uh and it's doing exactly what i've just presented to you so d-rand is a decentralized run on the service okay you definitely to think of it as a software which is run by n parties they do a setup phase and then regularly periodically they're going to generate run on this okay it's using uh a curve which is known by many next generation five ones such as if i'm if i'm two and five one for example um it has a lot of features uh it exists since uh um two years now uh it was already developed by uh uh in uh dedis and i started it in the when i was still working at tpfr university when i continued when i'm working at 
protocol labs um it's a software which is stated already then deployed i'm going to talk more about that and it's quite simple very simple to use okay so just if you do this curl uh comments uh then it's going to return to your jason response to which is going to talk about the round the randomness the signature and the percentage i'm going to detail all of this in the next slide okay um so how does steering works not on the math level but on the architectural level uh well we have the the random generation so our imagination is quite it's very simple uh as i denote every 30 seconds or every minute i will um i will send my partial signatures uh to uh over notes everybody will send their partial signatures uh to over notes and i'm going to uh let grunge interpolate them and to recreate a final uh the other signatures and this family of the other signatures is what we see here signatures and the randomness is just the hash of the signatures so it's pretty easy and how do i make sure that the runners has been generated correctly well i have the public key of the network okay there is this setup phase which i talk to you about this distributed future generation this is the setup phase i have the public key the public key simply um determine it there's a yeah all the polymers here i don't want to get too much into detail but during the setup phase you have the way to determine which public key corresponds to this hidden secret key so i just need to refine this nature it's one vls signature check and and we could see how does the protocol works is how do nodes know which message to time when basically we have a we we form a runners chain so uh i will each node will sign the previous one on this and compare it with the round okay so hash of the round uh coincide with previous on this here and they will make the signatures and they go get this and when do we sign it it's uh it's uh we have been during we have this time to run consistency so i can know exactly from the genesis block to if i have the genesis block and the period time i can know exactly which one is going to be to is going to be signed at which time okay so times first we started at zero times zero and we have run one time 30 one two etc that i'm 60 seconds one three etc um so vselozo to deterministically know which one is going to be used uh at which time which is very useful for application using one message we're going to see later um now i want to talk about uh who is using uh diren so uh since last year actually we ran the since 2019 absolutely two years now we started um to run the network of these nodes of these diren nodes we've diversified and decentralized and many organizations so uh we are currently we currently started in uh with 10 members and we are now 20 nodes and we can see we have nodes around uh the whole world so we have cloud fair we have protocol apps we have epi for which is switzerland we have selo we have kudoskyl which is also in switzerland we are new city the chile we have ucl uh in england we have some university as well epi for in in one ucl as i said we have change safe so it's growing a lot more um and we are trying to create this what we call the league of entropy so this decentralized uh network of uh independent organization which deliver randomness uh to anybody what wants it so the goal is to really really be a randomness as a service uh think of it as we have already dns servers which uh it is a highly source of highly available source of your naming information we have ntp for the time we have 
certificate authorities we have certificate but we need something for randomness as well um and we believe that uh diren and the league of entropy can feel that role and it's starting to grow big uh and this is why i'm happy to present to you um and so how does uh the league of entropy operates um so we have now a kind of a production ready network which is used by uh some uh multi billion stake uh company which i'm going to detail later on uh so we have a highly available and ddrs resistance network where we separate the d-rand randomness generation from the distribution network so if you want to run this you're just going to touch a gtp endpoints and you're not never going to touch a actual d-rand node which never really run on this okay um we have many uh we have a diversified distribution network so we can you can fetch one on this via sctp uh via lippie2p which is useful for blockchain for example uh park 183 i'm using it um so it's called gossiping uh so via sctp you can put a scp load balancer as well uh we have uh for fun we made a twitter distribution with twitter bots which bots every which tweets every 30 seconds of the new randomness we have a gtp accessible via tor as well the codebase is audited the report is public we have a continuous health uh monitoring enabled so as soon as there is one small incident on on the network we are aware of it and there is a there's a incident protocols to respond to to any kind of incidents coming in and we have some governance model which is uh let's say semi-permission so our goal is to have a set of participants as large and as diverse as possible so what does it mean exactly it means we need to have a different geographic position different jurisdiction different interests as well we don't want all the old blockchain companies or only uh all universities or um we want also different uh provider for example we don't want everybody on aws for example and as you can see right now the the network is doing pretty good so you can see that we are 42 concert in the us but we also have 35 percent Switzerland um we have uh most of all nodes are on premise and only 45 percent is on aws and we try to keep the different jurisdiction like this because obviously if more than tenodes uh so we are 22 right now and the threshold is set to half so if more than 12 nodes are down at some point or unless they're comfortable so we really need to uh we really need to have a very diversified and large network uh so application to uh become a participant is open uh there are some critters to be uh eligible and current members vote on new members every quarter so we have a next batch coming uh quite soon I believe um and every quarters we refresh the shares if something I didn't say before I don't want to get too much in there but every node has a share and every quarter we refresh them so even if some um share is compromised somebody has nicked the share out of nodes then at the next iteration the share will become invalid so this is very important for uh security because we don't want to keep the same uh the same key always so we are refreshing the shares every quarter as well and now who is using DRN and who is using the legal entropy and we um have a 5.1 is our first uh is the largest production grade consumer of the first um and so 5.1 is a uh is a is a is a I'm working for for for 5.1 and is a is a storage um based blockchain uh basically the more storage you have the more uh blocks you can produce um it's like a proof of state but with uh with storage and uh 
you can store actual files on it so the idea to be uh to be or 5.1 would be to be an incentivized uh decentralized storage network and uh 5.1 is using uh legal entropy to uh for its randomness to date so basically exactly as before it's using it for leader election so every third second there's a uh each minor run um run of their own leader election they hash the randomness with their own private key they have a very valuable random function and if the result is is lower than a certain threshold they they are eligible to mine a block and just to show you that it's uh actual uh randomness of the current um network which is uh being used right now I fetch from two girls come and do one from run from uh 5.1 uh so this is just the cid here it's just the uh the cid of the tip set so it's like the different blocks at one epoch and it just took the big unentry so this is like the randomness which has been used at this epoch and if I'm taking the same uh if I'm fetching from directly from the diran network I used this one number because I knew that this one number was corresponding to this one less and I and I look at the response it's exactly the same so 5.1 is already using we have different uh also clients uh we have tarion which is you know before time stamping and um and other things uh but was mainly my uh talk and if you want more information I'm happy to answer any question and uh you're welcome to visit our website diran.love which contains all information about the diran the project itself and the league of entropy uh as well which was the first website launch in 2019 when we first launched this uh this network uh thank you you you you hi i'm not sure um are you hearing me um so there's a question here if I understand correctly if I create a matrix box that will relay random numbers obtained from diran.sh we can publicly we can be publicly refraigable so that's uh that's correct uh you don't need to trust anybody that puts uh the the randomness somewhere on the platform you only need to trust the the distributed public key that the diran network has created in the setup base and is re-sharing uh so this you can you can fetch uh you can fetch via the urbl which I can give you you can see in the responses on that uh there is the public key here and this is what you you can use to verify any randomness outputted by the diran network so this is like the root root key the root of trust of that you need for using diran you you you you you you you you so to compromise the root key what you need is to compromise more than half of the nodes so basically the threshold is set to more than 50% so this is why in the video I said we are currently 22 and we need to compromise at least 12 nodes so 50 plus one and um afterwards uh once you get this this number of nodes compromised you can you can actually recreate the the private key so then you you you can pawn the the network so this is why we put a really high uh uh importance on the setup of each node on the decentralization of each node and they are all uh diversified even even the the setup scripts we didn't share we like uh then when we let each operator to run their own setup and then we try we test them we actually uh run scripts on these nodes to make sure they are well secured and things like this obviously we cannot do everything from remote point of view but we try to do our best to secure each node there and the more the network grows the more the threshold we grow so the more the network goes the more it will be secure uh we respect to the 
compromising of the secret key uh you can try to boost up your own network um I'm not sure this is I mean you you're welcome to do it to to to play around whether this is encouraged or not is a bit like the same um similar to what you would what you would say for torn network from the pool uh many people say oh I want to talk to have these future and so on and I will try to run my own network it will be it will be better and things like this but in the end the torn network is what it is because it's becoming huge now and it has a huge anonymity set and this is what makes it secure this is what makes your uh your communication actually secure so if you are doing this on your own for different network then you also risk to have your nodes more easily compromised so it all depends on your threat model um whatever you what your application is um and in the end we have put a lot of time into securing the network as well so it's not like uh during the shift towards the software itself we can run on any server but the the whole setup we have high key white listing we have the this front end we have the the cdn to revoid the dos uh we have all this uh this uh this script to deploy uh easily on on on the any uh platforms uh so we I would say you you could but it uh it would be it's it's difficult to argue but the sequel would be more secure so so the condition is to become a node in the network you need to have a need to be able to commit to a certain level of viability for all parents like you need to be able to be reachable uh kind of 707 24 24 so basically the nodes that are running currently are basically backed up by a team of two or three people which we know their uh contacts and we can uh quickly call them in case something goes wrong with their nodes um you need to have a uh a well uh deploy infrastructure where you the everything is logged on your servers uh everything is I mean every access is logged the secret keys uh is well backed up um things like production ready um servers so I wouldn't do it if I if you are an um let's say a single operator um this would be a bit difficult to to to maintain uh alone but uh let's say that we are trying to open it to as much as possible people but we are trying to keep it as well the level of the of the quality of the network to as high as possible as well so it's a tradeoff between uh open to everybody and and the quality of the network because uh we want availability as well uh we need to always have a threshold of numbers running because now uh some production uh network uh is depending on d-run so for example 5.1 is using d-run right now and we can't have the not one of the stop for for a while um yeah you can find the condition actually on the on the league of entropy and uh you can you can contact the the mailing list we have a mailing list and you can you can ask if you can join and then we can give you an exact set of the we have a whole uh google work on the exact set of criteria that we are looking for and um yeah you're welcome to to send us an email. uh Yeah, that's a good question. So currently the node itself is only written in Go. We have a prototype in Rust, but it's not been updated for a long time, and we need to put it up to work on it again. The thing is we pushed a lot for having a production-ready Deerun network around last summer, so the first block was launched in last August, and because we needed to launch a filecoin afterwards, and then we worked a lot on the filecoin, so we need to get back to work with Deerun now. 
But on the other side, there are multiple client implementations that can verify the randomness, and notably there's a latecomer in the game which is compiling a drand verifier to Wasm. It's actually being developed to be used in Wasm-based Cosmos smart contracts, so you should be able to verify drand randomness in smart contracts on Cosmos pretty soon, I think. Thank you.
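The HTTP distribution described earlier can be tried directly. The sketch below fetches a beacon from the League of Entropy's public HTTP endpoint and derives the expected current round from the chain's genesis time and period, as explained in the talk. The URL and JSON field names match the public drand HTTP API as I know it (`api.drand.sh`, `/info`, `/public/latest`), but check the drand documentation in case the interface has changed; BLS signature verification is omitted here.

```typescript
// Minimal drand HTTP client sketch; verification of the BLS signature is omitted.
// URL and field names follow the public drand HTTP API at the time of writing —
// double-check against https://drand.love if they have changed.
const BASE = "https://api.drand.sh";

interface ChainInfo {
  public_key: string;
  period: number;       // seconds between rounds
  genesis_time: number; // unix time of round 1
}

interface Beacon {
  round: number;
  randomness: string;         // hex, sha256 of the signature
  signature: string;
  previous_signature?: string;
}

async function main(): Promise<void> {
  const info: ChainInfo = await (await fetch(`${BASE}/info`)).json();
  const latest: Beacon = await (await fetch(`${BASE}/public/latest`)).json();

  // Time-to-round consistency: round = floor((now - genesis) / period) + 1.
  const now = Math.floor(Date.now() / 1000);
  const expectedRound =
    Math.floor((now - info.genesis_time) / info.period) + 1;

  console.log("latest round:", latest.round, "expected:", expectedRound);
  console.log("randomness:", latest.randomness);

  // A specific past round can be fetched as well:
  const r: Beacon = await (await fetch(`${BASE}/public/${latest.round}`)).json();
  console.log("round", r.round, "signature:", r.signature.slice(0, 32), "...");
}

main().catch(console.error);
```

This deterministic time-to-round mapping is what lets an application such as Filecoin know, ahead of time, which beacon will be used at which epoch.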
|
drand and the associated League of Entropy network deliver unbiasable, publicly verifiable randomness over the internet at a fixed period. This talk presents how drand works, what the current network looks like, and the applications that can be built on top of it.
|
10.5446/52246 (DOI)
|
Hi folks, my name is Dmitry and today I will be speaking about the need for secure composition and the reasons behind the Echo Marine. First of all, how we got there. We want to enable open applications. This is the new concept of removing security perimeter around the services and having microservices open for composition, for making new applications, composing new applications from running services using both code and data. And to do so, we need to decouple ownership or hosting from authorship from the authors of the code to enable new business models and to incentivize developers including the open source developers, which is important how we approach that. We consider applications as experiences and these experiences are delivered as front-end products so that you take a product and you interact with it and you have an experience. This experience is composed and brought to the user, is composed from function calls. Experience is a code which resides somewhere. It could be grouped into domain-specific microservices. It may be authored by different developers and it is running on machines on certain peers and these peers could have different owners. They could reside in different data centers. You may be a peer on your own or you can host your own machine or you can use some shared resources and so on. So we want to make an open network from these peers and it means that it needs to be self-coordinated. The peers are of the same rank. We want to get rid of gateways and controllers as they make a difference and actually they constitute this security perimeter and it disables the composability of microservices as they are hidden behind the gateway. And also we want the self-coordination to be efficient. That means if we can get rid of controllers then we don't need to have round trips between service nodes and controllers. Each node, each peer can take the full responsibility for choreographing itself or managing and coordinating itself. And to do so we want to have recursive routing which means that there is no need to send a knowledge back to the pre-questing node until it's explicitly stated that the technology is needed. So this is about approach. And what we did is a ecumrain. It's the name of the composability medium which is a part of the Fluence Network. It's a new language and protocol to compose distributed function calls and to enable open applications. So my main focus today is the language and why do we need the new language. So the problem for the language and for the runtime for the composability medium can be stated as following. First of all, you want to call functions on different peers and to do so you want to describe how to get from one peer to another, how the control flow should flow. So this is about the pathology and the network effect, how the control flow goes from peer to peer. Second on every peer we want to run some computations and we want to use outputs from one functions as inputs for another functions. And we want to do so in secure way. So it's about data and about the computational effect that happens on a single node. So at first let's take a look on what happens on a single peer, how the peer could approach it. So when control flow is moved to a single peer, what it can do? It definitely can do all the computations that could be done locally. And what it cannot do locally, it needs to delegate to another peer which are responsible for the subsequent computations. So we can describe a single processing step like the following. 
We have some incoming data package or network package, calling some code and some data. This data is used locally to produce data prime and we also have the code that expresses what computations to perform on this node and what the pollages should be involved next. What are the next peers should take the data prime and do their job. And it looks like functional languages where we have the effects and data and we can treat them separately. So the data is being modified on a single node, on a single peer, on each peer. And we want to have some security invariance. Like we want to prevent from man and the middle attack so that data can't be modified. We want to standard play attacks. We don't want to reach conflict state of the data. So data should be conflict free, replicated data type, CRDT. And this is the invariance which are held with the help of a single peer. Details of how we get it beyond the scope of this talk. But let's assume that we just have it. Then let's consider control flow, what it could look like, how we could model the network effects and this moving of execution from peer to peer. Design properties for this model first is simplicity. As distributed networks are already such a complicated topic, it would be nice not to introduce too much complexity for network effects. So simplicity is a must. But even having it simple, we want to have it flexible and we don't want to introduce unnecessary restrictions on the use cases and apologies which we could use. And finally, if we want to have open applications in permissionless networks, we need to take care of security. At least we need to keep the invariance which we were described before. And it's better not to introduce too much attack surface as the network effects. And here we have P calculus, process calculus. It's a theory that expresses how the control flow, how the execution goes from process to process, process being either local or network process. And it fits perfectly for the non-deterministic network behavior. And by the way, real world is non-deterministic, so it's also nice. Let's take a look how it fits our desires, design properties. First of all, P calculus is simple. This is the complete list of operations that we can do. We can send a message, we can receive a message, we can do combinators like do subsequent computations, co-product, product, parallel computations. We have some branching support with a match and mismatch. We can introduce local names, local variables. And we can do replication, which means that there are resources in the network which are not consumed with their use. Like if you call a function, it doesn't disappear after the call. And thanks to the theory, we can be sure that it is enough. Enough for what? For flexibility. With P calculus, you can do a lot of things. You can describe very complex systems. Starting with a request response or another control flow for a single action, like a controller, or you can do publisher, subscriber, broadcast, gossip for a network as a whole or for a subnetwork. It's possible to implement cademlia to describe cademlia and P calculus and other eventual consistent algorithms. You even can code and describe and reason about strong consistency in terms of P calculus. So P calculus is the foundation for coordination networks development. And it doesn't limit us on its own. I think it is wonderful. And finally, P calculus is secure. 
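For reference, the operations listed above are exactly the constructors of the standard pi-calculus process grammar; one common textbook presentation (notation varies slightly between sources) is:

```latex
% One common presentation of the pi-calculus process grammar (notation varies):
\begin{align*}
P, Q \;::=\;& 0                          && \text{inert process}\\
  \mid\;& \bar{x}\langle y \rangle . P   && \text{send the name $y$ on channel $x$, then run $P$}\\
  \mid\;& x(y) . P                       && \text{receive a name on $x$, bind it to $y$, then run $P$}\\
  \mid\;& P \mid Q                       && \text{parallel composition (product)}\\
  \mid\;& P + Q                          && \text{choice (co-product)}\\
  \mid\;& (\nu x)\, P                    && \text{restriction: introduce a fresh local name $x$}\\
  \mid\;& \,!P                           && \text{replication: the resource is not consumed by use}\\
  \mid\;& [x = y]\, P \;\mid\; [x \neq y]\, P && \text{match / mismatch}
\end{align*}
```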
Well, security is not a specific property of P calculus, but it could be derived thanks to simplicity and a very strict composition capabilities. So if we have security considerations for all the operators that they retain signatures on data, that there is no way to fake anything and so on, then by induction, we can assume that the complex scenario built from these operators is also secure in the same meaning of security. But for the open network, it also depends on the lower level of security. To stand the eclipse attack and civil attack influence, we have a truss graph for it, but it's also a scope of this talk, but we have it. And that's the reasoning for a covering. A covering intermediate representation, the low level language that we use to describe composition and control flow between the peers corresponds to P calculus. It's just P calculus put into practice. It's not the complete list of operations. And we have a match mismatch and similar things. You can take a look on it in documentation. But this is the inspiration for error and the main reason. And this makes it possible to factor the open application into two stacks, an internet works stack, which is a handled with a equipment and low level representation, the code representation for a command is error and computation stack influences handled with a fluence compute engine that runs WebAssembly and we have the low level representation of code meets wasm. On each node, that's about the protocol, how kumarin is processed. A current processing is structured with particles. Each particle contains some code. It's an error script and data. When this particle comes to appear, all the proofs and security guarantees are verified on this peer and error script is interpreted locally. According to P calculus based set of rules, services are called within WebAssembly runtime on the local node. This takes data as input and provides the modified data, which contains the new proofs for the new outputs. So it's signed, both inputs and outputs are signed by this peer. And finally, the particle is sent to the next peers who needs to process it. And the next peers, the list of next peers is derived from the P calculus based set of rules from the error script. The question is, can it be another language? Why error? Why not something else? Of course, it can be another language, but in any case, we need to consider this duality of having network specific control flow and special reasoning about control flow security. And we have the computations which could be performed locally with their own security model. So you can take any language and split computations with a control flow branching or moving, compile all of these into low level representation, enter error for control flow and WebAssembly for computations and use it to form a particle to send it to the fluency network and to have everything secure, flexible, concise and nice. So to conclude what the practical consequences. First of all, composition in an open network requires security to be taken into account and it's really important and without it, the open applications are not possible and composition cannot be secured. Equal Marine is a P calculus based approach to composition for permission less networks. It's very flexible, yet secure and open applications, meaning this removing of security perimeter and so on, can be built with WebAssembly for computations and error for control flow. Thank you for your attention. We have all our code open in Fluence Labs organization. 
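As a rough illustration of the particle processing just described, here is a conceptual TypeScript sketch of what a particle carries and what each peer does with it. Field names and the handler shape are illustrative only; they are not the actual Fluence wire format or API.

```typescript
// Conceptual sketch only: not the real Fluence data structures or APIs.
interface Particle {
  script: string;        // AIR script: control flow + which peers come next
  data: Uint8Array;      // accumulated call results, each signed by its producer
  initPeerId: string;    // who initiated the particle
  timestamp: number;
  ttl: number;           // particles expire, bounding replay windows
  signature: Uint8Array; // signature of the initiating peer
}

interface InterpreterOutcome {
  nextPeers: string[];   // derived from the pi-calculus-based rules of the script
  data: Uint8Array;      // data', extended with locally produced (signed) results
}

// What a single peer does when a particle arrives (see the protocol description above).
async function handleParticle(p: Particle): Promise<void> {
  if (!verifyProofs(p)) return;                   // 1. check signatures / invariants
  const outcome = interpretAir(p.script, p.data); // 2. run the AIR interpreter,
                                                  //    calling local Wasm services
  for (const peer of outcome.nextPeers) {
    await send(peer, { ...p, data: outcome.data }); // 3. forward to the next peers
  }
}

// Stubs standing in for the real implementations.
declare function verifyProofs(p: Particle): boolean;
declare function interpretAir(script: string, data: Uint8Array): InterpreterOutcome;
declare function send(peer: string, p: Particle): Promise<void>;
```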
Here's the link to the Aquamarine repository. Take a look at its index; we have some docs and examples there. Thank you for your attention. Bye-bye.
|
Aquamarine is the multi-process composition medium based on pi-calculus, designed for distributed applications/backends, both in private deployments and open networks. Aquamarine scripts define the topology of the execution (when and where to move control flow) and data dependency graph (what results and what arguments to pass where), essentially describing the composition of (micro)services, e.g. using one service's output as another service's input. The language primitives are based on pi-calculus operations describing certain topological effects and secured by cryptographic signatures of involved peers. The Aquamarine approach allows building distributed systems of any complexity, effectively expressing network behavior.
|
10.5446/52248 (DOI)
|
Hi everyone, my name is Pavel. I'm a frontend developer at Fluence Labs. And in this talk, I will be showing you how you can build frontend applications for Fluence Network. So my talk would be primarily focused on showing you some code in TypeScript and React. So let me just go through the slides real quick and get to the code as soon as quickly as possible. So in case you missed the introductory talk, what Fluence is, it is an open application platform. It is based upon peer-to-peer network with distributed compute protocol. It also has Okamarin language, which is used to compose applications together. And it will come with a blockchain economic layer a bit later. So speaking of application compositions, this is a wonderful feature of Fluence. Basically applications can build upon one another. They can share data, they can share users. And it is a really wonderful thing. You should definitely check out the introductory talk in case you missed it. So I will be mostly speaking about the frontend applications, which are running browsers. So from a frontend developer perspective, Fluence is a little bit different from usual development. The main difference is that with Fluence, we have the whole peer-to-peer network exposed to us. Usually when we develop web applications, we use HTTP requests and sometimes web sockets. And I'd say 99% of the time we have a single host, which serves all our requests. But with Fluence, we are free to call many different services, get data from them and compose the data. And we're using Okamarin and Particles for that. That would be the biggest difference. And I really would like to show you this in code. Welcome to the code demo. Unfortunately, I'm very short on time, so I won't be able to do live coding. Instead, I will be working you through a series of Git commits showcasing you how you can build a web application for Fluence network. It will be a step-by-step presentation starting from an empty Creator Act app all the way up to the fully working collaborative text editor. So I've already removed everything from the root component of the React app, and I've added some styles. I will be using during my presentation, so just to spare some time writing all of them by hand. So we will be using some NPM packages. First of all, we will need Fluence.js SDK. Well, obviously, since we are working with Fluence network. Another Fluence-related one is this network environment. Basically this is the place where we are keeping an up-to-date list of known Fluence networks and nodes in them. We will need this information to connect to one of the nodes. Auto Merge is a very nice CRDT-based library for various JSON format document synchronizations. We will be using it for text synchronization, but it can actually work with various JSON documents. This one will be used for gluing some plain text to auto merge document format. Okay, so let's start with our initial app structure. We have a header with two header items, and we also have a content area. We are not using any fancy routing framework, just a couple of statements. So when a user is not logged in, we are displaying just a simple welcome form, and when a user is logged in, we are showing the actual application. The actual application consists of a user list and the text editor itself. The user list is pretty simple at that point in time. So we are only showing our own name here, and text editor is very basic as well. So we are only using a single text area, and we are just wiring an on-change update handler. 
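The initial editor described here is just a controlled textarea; a minimal sketch in TypeScript/React looks like the following (component and prop names are mine, not necessarily those of the demo repository).

```tsx
import React, { useState } from "react";

// Minimal stand-in for the demo's first-cut editor: a single textarea with an
// onChange handler. Synchronization with other peers is added later.
interface TextEditorProps {
  onTextChange?: (text: string) => void; // hook used later to broadcast changes
}

export const TextEditor: React.FC<TextEditorProps> = ({ onTextChange }) => {
  const [text, setText] = useState("");

  const handleChange = (e: React.ChangeEvent<HTMLTextAreaElement>) => {
    setText(e.target.value);
    onTextChange?.(e.target.value);
  };

  return <textarea value={text} onChange={handleChange} rows={20} cols={80} />;
};
```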
So here is what it looks like. Let's say I'm a kitten. I can get into the room, write some text, and that's it. Nothing happens. So we should definitely work towards a more interesting application. So we will start by utilizing Fluence. The way you do is by using Fluence package, and you should start with the creation of Fluence client. Here, we are storing it in the use state hook, but you are free to store it wherever you like. For example, you can store client in a global variable, or if you are using a dependency injection framework, you can store it there. We will be just initializing the client when the application renders. So we are creating it with createClient function. And the argument here is a name or, I'd say, the multi-address of the any known node. This is where our environment package comes in handy. For example, in our application, we are connecting to the dev network and to the first node of the state work to be precise. So let's do something useful with the client. You can check the connection state using this isConnected property. And let's just show the connection status inside the header. So if we refresh a page, we see that we're getting connected just as the application renders. Okay, let's make our first call to the Fluence network. So I've created the file API, where we will be storing all, let's say, API calls to our application. And I've wired up a couple of API calls to button handlers. Yeah, and I should really have to mention that this way of using login form is oversimplified. In real world, you will be dealing with private keys and wallets and stuff like that. So I don't want to do all of that during the presentation. So let's just stick to a simple form where everyone is welcome. You can just say your name and you can get logged in. So we will have to call a user list service telling it that we are getting logged in. So the way you work with network calls in Fluence is by utilizing things called particles. You create it. You have a particle class and you create particle with this class. And you can later send it with one of the send particle functions. So I really have to raise a couple of very important points here. First of all, don't really bother too much with all this scary text. I'm just focusing primarily on the front end side here. And I will be explaining you in simple words what is being done here. Second, this is actually meant to be a compilation target rather than an end user language. So there would be a lot of improvements in this area. And I don't want to get too much into the details here just because we will be having two more talks where all these details would be explained. So I will just give you an analogy of what particle is. Basically with HTTP calls you're on the query single server and the URL and all these parameters in the HTTP call describe what do you want from server and the server responds with the data. But with Fluence you can query any node from your web client. So you might think of it as an exposed facade, for example, where you can query any microservice just from a single web application. So that's why you might think of this script as an alternative to URL. So what we're doing here is we are describing what we're actually doing with the network. And to be precise, we're just calling a user list service, which is located on user list node. And you should definitely pass these things as parameters. Think of it as a prepared statement. 
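Pulling the pieces above together, client creation and a fetch-style particle call look roughly like this. The function names (createClient, isConnected, Particle, sendParticleAsFetch) are the ones used in the talk; the package names, argument shapes and the AIR-style script string are approximations on my part, and the SDK has evolved since, so treat this as a sketch rather than working reference code.

```typescript
// Sketch based on the API names mentioned in the talk; argument shapes and the
// script below are illustrative assumptions, not the exact demo code.
import { createClient, Particle, sendParticleAsFetch } from "@fluencelabs/fluence";
import { dev } from "@fluencelabs/fluence-network-environment";

async function join(userName: string) {
  // Connect to one known node of the dev network (its multiaddr comes from the
  // network-environment package).
  const client = await createClient(dev[0].multiaddr);
  console.log("connected:", client.isConnected);

  // The script plays the role of a URL: it describes which services on which
  // peers to call. Service/peer placeholders below are illustrative only.
  const script = `
    (seq
      (call userListNode (userListService "join") [user] result)
      (call %init_peer_id% (callbackService "joined") [result])
    )`;

  const particle = new Particle(script, {
    userListNode: "<peer id of the user-list node>",
    userListService: "<user-list service id>",
    callbackService: "callback",
    user: userName,
  });

  // sendParticleAsFetch resolves when the particle comes back with data.
  const result = await sendParticleAsFetch(client, particle, "joined");
  return result;
}
```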
So getting back to analogy with HTTP, you might think of a particle as an HTTP on steroids just because you can query multiple servers and get multiple results. And it is actually very powerful. So knowing all that, we've created a particle and we're sending it using send particle as fetch. What it means is we are waiting for a particle to get back with the data and dysfunctional return a promise, which will be resolved with receive data, hence the name send as fetch. Here we are just checking the return result and the leave call is pretty similar to the join. Instead, we're just calling leave instead of join. So we are taking advantage of the fact that there are promises, so we are waiting on them and we will proceed only when the promise is resolved. So let's just try to connect. Let's say I'm a kitten and I can get inside the room and I can get out of the room. Nice, but we are not still working with the text, so let's fix that. Okay. I will be using this class. We only have to bother about several functions. So what this class does is it helps us to synchronize between our plain text and the auto merge state. So we should call this receive function or receive changes function when some changes from another peer comes to us. Handle doc update is a function which will be called by auto merge when it decides there are some changes to the state of the document and handle send changes will be called by auto merge as well when it decides to share some of the changes with outside world. So here's how we wire this function up. Again when the component renders, we are wiring up handle doc update and we are basically setting the text. We will do the changes broadcast later. We are starting the client and the last thing we are doing here is querying the history. We are using the history service to store all the changes made to the text area. Let's see this call. This is a particle as well, but it is a little bit more interesting mainly because we are using two different services here. First we are calling user list the same way we did in the previous particle. And we are asking for the authentication token. And then we are getting to the history service with this token. Otherwise it would say that we aren't unauthorized. So again we are sending particle as fetch just because we want data back and we would like to wait until the date is received. And once the date is received we are just updating our changes with the auto merge. Okay, let's try that. Cool. Let's say hello from Kitten, but it won't work just because we are not propagating our changes back. So let's fix this issue. So here we actually start to update our, to propagate our changes to other peers and we will do this with add entry API. I also wired up some updates. So when the text get updated I am updating this state with auto merge and I am doing it with this little function here and we are debouncing it just so in case someone writes too fast. So let's see the most interesting part here is the API call. And again this is not a particle, but it is different from the call. The usage of this particle is different from the others. So here we, again we are getting the token and we are adding our history log entry to the service. But we actually don't want to wait here and we don't need any data back, so we are just using playing send particle. This function will return a promise which would be resolved just when this particle got into the network. So it resolves almost immediately. So let's see how it works. We're still at Kitten. 
And we are trying to say hello to the other animals. We can see that the text does get updated, but the updates don't happen immediately, so we should definitely fix that as well. The way we do it is by changing the add-entry API call. As you can see, some more AIR script has been added here. What is happening is that we are querying the user-list service twice. First we are getting the token, the same way we did in the previous example, and now we are also getting all the users from the user list. Later we iterate through that list and notify every other user that we have some changes to be applied. This line here is actually a call to a service which has to be implemented on the web client, because in Fluence, web clients can also respond to service calls. You might think of it as incoming notifications from the network, something like WebSockets, but Fluence is a little bit more powerful than that. For example, we have an out-of-the-box, ready-to-use request-response pattern. In order for all of this to work, we have to implement the callback: with this particle we are sending notifications to other peers, and now we have to respond to these notifications. The way we do this is with a function called subscribeToEvent. This is the function you use when you don't need to respond with anything; it is just a subscription to all calls with the same service ID and function name. Please note that we are using the same variables here and here — we create these service and function names just for that — and we are receiving the same arguments. We are passing the initiating peer, the changes themselves and the token, and we are destructuring these arguments. The actual workload is pretty basic: we're just synchronizing the changes with Automerge. Let's try this. First let's leave. And it works — we have updates both here and in the first browser. Nice. You might be wondering what tetraplets are and why isAuthorized is just a simple boolean. The short answer is that with tetraplets you can validate that this isAuthorized value has come from the right service, that it hasn't been forged, and that it is verified by math. I won't be doing this in my talk simply because it will be described in much more detail in a later one, so be sure to check out our future talks. Okay, so the last piece missing here is the actual user list. I'm running out of time, so let me just quickly walk you through the code. Let's first look at how it works. We have online status synchronization: we can leave and we see the user list update. We can open another browser — a third one, for example — and we get everything updated. And if you just close the browser, the user goes offline after a couple of seconds. So let me walk you through this code. We're using the same kind of subscribeToEvent subscriptions. We are storing the user list in a single map and refreshing it: each user has a should-become-online flag, and we are updating this flag on a timer. The interesting part here is the three subscriptions. The first one is the subscription to the user-added event, the second one to the user-removed event, and the third one to the notification that a user is online. I would really like to show you how this works in Fluence calls. First of all, I've added some calls to the leave particle, so we are not just leaving the room.
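Before moving on to the leave flow, here is a rough sketch of the subscription callback just described. The `subscribeToEvent` signature, the service and function names and the argument order are assumptions based on how the talk describes the 2021-era SDK; the `SyncClient` import refers to the hypothetical helper sketched earlier.

```typescript
// Hedged sketch of the client-side notification callback described above.
import { FluenceClient, subscribeToEvent } from "@fluencelabs/fluence";
import { SyncClient } from "./syncClient"; // hypothetical module from the earlier sketch

const NOTIFY_SERVICE = "notifications";   // illustrative service id
const NOTIFY_FN = "incomingChanges";      // illustrative function name

export const listenForChanges = (client: FluenceClient, sync: SyncClient) => {
  subscribeToEvent(client, NOTIFY_SERVICE, NOTIFY_FN, (args: any[]) => {
    // Same argument order that the sending particle uses: initiating peer,
    // the Automerge changes, and the auth token.
    const [initiatingPeer, changes, token] = args;
    console.log(`changes from ${initiatingPeer}, token ${token ? "present" : "missing"}`);
    sync.receiveChanges(changes);
  });
};
```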
So we are not just calling leave from user list, but we are also getting all the users after that and notifying all of them. And the most interesting one is how online status is getting updated. So here we are getting all the users from the user list. And we are iterating this list. And we are basically sending a pin to another peer and getting back with the notification. So basically this works in the way that if a peer does not respond, we are not getting this notification and the user will become offline in the interface. And the way it works in callbacks, I need user list. We are just setting the online status. But I would point out that the most interesting thing here is that we have created the online status notification without any changes to either user list service or the history. So we've actually built another application on top of what we've got here and without needing to do any changes to the existing ones. And this is a really powerful feature of Fluence. So here is it. Here is how you can make a collaborative text editor with Fluence network. So here it is. Don't be afraid if you've missed something or you didn't understand some of the concepts or I believe that was quite a lot of code for a 20-minute presentation. So anyways, feel free to check our GitHub page. We have everything open sourced. The code for this demo will be available here in Fluence pod repository. Also check out our Fluence.js SDK and of course check out the documentation there. We also have a dedicated documentation page. So don't hesitate to visit it and check out the docs yourselves. And of course we have a very friendly community on Telegram. So feel free to come there and ask any questions in case you have any. So as far as I understand, we will be moving to some other place here where I will be happy to answer to all your questions.
|
Fluence is an open application platform where apps can build on each other, share data and users. Fluence not only allows hosting services inside p2p network but also provides JS SDK for building web applications, which communicate with the services. After the introductory talk, we will dive right into something very practical. We will demonstrate the process of making a web application with Fluence JS SDK. We will start with an empty create-react-app project and work our way towards the fully functional solution. By the end of this talk, we will develop a text editor, which synchronizes it’s state and the user online status with collaborators over Fluence p2p network. The application will be interacting with two minimalistic services pre-deployed to Fluence: user-list and history, but all of the features will be implemented on client-side without any need to modify existing software. Expect a lot frontend and a lot of code in TypeScript!
|
10.5446/57049 (DOI)
|
Hello everyone and thank you for having me here. My name is Alfonso de la Rata and I'm a research engineer at Protocol Labs and I'm here to talk about file sharing in peer-to-peer networks. Actually I'm going to talk about all of the research we've been doing at Protocol Labs research to try and improve file sharing and drive speed-ups in file sharing in peer-to-peer networks. So it is well known that file sharing and file exchange in peer-to-peer networks is hard because you have to worry about content discovery, content resolution, about the content delivery and there are a lot of notes in the network that potentially can have that content. And doing so without any central point of coordination, it's even harder because out there there are a lot of gamut of content routing system that help in this quest of trying to find the note that is storing the content we are looking for. Like for instance, BitTorrent had BitTorrent trackers in order to discover notes that store the content. In the Web 2.0 we see the DNS as the perfect system helping us to find the server that has the resource we are looking for. And in peer-to-peer networks we usually use a data structure and we organize content in a DHT in order to be able to find the notes that store the content we are looking for. The problem is that all of these content routing systems have their own trade-offs. So for instance BitTorrent trackers are, it's a centrally governed system, the same happens for the DNS, they are fast but they are centrally governed. And then we have the DHT that is like the big system in peer-to-peer networks, the peak content routing system in peer-to-peer networks. But the problem is that when the network is large and the system starts to scale, the DHT is pretty slow. So in order to overcome all of these trade-offs in content-routed systems, we came up with BitSwap. In the end, BitSwap is a message-oriented protocol that complements a provider system or content-routed system in the discovery and exchange of content in a distributed network. BitSwap is already deployed and it's already used in IPFS as the exchange interface and in blockchain as, sorry, in Falcoing as the Falcoing's blockchain optimization protocol. And BitSwap has a modular architecture that is really similar. In the end, BitSwap exposes a simple interface with two operations, a get operation and a put operation. The get operation is the one responsible for saying BitSwap that you want to find content in the network and download the content in the network. And then we have the put command that what it does is to store content in the network. So we will say, hey, BitSwap, this is the blog or this is the content or the file that we want to store in the network. And we see that the models in which BitSwap is comprised are the following. First we have a connection manager that leverages a network interface in order to communicate with other nodes in the network and exchange messages with other nodes in the network. Then we have the ledger. The ledger is to track, so whenever other BitSwap nodes send requests to our nodes, our ledger is to track all of the requests being made by other nodes. So in this way, we know what others are asking for and if we have that content in our block store, we will be able to send it back to them. 
And then we have the session manager that is the one responsible when we trigger a get operation is the one responsible for sending new sessions and orchestrating all the messages that will allow us to discover the content using BitSwap and then transfer the content or download the content from other nodes using BitSwap. The session manager leverages a content routing interface in order to communicate with another provider, I mean with the providing system of the content routing system that there may be in the network. So in the case of the example that we will be following throughout all of this presentation, which is IPFS, BitSwap complements the DHT as the content routing subsystem of the network. But BitSwap would be able to work with other content routing systems, for instance, DNS or a network database, and it would even be able to work in isolation without the help or the aid of another counter routing subsystem. We'll see in a moment how BitSwap works and why I'm saying this. But before we start with the operation of BitSwap, let's understand how BitSwap understands content and how it finds content and manages content within the network. So in BitSwap and also in IPFS, content is chunked in blocks. So we have, for instance, this file, this file will be chunked in different blocks that are uniquely identified through a content identifier or CAD. The CAD in the end is just a hash of the content of that block. And it's a way of being able to identify uniquely these blocks of a file or because of the duplication, these blocks can belong to more than one file as long as they have the same content. And these blocks are usually linked one to another in a DAG structure, like the following. And this DAG structure can represent a lot of things. From a file, like, for instance, a file with a lot of blocks that are the CAD route, it stores links for the rest of the blocks that comprise the file or it can represent a full file system. This will be the case, like, if, for instance, we have a directory, this would be the root of the DAG structure would be the directory that would have all the links for the files in the directory. In this case, we can have, for instance, like three files, and each of these files can be comprised of two blocks. So all of these items will have a CAD and here we will have, like, the CAD route with links to the files and the files with links to the blocks that comprise the files. And this is how BitSwap understands content or content in the network and interacts with content in the network. And I think that is worth noting before we start with the operation of BitSwap is the common request patterns that are used when finding content in a peer-to-peer network and specifically in IPFS. Usually we can find a common request pattern and the importance of knowing a common request pattern is that BitSwap will behave differently according to the request pattern used. So for instance, a common request pattern is when we are trying to find, we're trying to fetch a data set or a full file. For instance, let's consider this a data set with a lot of files where this is the name of the data set, there are some of the files in the data set and each of the files in the data set is conformed by a different number of blocks. So in this case, what BitSwap will do is first, so we say, hey, I want to get these data sets using the interface that we've seen, the get command that we've seen in the BitSwap interface. 
What BitSwap will do is first gather the root of the DAG, the CID root block. Once it gets this block, it will inspect it and check the links for the next level. It will fetch those blocks, and once it has them, it learns — through their links — about the blocks in the next level. In this way, BitSwap traverses the DAG structure level by level, gathering all the blocks in it, and this DAG can be as deep as we want it to be. That is one of the common request patterns, but there is another one that is also really common: the one we would use when we want to download the assets needed to render a website. Imagine that we have a directory that stores a website and we want to render the page webpage.html. In this case, the first thing BitSwap does is get the CID root, the root of the DAG, and then it traverses the path one block at a time, instead of going level by level and gathering every block of each level as in the previous request pattern. It takes the CID root, looks at its links, follows the one it is interested in — page — and once it gets the block for page, it reads its links and goes to the one for doc.html. Once it reaches doc.html, which is the file it actually wants to render, it reads all the links in that block and fetches all the blocks at that level. These are two of the common request patterns used when fetching content in IPFS with BitSwap, and this is the flow that BitSwap follows. As I've said, BitSwap is the exchange interface in IPFS, so to understand how BitSwap operates, let's see how it works when fetching a file in IPFS. BitSwap is a message-oriented protocol, so we will see six different messages: three requests — want-have, want-block and cancel — and three responses — have, block and don't-have. When we want to fetch a file from the IPFS network, IPFS first checks whether that file is in its block store. Imagine that I want to download the doc.html we've seen before: the first thing it does is check whether the blocks for that file are in the block store. If that is not the case, IPFS triggers a get operation in BitSwap, which starts a new session that begins looking for all the blocks of that doc.html file. The first thing a BitSwap session does is broadcast a want message to all of the node's connected peers. This want-have message is saying: hey, of all my connections, please let me know if any of you have the block for this CID. In this broadcast stage, what we are trying to find is the CID root of the content we're looking for. So if we're trying to fetch a full file, it is this specific block, the block CID1, that we try to find in the broadcast stage; and for doc.html it is the same — we try to find the CID root, which gives us the links we need to get down to doc.html. So we send all of our connections a request asking whether any of them have the block for this CID.
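As a simplified illustration of the two ideas above — the BitSwap message kinds and the level-by-level DAG fetch — here is a small TypeScript sketch. It is pseudocode-style, not the real go-bitswap or js-ipfs API; `fetchBlock` stands in for "resolve one block through a BitSwap session".

```typescript
// Illustrative sketch only; Block and fetchBlock are hypothetical.
type WantType = "want-have" | "want-block" | "cancel";       // request kinds
type ResponseType = "have" | "block" | "dont-have";          // response kinds

interface Block {
  cid: string;
  data: Uint8Array;
  links: string[]; // CIDs of child blocks in the DAG
}

// Fetch a full file or dataset: traverse the DAG breadth-first from the root.
async function fetchFullDag(
  rootCid: string,
  fetchBlock: (cid: string) => Promise<Block>
): Promise<Block[]> {
  const fetched: Block[] = [];
  let level = [rootCid];
  while (level.length > 0) {
    const blocks = await Promise.all(level.map(fetchBlock));
    fetched.push(...blocks);
    level = blocks.flatMap((b) => b.links); // descend to the next level
  }
  return fetched;
}
```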
And in parallel, BitSwap makes a request to whatever providing subsystem is available in the network — in the case of IPFS, the DHT — for that CID. So even if none of my current connections have the content, I have a way of knowing who in the network is storing it, in this case the CID root block of the content I'm looking for. Depending on whether these nodes have the content or not, they answer either with a have message saying, hey, I have this content, or with a don't-have message saying, I don't have it. And through the DHT, we may also learn about a node that I am not connected to that also has the content. What the BitSwap session does when it receives these responses is add all of the peers that responded successfully to the peers of the session. In the subsequent interactions for the discovery and exchange of content, instead of asking all connected nodes, the session will only ask the ones that answered successfully to this request — in this case, these three nodes. That is the view from the peer requesting content from the network. But what is the view of a peer that receives this request? Imagine the broadcast from peer A being received: if peer A sends a want for CID1 and a want for CID2 to peer B, what peer B will do, according to the requests it receives from peer A, is update peer A's want list in its ledger. We've seen that the ledger is the module used to keep track of which CIDs, which blocks, other nodes are looking for. So peer B keeps in its ledger information about the blocks that A is looking for. Peer B may not have the blocks and will send a don't-have to peer A saying, hey, I don't have it, but peer B will still remember the blocks being requested by peer A. That way, if it happens to receive the block through any other channel at any time, and it sees in A's ledger that A is still looking for CID1, it will immediately forward the block to peer A. And once it sends this block to peer A, as peer A now has the content, the entry can be removed from the ledger. So this is the view from the peers that are receiving requests from other nodes in the network. And what is the flow from the discovery to the actual download or exchange of the block? We've seen that peer A sends a want-have to all of its connections — peer B, peer C and peer D — and they answer according to whether they have it or not. In this example all of them have the file, so they answer with a have message and they are added to the session. As the first response received by peer A is the one from peer B, peer A says: okay, peer B has the content, so I will directly ask it for the exchange. Peer A sends a want-block — with a want-block we are saying, hey, please send me this block, I already know that you have it — and peer B answers with the block for the CID root of the content we are looking for.
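Here is a rough sketch of that discovery round in the same simplified TypeScript style as before. The `Peer` interface and its methods are hypothetical helpers, not the real BitSwap implementation.

```typescript
// Illustrative sketch: broadcast want-have, build the session from the peers
// that answer "have", and ask the first responder for the block itself.
interface Peer {
  id: string;
  sendWantHave(cid: string): Promise<"have" | "dont-have">;
  sendWantBlock(cid: string): Promise<Uint8Array | "dont-have">;
}

async function discoverRoot(connections: Peer[], rootCid: string) {
  const session: Peer[] = [];
  let rootBlock: Uint8Array | undefined;

  await Promise.all(
    connections.map(async (peer) => {
      const answer = await peer.sendWantHave(rootCid);
      if (answer === "have") {
        session.push(peer);
        if (session.length === 1) {
          // First peer to answer "have": ask it for the block directly.
          const block = await peer.sendWantBlock(rootCid);
          if (block !== "dont-have") rootBlock = block;
        }
      }
    })
  );

  return { session, rootBlock };
}
```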
In this case, peer C and peer D may answer the request afterwards, but peer A won't ask them for that block; it will, however, keep the knowledge that peer C and peer D potentially have the rest of the DAG, because they have the CID root. And from there on — as we've seen, once we get the CID root we learn about more CIDs in the DAG structure by inspecting its links, so we know what to ask for — and since all three peers are inside the session, because they answered this broadcast successfully, we can start asking for more blocks in the next level of the DAG. To peer B, for instance, we can send requests directly, because we know it has at least the CID root and potentially the rest of the levels of the DAG. So instead of sending a want-have, then a want-block, and going back and forth, I can send some want-blocks directly and want-haves for the rest. In order to have multiple exchange paths, peer A sends non-overlapping want-block requests to all of the peers in the session; in this way we spray the requests and try to get the content. Another thing worth noting is that a single BitSwap message can carry more than one request, because inside the envelope we have a want list of requested CIDs. Take the exchange between peer A and peer B as an example: for these three CIDs we are sending a want-block, and for the rest a want-have. And we see that, depending on whether it has each block or not, peer B answers with blocks to the want-blocks, with haves to the want-haves, and with don't-haves for the blocks that it doesn't have. Whether it is a want-block or a want-have, if peer B doesn't have the block, it answers with a don't-have. This back and forth of want-haves and want-blocks is repeated over and over down the DAG structure: once we have the CID root, we keep fetching level after level until we have all the blocks of the content we were looking for. But what happens if, at some point, I keep receiving don't-haves for all the blocks I'm asking the peers of the session for? Remember that here we are only communicating with the nodes inside the session — the ones that answered the CID root broadcast successfully, plus the ones I may have found through the DHT or content routing query. Imagine that, after sending this request, peer B says that it doesn't have any of these blocks. In this case, peer A removes peer B from its session and says, hey, I'm not going to ask this peer again, because it doesn't seem to have the rest of the blocks of the DAG I'm looking for. And this can be the case: some peers may store only the top levels of a DAG structure and not the whole DAG you are looking for. So peer B, as it no longer seems to have the blocks I'm looking for, is removed from the session. And what happens if all the peers in a BitSwap session are removed? Well, we have to do another discovery, another broadcast stage, in which we query the providing subsystem again to try to repopulate the session with potential nodes storing the content.
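The "spray" step above can be sketched as a simple splitting function: the CIDs of the next DAG level are divided into non-overlapping groups, one per session peer, with want-blocks for that peer's group and want-haves for the rest. This is illustrative only; the real implementation's heuristics differ.

```typescript
// One BitSwap message per peer, carrying its whole want list.
function splitWants(sessionPeers: string[], cids: string[]) {
  const plan = new Map<string, { wantBlocks: string[]; wantHaves: string[] }>();
  sessionPeers.forEach((p) => plan.set(p, { wantBlocks: [], wantHaves: [] }));

  cids.forEach((cid, i) => {
    const owner = sessionPeers[i % sessionPeers.length]; // round-robin assignment
    for (const p of sessionPeers) {
      const entry = plan.get(p)!;
      if (p === owner) entry.wantBlocks.push(cid); // ask this peer for the data
      else entry.wantHaves.push(cid);              // only ask the others "do you have it?"
    }
  });

  return plan;
}
```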
And we also broadcast again to all of our connections, just to check whether any of them have obtained those blocks in the time I was interacting only with the nodes in my session, or whether I have new connections that already have the content. Another thing to bear in mind about this broadcast: imagine that we got down to this level and, from here on, none of the peers in the session have the rest of the blocks. Since we already have these two levels, we start the broadcast from the blocks where we ran out of session peers, not from the top. It is a way of repopulating the session with candidates that potentially store the content, and of restarting the download of the remaining blocks. And finally, what happens when peer A gets a block? Peer A is communicating with a lot of nodes at the same time — we may have a lot of peers in the session. Let's consider peer A interacting with peer B as one of the peers in the session. If at some point peer A receives the block from another peer that is not peer B, peer A sends a cancel message to all of the nodes in the session to notify them that it is no longer looking for that CID, so that all the nodes in the session remove CID1 from A's ledger. From there on, even if peer B receives the block for CID1, it won't forward it to peer A, because it now knows that peer A has found the block from another peer in the network. So this is basically how BitSwap works. We did an extensive evaluation comparing BitSwap against, for instance, the DHT. We ran tests on an IPFS network where, in order to find a block, you had to resort to a DHT query, and we compared it with BitSwap, where the seeder was within the connections of the leecher. In this test we had 20 nodes, with a lot of leechers — 19 leechers — and just one seeder. In the DHT case, to find the block you had to search through the DHT; in the BitSwap case, since all the nodes are connected to one another, the seeder is connected directly to the leecher, and BitSwap does its back and forth of want-haves and want-blocks to discover and exchange the content. What we realized is that BitSwap is always faster than the DHT at finding content in the network, as long as any neighbor of the leecher has the content. Then we did another test to see how BitSwap and the DHT behave as the number of nodes in the network increases. This is not a hugely meaningful result, because we are talking about a dozen nodes and the real impact of using BitSwap compared to the DHT will be seen with many more, but we see that the more nodes there are in the network, the slower the DHT lookup is. BitSwap may have a bit more overhead — all these broadcasts and so on — to find the seeder that stores the content, but once it is found, the exchange of the block is straightforward. Of course, there is something to bear in mind here.
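Before that caveat, here is a rough sketch of the cancel flow just described: when a block arrives from any peer, the session tells every other session peer to drop the CID from the want list it keeps for us. The interfaces are hypothetical helpers, not the real BitSwap code.

```typescript
interface SessionPeer {
  id: string;
  sendCancel(cid: string): void;
}

class WantTracker {
  private pending = new Set<string>();

  constructor(private sessionPeers: SessionPeer[]) {}

  want(cid: string) {
    this.pending.add(cid);
  }

  // Called whenever a block is received, no matter which peer sent it.
  onBlockReceived(cid: string, from: string) {
    if (!this.pending.delete(cid)) return; // already handled or never wanted
    for (const peer of this.sessionPeers) {
      if (peer.id !== from) peer.sendCancel(cid); // let them clean their ledgers
    }
  }
}
```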
BitSwap is really fast as long as one of the connections of the BitSwap node stores the block. If that is not the case, the DHT will find a node storing the content with 100% probability, as long as the content is still stored somewhere in the network. That is not true for BitSwap: without any content routing system, if the content isn't held by any of the node's connections, it won't be found. But that's exactly why we use BitSwap as a complement to the DHT. Imagine that, for the content we're trying to fetch, none of our current neighbors has the CID root. That is when we resort to the DHT, or any other content routing subsystem, to find the CID root. Once we find it, we add that peer to the session and establish a connection with it. From there on, we interact directly with that peer and there is no need to go back to the DHT — no further DHT lookups — because we leverage the connection we already established in BitSwap while searching for the CID root to find the rest of the blocks. That is why BitSwap is so interesting as a complement to other content routing subsystems such as the DHT. But this is only the baseline operation of BitSwap, and one thing we realized while doing all these experiments is that BitSwap has issues — it is not perfect. That realization is how the Beyond Bitswap research project started. BitSwap today is a one-size-fits-all implementation, and it may not suit every use case and every kind of data: some applications want to be really fast in the time to first block, while others need to exchange a lot of data. There is a wide gamut of applications, and BitSwap has no way of being configured or tuned for any of them. We also realized that the current search, or discovery, of content that BitSwap does is blind and deterministic. It doesn't take into account what has happened before in the network: in the broadcast stage, it sends a want-have to all of its connections, regardless of what happened in previous sessions or previous requests for content or any other events in the network. It just broadcasts to everyone and tries to gather information about who has the content. We started realizing that maybe we could make this search smarter, leveraging the information that is already out there — in the network, in other protocols, and in BitSwap's own previous interactions with other nodes. And we also realized that BitSwap requests are pretty dumb, in the sense that they are plain requests where we ask for a list of CIDs, a want list.
And instead of this, since we have well-defined DAG structures, we could think about more complex requests. Rather than asking for blocks one by one and going back and forth to discover the links for the next levels of the DAG, maybe we could perform queries where, instead of saying 'give me the block for this CID', we say 'give me this full DAG structure' or 'this branch of the DAG' — a complex query where the list of blocks we are looking for is implicit in the query, instead of us having to work out by ourselves which blocks we need. And finally, of course, we could make BitSwap more efficient in its use of bandwidth. With these realizations, the Beyond Bitswap research project started. This is ongoing work; in this repo you will find all the information, and I highly recommend going there and checking it out, because there are a lot of ideas and prototypes that we are exploring, and we invite everyone to contribute. We also have the testbed where we are running all the tests, and we invite everyone to join our quest. To give you a glimpse of what we have done so far: we have already prototyped three of the RFCs that have been discussed in that repo. We have explored the use of compression at the network interface, which we will see in a moment; we have explored gathering information about what is happening in the network to make the search for content more efficient; and we have added a new module, the relay manager, to increase the range of discovery of BitSwap messages. We started with compression. HTTP already uses compression when downloading data from the web, so if HTTP does it, why aren't we using it to make a more efficient use of bandwidth? We tried three strategies. First, in the same way that HTTP can compress the body, we asked: what if we compress the blocks? We did get some bandwidth savings, but there was an overhead from having to compress, block by block, all the blocks included in our BitSwap messages. Then we tried full message compression, where every single BitSwap message is compressed, and again we saw roughly the same behavior. But then we realized: what if we go down to the network protocol — which, for the BitSwap implementation in IPFS, is libp2p — and implement stream compression at the protocol level? And that's what we did. We explored compression at the protocol level for BitSwap and for libp2p, and with a smaller overhead than the schemes above, we managed, for certain data sets, to get up to 70% in bandwidth savings. So this was our first win. I'm adding some URLs here; on the PL research blog you will be able to find all of our contributions — we have been documenting all the work we've been doing around the Beyond Bitswap project. Once we had compression going, we said: okay, we saw that we can leverage information from previous interactions in the network to make better discovery of content using BitSwap.
So the next thing we implemented was want message inspection in BitSwap. The idea is that if a node is requesting content, it may well be storing it in the near future. So instead of broadcasting to everyone when we try to find content, if any of my connections has requested that CID before, let's go and ask directly for the block from the node that requested it. What we implemented is a want message inspection in which BitSwap nodes inspect the requests coming from other nodes and, for each CID, keep a list of the top ten nodes that have requested that CID most recently, so that instead of broadcasting to everyone, I send a want-block directly to the node that requested that CID most recently. If we go back to the architecture, the peer-block registry is similar to the ledger. In the ledger, whenever a peer has found a block it sends a cancel and we remove that entry; here we are also tracking the requests from other nodes, but we keep the registry updated so that we know which peer most recently requested each CID. So in the discovery phase, instead of sending a want-have to everyone, we just send a want-block to the top peer that requested that content recently. We did some experiments with 30 nodes, where we had just one seeder and a lot of leechers trying to find content, and the leechers came in waves. In the baseline, of course, the more nodes that had the content, the easier it was for a node to find it; but with the want inspection prototype, even when a lot of nodes have the content and the time to fetch a block stabilizes, we reduce the time to request a block by one RTT. Instead of having to send a want-have to everyone and then a want-block to get the block, we directly send a want-block to the peer that we know requested the content recently and potentially has it. What happens if that peer doesn't have the content? It doesn't matter: we lost one RTT and we start over with the traditional want-have/want-block discovery of baseline BitSwap. Another interesting consequence of this prototype is that we significantly reduced the number of messages exchanged between nodes, because if you have an entry for that CID in the peer-block registry, you skip all these want-haves and the back and forth, and send a single want-block directly to that peer. So that was another big win for BitSwap. And then we went one step further and said: okay, the problem is that if none of our neighbors has the block we're looking for, we have to resort to the content routing system.
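Before moving on to the TTL idea, here is a small sketch of the peer-block registry just described: a map from each CID to the peers that most recently asked for it, capped at ten entries per CID. It mirrors the prototype's idea but is not its actual code.

```typescript
class PeerBlockRegistry {
  private byCid = new Map<string, string[]>(); // CID -> most-recent requesters
  private static readonly MAX_PEERS = 10;

  // Called whenever we see a want message from another peer.
  recordWant(cid: string, peerId: string) {
    const peers = (this.byCid.get(cid) ?? []).filter((p) => p !== peerId);
    peers.unshift(peerId); // most recent requester first
    this.byCid.set(cid, peers.slice(0, PeerBlockRegistry.MAX_PEERS));
  }

  // Discovery phase: if someone asked for this CID recently, send a
  // want-block straight to them instead of broadcasting a want-have.
  bestCandidate(cid: string): string | undefined {
    return this.byCid.get(cid)?.[0];
  }
}
```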
But if we add a TTL to BitSwap messages, so that these messages can jump more than one hop, then even if it is the neighbor of my neighbor that has the content, I don't have to resort to the content routing subsystem: I can use my neighbor as a relay to reach the content stored at my neighbor's neighbor. Here is how this works. If peer A sends a want message to peer B, what happens in the baseline implementation of BitSwap is that peer B doesn't have the content, says 'hey, I don't have it', and then peer A has to find its own way to the content. In this case, instead, when peer A sends a want message to peer B, peer B starts a relay session and forwards the message on to its own neighbors, until the TTL reaches zero. Depending on whether they have the content or not, peer C and peer D answer back to peer A accordingly. So what we are doing is letting peer A communicate with peer C and peer D as if both of them were peer A's neighbors, using peer B as a relay. With this, we increase the range of discovery of BitSwap without having to resort to an external content routing system. And the results were pretty pleasant. Here we had 30 nodes, with one single seeder, a lot of passive nodes — passive nodes just run the BitSwap protocol but do nothing — and then a lot of leechers trying to find the content. Seeders and leechers couldn't be connected to each other directly, so they would either have to resort to a content routing system to find the content, or use this jumping BitSwap to reach the content at the seeder. What happened is that the DHT — having to do these DHT requests to find the seeder — is slower than using the TTL, jumping through a passive node to find the seeder and using that passive node as the relay between the leecher and the seeder. And then another interesting thing: we said, okay, we are sending a lot of want messages, we are exchanging a lot of information between nodes, there are a lot of requests flowing around the network — what if we mix jumping BitSwap, the use of the TTL in BitSwap messages, with the peer-block registry, the want inspection? Because as we are gathering more information from nodes that are a few hops away from us, we can leverage this information to make more direct searches. And this is actually what we did, and it worked: the fact that we were gathering more information in the peer-block registry, because of all this flow of relayed and forwarded want messages through passive nodes, combined with the fact that when we know where a block is we can skip the want-have/want-block dance and send the want-block directly and get the block back, meant a significant improvement in the time to fetch blocks. Of course, this always comes with a trade-off, and the trade-off is that we are using symmetric routing: to fetch the block, we use the same path we used to discover it. Compare this with what peer A does when it goes to the DHT: it finds out who stores the content — say, peer C — and directly establishes a connection with C.
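As a rough sketch of the relayed want handling described above: if a node doesn't have the block and the TTL allows it, it forwards the want one hop further; answers would then be relayed back toward the origin peer (not shown here). All interfaces are hypothetical, not the prototype's real code.

```typescript
interface RelayedWant {
  cid: string;
  ttl: number;
  origin: string; // peer the answer should eventually be relayed back to
}

interface Node {
  id: string;
  hasBlock(cid: string): boolean;
  peers(): Node[];
  send(to: string, msg: unknown): void;
}

function handleWant(self: Node, from: string, want: RelayedWant) {
  if (self.hasBlock(want.cid)) {
    self.send(from, { type: "have", cid: want.cid });
    return;
  }
  if (want.ttl <= 0) {
    self.send(from, { type: "dont-have", cid: want.cid });
    return;
  }
  // Start a relay session: forward the want one hop further with TTL - 1.
  for (const peer of self.peers()) {
    if (peer.id === from) continue;
    self.send(peer.id, { type: "want", ...want, ttl: want.ttl - 1 });
  }
}
```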
So from there on, the communication is directly between A and C. With our jumping BitSwap, instead, we are using peer B as the relay, and peer A may be connected to B and D, with B and D also connected to each other. The thing is that, in the end, there are a lot of messages flowing through the network, and if more than one relay finds the block, there may be a lot of copies of the block flowing around as well. That's why we see this increase in the number of duplicate blocks in the network, compared to the case in which we use the DHT just to discover the node that stores the content and then communicate with it directly. We are already thinking of ways of improving this: if, instead of using the relay session to perform the exchange of the content, we used asymmetric routing — using the TTL only to discover the node that stores the content and then, the same way we do with the DHT, establishing a connection directly with that node — we would reduce the number of duplicate blocks. And what is the problem with duplicate blocks? In the end, it is an inefficient use of bandwidth. But this is all we have tried so far to improve file sharing in peer-to-peer networks. This is ongoing research, and I invite everyone to join us in this quest. There are a lot of RFCs with potential ideas for improvements, not only to file sharing in IPFS or in BitSwap, but to file sharing in peer-to-peer networks overall. So have a look at them and join the discussion in order to give us feedback about what is happening out there. There are already research and development teams building prototypes for the RFCs and coming up with new RFCs that are being discussed in the repo. So, in the end, if you like these topics, help us make file sharing in peer-to-peer networks blazing fast by going to this repo, joining the discussions, and proposing new ideas and prototypes. There you will also find the testbed and ways to replicate the results that I've shown throughout the talk. And that's all from me — please, if you have any questions or any feedback, let me know.
|
This is an overview of IPFS integrations across various platforms, devices and network transports - including browser integrations, video demos of IPFS apps on a native web3-based OS on Pixel 3, IPFS content loaded into various XR devices like Oculus Quest and HTC Vive Flow, and mobile-to-mobile IPFS apps via Bluetooth LE.
|
10.5446/52255 (DOI)
|
Hi all, my name is Neil. I'm one of the leading engineers on the Dendrite and the PTP Matrix project at Element, and my talk today is called Pine Cones and Dendrites, which will be a bit of an update and some light storytelling about peer-to-peer matrix, as well as our plans for further development. First of all, I'd like to start by taking the opportunity to talk about Dendrite, our second-generation matrix home server, written in Go. We picked up work again on Dendrite at the end of 2019 after a couple of years of relative inactivity. Dendrite was originally planned as the go-to home server for large-scale matrix deployments. It's built using a microservice architecture that can optionally scale components across different machines, which sounded ideal at a time that the public matrix.org server was experiencing routine growth pains. Dendrite is very important in the PTP matrix space. It's the home server that we use for all of our PTP development, experimentation, and demos. We've also seen increasing engagement from the community and a number of excellent contributions, and we finally decided in October 2020 to move Dendrite out of Alfa and into Beta. Dendrite is usable today. It still has some bugs and it still has some missing features, but the core matrix experience is there and it's functional. At the end of December, we also took the step to build our own public Dendrite instance at dendrite.matrix.org, which is open for public registration. If you want to see how using Dendrite feels without having to build your own, this is the way to do it. As well as being able to scale up, we've proven that Dendrite can also very efficiently scale down, which has made it the perfect testing ground for our peer-to-peer experiments. It can run lean enough to make single-user deployments on low-power devices sustainable. But first and last year, we announced the PTP matrix project. Our goal was simple, to move matrix home servers out of data centers and right onto end-user devices instead. It's an ambitious project with ambitious goals, but our belief is that peer-to-peer matrix is the next logical evolution that will take your data away from remote location and bring it back into your own hands on your own devices. But why are we doing this? Today, matrix is known as a federation of home servers, the idea being that matrix is decentralized. A user can pick a home server of their own choosing, or even run their own, and still talk to users on other servers. The matrix federation currently homes over 100,000 users, but there are still some pretty significant issues. The first being that to truly embrace the spirit of decentralization, you shouldn't really have to rely on someone else to run a home server for you. Building a home server isn't terribly difficult, but it's certainly not simple enough for the majority of everyday users, especially since they need to be maintained, upgraded, fed, watered, etc. The net result here is that we tend to see a lot of users centralizing around the matrix.org home server and a small number of others, either because it's not obvious that there's an alternative, or because it isn't really easy to know what to look out for when you're actually choosing a home server. And that's not great when we're supposed to be encouraging people to avoid centralization. 
Users also bring lots of questions about what information is being stored on their chosen home server: will their private chats be visible to the owner of that home server, what metadata is collected? And we continuously watch as companies try to take advantage of user data — most recently the tighter integration between WhatsApp and Facebook — so privacy is becoming more and more important and much closer to the public eye as a result. And finally, there's still the fairly big problem that a user might register on a home server that's run by someone else, and then one day that home server might just disappear from under them without any warning, and there's nothing they can do about it. So one of the big questions facing the P2P project was what it means to be a decentralized system versus a distributed one. As we've seen, it can be quite difficult to promise decentralization when the path of least resistance is centralized, and the matrix.org home server is testament to that, since it has become a rather accidental point of centralization even though it was never intended that way. Pushing further into distributed territory forces us to think carefully about how to level the playing field, removing obvious points of centralization and making it easier for users to get started with Matrix in the process. Over the last year, we've built not just one but four separate demos as a means to explore what P2P Matrix might look like. We picked different technologies and targeted different platforms as part of these demos, and I'd like to talk next about what those demos look like. In each of these demos, the client-server API remains unchanged, so in theory it's possible to take any existing Matrix client and use it with P2P just as you do with an existing home server today. The first demo we built, for FOSDEM last year, was the dendrite-demo-libp2p binary. It supports local discovery of peers on the same network, and servers are identified by their public key. You can publish rooms into a directory which can also be discovered by nearby users, and you can join those rooms and chat away as normal. However, the demo is very limited. It doesn't work over the internet, or even outside of a single subnet for that matter, and it doesn't have any glue like a DHT to actually help discover other nodes elsewhere in the world. Even if it did, libp2p still appears to be quite centred around the idea of nodes being globally routable to one another, which isn't necessarily going to be the case in areas with limited connectivity. The second demo was a bit of an evolution of the first, but instead of building a standalone binary and having to run the process yourself, we took it a step further and asked ourselves what it would look like if Dendrite was running right there in the web browser. We achieved this by compiling Dendrite to WebAssembly. We used libp2p again for this demo, but running in the web browser has a number of limitations, including not being able to access the usual host networking that we were able to leverage with the first demo, so there's no multicast peer discovery for starters. To work around this, we built a libp2p rendezvous server to allow users to discover each other over the internet, which effectively acts as a traffic relay.
This is an unfortunate point of centralization, and it isn't remotely compatible with our real project goals, but what it helped us to prove is that we can run a full Matrix home server right there in the browser, and the user doesn't really need to do anything special to make it work, aside from just showing up with a supported browser. With WebSockets and WebRTC, it's very likely that we would be able to extend this model further in the future without having to rely on centralized rendezvous points. The third demo is very much like the first one. It's another standalone binary that you can run on your own machine, but instead of libp2p, we swapped it out for Yggdrasil. Yggdrasil is a very different animal to libp2p, because instead of assuming direct global reachability over the internet with other nodes on the network, Yggdrasil builds up an overlay network where all nodes are basically equal participants, and any node with more than one peer connection can forward traffic on behalf of other nodes. It's still highly experimental, and it comes with some problems, which I'll discuss shortly, but it gives us something that the libp2p demo didn't have, which is the ability to grow beyond the local network by connecting to other Yggdrasil nodes over the internet. Furthermore, existing Yggdrasil nodes don't require any modifications whatsoever in order to act as suitable routers for our demo. And finally, there's the most recent demo, which we built on iOS as an attempt to explore what P2P Matrix might feel like if it was running on a mobile device. To do this, we cross-compiled Dendrite and spent a little bit of time embedding it into Element iOS itself, of which there's now a P2P variant available on TestFlight. This demo actually started off being based on Yggdrasil, but more recently it has become the testbed for our own research project named Pinecone. The iOS demo has one special magic trick which none of the other demos had, which is called AWDL, and that's the ability to communicate with other nearby devices running the same demo, regardless of whether or not the devices are on the same Wi-Fi network, and regardless of whether or not they have any mobile data connectivity at all. It's a truly ad hoc demo. You can take a handful of devices out into the woods, and as long as they're within wireless range of each other, they'll be able to communicate and relay on behalf of each other, and the network topology builds up automatically. In some ways, this is very close to the P2P dream — completely zero-configuration networking — and it sort of works today. On the left-hand side you'll see the iPad display, on the right-hand side the iPhone one. These two devices are physically close to each other, so I'm going to go ahead and open up the P2P app on the left-hand side. It has only been opened once before, basically just to quickly register an account on the local Dendrite instance, which is actually embedded into the app directly, and you'll see that I've created a test room. At the top left you'll see a piece of text that says no connected peers. I'm going to go ahead and open it on the right-hand device. And you'll see straight away that both sides have now detected peers. It shows two peers in this instance, because one of them is over the local Wi-Fi network and the other one is using AWDL, but effectively it's just two connections between the same pair of devices.
And I should be able to go into the room directory here on the right-hand side, browse the directory, and find the test room. At that point I will join, and it shows on the left-hand side that the iPhone has joined. So I'll send 'hi'. And this appears on the left. In both of these instances, the devices are communicating directly. There's no centralized home server being used or anything like that. This is Dendrite running directly on the iPad, communicating with Dendrite running directly on the iPhone. I briefly mentioned Pinecone there, so at this point I'd like to formally introduce it. Pinecone is a research project into developing an overlay routing scheme for P2P Matrix, and I'd like to talk a little bit about why we're bothering to develop something like this ourselves rather than just using something off the shelf. Pinecone is heavily inspired by Yggdrasil, and it shares much of the same design. It works by building a global spanning tree, of which all nodes are a part, and then assigns them a set of coordinates based on their position relative to the root of the spanning tree. These coordinates are basically the path taken from the root down to the given node. A distributed hash table is used to help one node find the coordinates of another node. A core design choice in Yggdrasil is that it is a greedy routing scheme, which is where every node on the network makes a forwarding decision based on local knowledge only. The protocol only allows forwarding traffic to a peer which takes it closer to its destination. We did successfully produce demos using Yggdrasil as a transport for Matrix federation traffic, but there are some problems with it, the biggest of which being that all the network locators being relative to parent nodes makes the network fragile when the topology changes, and that can happen for a variety of reasons. In that case, it's possible that parts of the network will need to renumber with new sets of coordinates if one of the parent nodes or the network root node disappears. When this happens, traffic across the network is disrupted, and in that short period of time when the network is trying to reconverge, nodes can no longer depend on their local routing knowledge to forward traffic properly. Then, when the network does eventually reconverge, we have to start searching the DHT again for the new coordinates of the nodes that we want to talk to, and that can result in quite a lot of protocol traffic, especially on a big network. In addition to that, we've also seen cases where unstable network links can result in storms of constant coordinate changes, which can render the network unusable for a period of time. The Yggdrasil project is exploring protocol changes to mitigate problems like this, but in the meantime, we've also been trying to work on some mitigations of our own. Pinecone is our attempt to explore whether we can improve upon the Yggdrasil design by using either source routing, virtual ring routing, or some combination of the two, in the cases where the standard greedy routing fails. So far, Pinecone has support for source routing, which allows pathfinding between two Pinecone nodes to work out the exact path through the network to take to reach another node, and then to reuse it exactly. This gives us some resilience against the coordinates changing, as the source-routed path will be unaffected, although it does add additional fragility, in the form of the path breaking altogether if a node on the path disappears.
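As a toy illustration of the spanning-tree coordinates and greedy forwarding described above — not Pinecone's or Yggdrasil's actual code — here is a small TypeScript sketch. Coordinates are the path from the root to a node, and the tree distance is the number of hops up to the common ancestor and back down.

```typescript
type Coords = number[];

// Tree distance between two coordinate sets: hops up to the common ancestor
// plus hops back down.
function treeDistance(a: Coords, b: Coords): number {
  let common = 0;
  while (common < a.length && common < b.length && a[common] === b[common]) {
    common++;
  }
  return a.length + b.length - 2 * common;
}

// Greedy forwarding: pick a peer strictly closer to the destination than we
// are; if no such peer exists, the packet cannot make progress from here.
function nextHop(self: Coords, peers: Map<string, Coords>, dest: Coords): string | null {
  let best: string | null = null;
  let bestDist = treeDistance(self, dest);
  for (const [peerId, coords] of peers) {
    const d = treeDistance(coords, dest);
    if (d < bestDist) {
      bestDist = d;
      best = peerId;
    }
  }
  return best;
}
```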
In practice, Pinecone will be able to run additional pathfinding in the background to ensure that the most optimal path is being taken, and to keep backup paths cached in case the active one fails. We're also planning to investigate virtual ring routing, which more closely marries the routing information base with the distributed hash table, or DHT. This will work by having nodes find their nearest keyspace neighbours on the network and maintaining those paths, so that traffic can be forwarded in the correct direction based on the target public key, and we take shortcuts every time we intersect another one of those paths that takes us closer. This rather heavily reduces the reliance on the coordinate system of the spanning tree altogether, but most importantly, it has another side effect, which is that it shows really promising results in node mobility tests — which helps particularly in cases where devices are moving around a lot or their connectivity changes a lot. Pinecone's overlay routing is designed to solve the problem of how P2P Matrix nodes will find and communicate with each other, but this is ultimately only solving one part of the problem. Today's Matrix federation protocol is full mesh, and that's a pretty big problem for us. Regular home servers today often have good connectivity and good uptime, and it's reasonably likely that there will be lots of users on a single home server, so it isn't so much of a problem there, but we don't have any of these guarantees in the P2P world. It's very likely there will be lots of single-user home servers, they will probably be offline often, and they might not have great connectivity even when they are online. Therefore, it will be a secondary goal of Pinecone to assist in building per-room topologies so that we can effectively gossip events to other online servers in the room, and to identify when servers are offline for the purposes of store-and-forward. We'll be working on P2P Matrix a lot throughout this coming year, so keep an eye out for more demos. I've talked a fair bit about how we can get P2P nodes to communicate with one another, and also about the kind of changes that would need to happen to the Matrix federation protocol, but there's still a fairly big elephant in the room, and that's how we handle user identities in the P2P world. Our early-stage proposal here is a Matrix Spec Change proposal called Portable Identities. In the Matrix world today, user identities are very closely associated with the home server that the user registered on — in my case, that's matrix.org, and that's right there in my Matrix ID — but this fundamentally goes against the model that we have in mind for P2P Matrix, which is that if we want to bring the logic of home servers closer to the end user, perhaps even running on their own devices, then we need to be prepared for users having multiple devices and therefore multiple home servers. So this opens up two questions. One is how we will handle a single user identity across multiple home servers, and the other is how users will be able to get and keep human-friendly aliases like today, instead of having to deal with public keys. Our goal here is to factor out the concept of a user identity so that there's no longer a built-in assumption that a user ID is granted by a specific home server.
In MSC 2787, our proposal is that this will be a cryptographic key pair, and the user will be able to use this key pair not just to prove their identity to a home server, but also to sign attestations that grant home servers the right to act on behalf of that user for a specific amount of time. When I say act on behalf of, what I mean is that the home server will be authorized to send messages into rooms for the user, handle invites, and do the things that a regular home server would do today on behalf of its registered users. A user will be able to choose to not renew an attestation at any time, at which point the home server will lose its power to act on behalf of that user, if for instance it was discovered to be malicious or the user changed their mind about where they want their data to be held, etc. In addition to this, it's not reasonable to expect that mobile devices or desktop PCs will have DNS names, but we do expect that Matrix home servers that are static in data centres will continue to exist, even if they are also speaking the P2P dialect. We envisage that these home servers will be able to grant a user a friendly alias based on the server's DNS name in exchange for an attestation. And this will pretty much take the form of a directory lookup where you ask the server about an alias, similar to how room aliases work today, and it will return information about the user's cryptographic identity and attestations, as well as being able to then handle incoming events for that user. And that means that the user won't have to deal with handing out public keys, and it also potentially opens up the option for a user to have multiple aliases from more than one home server instead of just one like we do today, which grants them some portability. Finally, we'll need the home servers that a user belongs to to be able to backfill from each other and replicate the user's room memberships and perhaps some history so that the user can pick up any of their devices and see the same rooms with the same timelines. It's important that the user doesn't lose everything if they lose one device as well, so attesting multiple home servers or devices will not just serve for a consistent user experience across those devices, but it also serves as a form of backup. MSC 2787 as a proposal is still incomplete, and there's still quite a lot to figure out, but this should be the final piece of the puzzle that would make P2P usable. We'll be continuing to post more updates in our P2P and our Dendrite channels, so please feel free to join us on those on Matrix. Otherwise, thank you for listening.
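Since MSC2787 is still a draft, there is no fixed wire format yet; the following is only a rough sketch of the general shape of the idea, using Node's built-in ed25519 support: a user key pair signs a time-limited statement authorising a named home server to act for that user, and anyone can verify it against the user's public key. The field names are assumptions, not the proposal's actual schema.

```javascript
// Illustrative only -- not the MSC2787 wire format, which is still being designed.
const crypto = require('crypto');

// The user's long-term identity is a key pair; the public key *is* the identity.
const { publicKey, privateKey } = crypto.generateKeyPairSync('ed25519');

// A time-limited attestation granting a home server the right to act for us.
const attestation = {
  user_key: publicKey.export({ type: 'spki', format: 'der' }).toString('base64'),
  server_name: 'example.org',                        // hypothetical home server
  not_before: Date.now(),
  expires_at: Date.now() + 7 * 24 * 60 * 60 * 1000,  // say, renewed weekly
};

const payload = Buffer.from(JSON.stringify(attestation));
const signature = crypto.sign(null, payload, privateKey).toString('base64');

// The server (or anyone else) can verify the attestation against the user key.
const ok = crypto.verify(null, payload, publicKey, Buffer.from(signature, 'base64'));
console.log('attestation valid:', ok);
```

Letting the attestation expire, rather than revoking it, is what gives the "choose not to renew" behaviour described in the talk.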
|
Matrix is an open protocol for secure, decentralised communication - defining an end-to-end-encrypted real-time communication layer for the open Web suitable for instant messaging, VoIP, microblogging, forums and more. We introduced P2P Matrix at FOSDEM 2020, and throughout 2020 we've been working on improving P2P Matrix. This includes massively improving Dendrite, our next-generation Matrix homeserver implementation, implementing P2P Element for genuine mesh networks on iOS via AWDL, using Yggdrasil as a P2P overlay network - and more recently implementing Pinecone; a next-generation P2P overlay network inspired by Yggdrasil which supports source routing and virtual ring routing as well as typical greedy routing. In this talk we'll show off all the progress and give a VIP tour of Pinecone.
|
10.5446/52256 (DOI)
|
Hello everybody, welcome to our talk about data sovereignty and zero trust architectures. We appreciate the chance of being able to present for you here today. We hope you have been enjoying this virtual event so far. Even for us it is a new experience to present here today. My name is Stefan and I am the founder of Pilar Enterprise Architects, a team of seven situated in Cologne in Germany. Today I brought along two very dedicated co-workers. First please meet Marvin. Marvin is enthusiastic about security. In fact sometimes it feels almost like an obsession. Marvin will join us today and if you keep an eye on him you will see if he is happy with what he sees. Next up is Eliza. She is always eager to communicate and to create new connections to identify opportunities. Sometimes she is so fast that even Marvin cannot keep up with her. Marvin is very critical about security measures that have been applied in the past. Marvin points out that security has not received the attention that it should have. He is convinced that the bilateral approach of only protecting IP connections is tedious and offers little benefit. Looking at the protection of complex APIs, they are missing the required granularity and may lead to unwanted access routes. This does not enforce security best practices in a company. The old and outdated approach to security is completely unsuited for rapid change or for establishing new data channels. If you step deeper into the IT architecture then you could depict a picture like the one you see here. In the past we defined trust perimeters within or around our installed networks with artificial borderlines, DMZ zones and several domains. This happened in the belief that it is important to protect your enterprise from external threats. And although you were able to protect your enterprise from external threats, it is now a fact that most security attacks are successful because of internal actors. This is sometimes on purpose, but in most cases it is because of a mistake, like clicking on a link in the newest phishing email or, even worse, following instructions from a successful social engineering attack. The bottom line is: with the old perimeter approach your enterprise is only protected from about 30% of the attacks, because you only protect against external threats. This old approach that is based on trust perimeters with artificial defense lines is too statically designed. You have built your defense once and then you hope that you can run with this defense forever. What happens if a new requirement approaches the IT department? You are introducing little holes into your own protection measures, meaning that you will also generate security exceptions. Most times adding business functionality is more important than getting the security right at the same time. In the end you may have built a highly sophisticated security gateway, but everybody can walk around it and doesn't need to open any door to get to your data. From looking at this example we see that the trust perimeter has changed. The security of the future is looking at fragmented information flows that need protection for all participating systems and actors. The process behind the data exchange is getting more important than before, securing it from end to end. One major shift therefore is the ability to authenticate and authorize at each given step of the information flow. Instead of artificial trust borders or perimeters invented by ourselves, you are now using access policies on your data objects.
The difference is that access policies can be evaluated at each single step or component. You may still use, for example, external and internal access attributes to differentiate between cloud components or on-premise deployments. With access policies you are able to define new trust levels for much smaller groups, even enforcing rules for single data objects. This comes with a double advantage for you. On one side it is possible to establish fine-grained access control, for example in an API. On the other side it also means that you will get more insights about your IT, your data objects and your interactions, which in turn means that you are able to minimize the risk beyond the current status quo. The standard model of zero trust means establishing a simple rule: never trust, always verify. Of course this includes a high level of security automation and that you are able to exchange and handle digital identities correctly and efficiently. Zero trust and access policies are important components if you look at the security requirements for ecosystems. The enterprise that you are working in depends on interactions with other players, and these interactions are one main driver for your future IT architecture. Your devices and software consume and produce data at the same time, and for each data object that has been created there may be different data owners. If one of your partners fails to establish a good security practice, then most likely it is also possible to attack your company. Maybe it is good to have an example at hand. Consider that you would like to exchange digital twin data, or your machines need maintenance from a different company or from a vendor. You need to be able to grant access rights to these external partners on the data objects and machines they are allowed to see. All other components of your IT should remain invisible to them. You could also consider the new-work situation that we are now all living in. In a zero trust setting an employee may work from home just the same, because his access policies allow him to do so. If you establish your zero trust approach you are able to change your access policies in days rather than in months, giving you a faster incident response. It also enables you to change your data connections, for example between two different data providers, at any time, or to compare two different service providers. Zero trust is an enabler for your company to adapt and survive. For us it is important to view zero trust architecture not from a pure technical perspective but from a business perspective. It is a model that gives you reliability, and maybe this word sounds a bit strange with all the agility around us, but it gives you reliability in terms of legal, economic, environmental and social perspectives. For example, zero trust architectures enable your GDPR compliance. Knowing who accesses your customer data is just one side effect of a zero trust architecture. Being able to switch between a service or product strategy is a business enabler, and it is your enabler to upsell additional services or access to data resources. In operational technology it governs access rights and protects your employees from cyber attacks. With zero trust you are investing into the resilience of your company. And last but not least, we all live in B2B ecosystems and multi-tenant environments. The time to hide behind a castle wall is long over and it will not come back.
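As a small, hedged illustration of "never trust, always verify": the sketch below shows one very simple way an access decision could be evaluated on every request against attribute-based policies attached to a data object, rather than inferred from network location. It is an illustration only, not taken from the speakers' tooling, and all names and attributes are made up.

```javascript
// Minimal attribute-based policy check, evaluated on every single request.
// Resources, roles and conditions are illustrative.

const policies = [
  // Allow the treating doctor to read and update the medical record.
  { resource: 'record:chris', subjectRole: 'doctor', actions: ['read', 'update'] },
  // Allow the insurer to read it, but only from a managed, trusted device.
  { resource: 'record:chris', subjectRole: 'insurer', actions: ['read'],
    condition: (ctx) => ctx.deviceTrusted === true },
];

function isAllowed(subject, action, resource, ctx = {}) {
  return policies.some((p) =>
    p.resource === resource &&
    p.subjectRole === subject.role &&
    p.actions.includes(action) &&
    (!p.condition || p.condition(ctx))
  );
}

// Every call is authenticated and authorised -- there is no "inside the perimeter".
console.log(isAllowed({ role: 'doctor' },  'update', 'record:chris'));                               // true
console.log(isAllowed({ role: 'insurer' }, 'read',   'record:chris', { deviceTrusted: false }));     // false
```

Changing who may do what then becomes a matter of editing policies, which is what gives the "days rather than months" response time mentioned above.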
In summary, a zero trust security approach pays into many business aspects, giving you a sustainable advantage. Zero trust architecture is an extension to your existing security strategy and not something completely new. However, the focus is shifted to data objects and how they are used in terms of their business value, and not on network topologies. There are ten principles that we can look into briefly. Know your architecture refers to ISMS tooling or an enterprise architecture map. Create a single strong user identity refers to using strong authentication for your employees, and you may want to extend this with a look at privacy. Device identities may be a surprise, but it is important that each device uses, for example, a TPM module and has its own digital identity. Authenticate everywhere refers to your current infrastructure security: only allow authenticated IP connections. The next two items go along with SIEM measures: monitoring devices and services and knowing their health status is important. From then on, topics like policies and access control are new zero trust principles that begin to shape your strategy as you apply them in your company. The last item, choose services designed for zero trust, is defined a bit too widely. From my perspective, this refers to microservices and the granularity that you are offering with them. Don't mix too many objects or resources into one interface for a start; this can be done in a later step by aggregating data with extra policies. The importance of zero trust architectures will, in our opinion, increase. We collected a few links that will help you to get started with zero trust architectures. And although the maturity of technical zero trust frameworks is often not yet given, it is worth looking forward and incorporating the new aspects into your cybersecurity measures. Let's see how Eliza applies zero trust to data sovereignty. Hi everyone, it's Eliza. Now, hand in hand with the principle of zero trust comes data sovereignty, which can be defined as follows: the capability of an individual or an organization to have control over their personal and business data. This entails that you are able to reuse the data at other places. It also means that you should be able to know which party holds which data, under what conditions this data is held and where it is kept. Here at Pilar, we have joined forces with the International Data Spaces Association, a nonprofit organization aiming to enable data sovereignty and to foster the secure exchange of data. In this context, the IDSA defines important roles that can be used to analyze your setting. Now to make these roles a little bit more vivid to you, let me take you through an incident in an ideal security environment. Today, rushing to work, Calamity Chris has had an accident with his motorbike. Luckily, he was just around the corner from Dr. Sith's practice. Apart from taking care of Chris's injured wrist, she needs to record information, starting with personal data like his name and his social security number. Quite obviously, Chris is the data owner of this information and Dr. Sith is the data consumer. Now, due to a legal provision, Dr. Sith is obliged to keep his records for a period of 10 years. She still gives Chris full access control over his data. This way, Chris can view his data at any time and he can issue additional access rights to others.
He may want to grant these to his social security provider, or perhaps he would like to pass the data on to another doctor for a second opinion. Maybe he is also interested in an application provider that can analyze his data. To do so, Dr. Sith uses a cloud service or a service provider. As she has heard of data breaches at cloud service providers, she makes sure that Chris's data is encrypted to keep it confidential. She also makes sure there is a backup of his records. Dr. Sith also gives a medical diagnosis and she refers Chris to Dr. Bohn for an x-ray, who will again generate further data. Dr. Bohn has become another data consumer, but still some information needs to be kept from her eyes. At the same time, Dr. Bohn holds the role of a data provider and needs to send data back to Dr. Sith. More importantly, she also needs to inform Chris about the data she has created and that the information is shared with Dr. Sith. Now, to make things a little bit more complicated, Dr. Bohn and Dr. Sith are using different data storage service providers. By the way, do you still know where all of Chris's data is kept? Only Dr. Sith, or whomever else Chris has decided to share his data with, for example social security, has access to view his data. Chris can share access rights to specific parts of his data, if he wishes. Regarding a medical condition, it may be necessary to grant legal authorities access to some of his data. While authorities can contact Chris in an anonymous way, it is not possible for them to draw conclusions about him and his life. His privacy is respected. Legal authorities can be seen as a second data owner, and this might be the case due to their obligations in disease control. Unfortunately, now I need to take you back down to earth. First of all, even this was a simplified scenario. On the technical level, for example, we have left out the IDSA roles of the broker and the vocabulary provider. Secondly, for sure, this is not the real world as you know it. But wouldn't it be neat to just pass a specific piece of information on and know who last viewed your records? Today, digital identities and record protection are not yet implemented in such a way. We believe they could and should be, rather sooner than later. We also believe transparent data communication in a zero trust environment with data sovereignty should be available to everyone. Furthermore, we believe we have an answer to making this scenario a reality. Since 2016, we have developed our open source and zero trust messaging layer, Neuropil. It embraces the concepts just mentioned and enables you to exchange data in a secure and sovereign way. In terms of the International Data Spaces, we think that our library and protocol definitions match the connector, the software that protects individual records. Furthermore, we have left IDSA's broker aside; it is meaningless in the context of a decentralized identity space. With Neuropil, it is easy to set up a secure digital data space. Most importantly, we have also received funding from the Next Generation Internet and the NLnet Foundation. The discovery and the pub/sub encryption have been implemented with their help. We'd love to bring Neuropil and its benefits out into the world. To do so, we are seeking to build a community and are looking for like-minded partners. Calamity Chris's medical case would be a perfect example for collaboration and implementation for medical data records. But really, any other use case with exchanging sensitive data is ideal.
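To make the data owner / data provider / data consumer roles from the scenario above a little more tangible in code, here is a small, made-up sketch of owner-controlled, field-level access grants on a single data object. It is not the IDS connector or the Neuropil API, only an illustration of the idea that the owner decides who sees which fields and for how long.

```javascript
// Toy model of owner-controlled access to one data object.
// Roles loosely follow the IDSA vocabulary: owner, provider, consumer.

class DataObject {
  constructor(ownerId, payload) {
    this.ownerId = ownerId;
    this.payload = payload;
    this.grants = new Map(); // consumerId -> { fields, expiresAt }
  }

  grant(requestedBy, consumerId, fields, ttlMs) {
    if (requestedBy !== this.ownerId) throw new Error('only the owner can grant access');
    this.grants.set(consumerId, { fields, expiresAt: Date.now() + ttlMs });
  }

  revoke(requestedBy, consumerId) {
    if (requestedBy !== this.ownerId) throw new Error('only the owner can revoke access');
    this.grants.delete(consumerId);
  }

  read(consumerId) {
    const g = this.grants.get(consumerId);
    if (!g || g.expiresAt < Date.now()) throw new Error('access denied');
    // Only the granted fields are disclosed, never the whole record.
    return Object.fromEntries(g.fields.map((f) => [f, this.payload[f]]));
  }
}

const record = new DataObject('chris', { name: 'Chris', ssn: '123', xray: 'wrist.png' });
record.grant('chris', 'dr-bohn', ['xray'], 24 * 60 * 60 * 1000); // one day, x-ray only
console.log(record.read('dr-bohn')); // { xray: 'wrist.png' } -- nothing else is visible
```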
So let's explore how we can support each other and exchange our thoughts on data sovereignty and zero trust. Thank you for your attention, and we're happy to answer your questions and get your feedback. Enjoy your virtual stay here and we hope to hear from you soon. Hi, Stefan. Thank you for joining us today for the Q&A. Hi, Praveen. Thanks for the opportunity to be here. So if you have questions, feel free to send them in the chat window. I did see that there was this interesting question from Joni Titan about the name Neuropil. So do you want to explain what's behind the name? Yes, so the name is coming from biology, that is true. The neuropil is the grey matter around our brain cells. And this is actually also a good explanation of what Neuropil is, or what it is not. So we are not the part which is analyzing data. We are not the part which is adding content. We are just the part which is transporting the information with the minimum possible amount of data that we need to know. And that means we have to do a lot of things that usually you don't find in messaging systems. And we are embracing several concepts in terms of identity federation coming from the zero trust setting. I heard the talk before about using the bit vectors; we are also covering some of the ideas mentioned there. So this is our focus. It's about data in transit and protecting it, also over one hop, with end-to-end encryption. I see another comment from our fellows saying that it's all revolving around the ideas of the Solid principles from Tim Berners-Lee. That is correct. I know about Solid, but Solid is built around the concept of the web. And what we do with Neuropil is that we try to basically avoid having to know all the possible data models that are there, but we still need to understand them. That is one concept that we have added as part of the NGI0 project. So we are able to capture certain information from the data models and then still do the routing of the data records to the corresponding peers. But that is also ongoing work. So you could help, or you could join to implement it. Someone is typing a question, I believe. Let's see. Meanwhile, for a traditional organization, if we want to go ahead on the zero trust architecture side, how do we go about it? Where to begin? I have seen you have shared a couple of resources from NIST and one of the wonderful books from O'Reilly. Are there any additional resources somebody should pursue? I think these are the latest resources that I have found. Zero trust architecture is really looking at your IT security strategy from the organizational point of view. And as I said, it's nothing new, it's just an addition on top with the access policies. So it's worth looking at them and then trying to embrace them. And yeah, we think that with Neuropil we can support them. Are there any more questions? By the way, if we are not able to answer any more questions and the time is ending, then what would happen is that the hallway track will obviously open and a new room will be available to have a chat with Stefan. The bot is really unforgiving; it will watch the clock and then cut. Yes. Of course it's open source, otherwise I think we would not be here at an open source conference. I did not look at the source code or the repository. What specific open source license are you following, Stefan? We are following the Open Software License, OSL 3.0. Okay.
There are actually two repositories. One is on GitLab, that is our development repository, and the other one is on GitHub, which is mainly used to host the stable versions of our development. I'm looking at the page for your talk, but I do not see the links to the GitLab repository. Do you have the repository URL? Yes. It would be great if you can post the GitLab repository URL. One second please. Sure. Thank you. Yeah, there it is. Stefan has just posted the repository URL for the Neuropil source code. If you are interested in having a look at it, feel free to do so. Are there more questions for Stefan? We are pretty close to the end of the talk; there are about eight minutes to go. As I said earlier, after the Q&A window is over, we will open up the hallway. Are you speaking in another lightning talk today or are you hosting it, Stefan? No, I'm hosting a lightning talk at two. So be aware that for 20 minutes I'm not available here in the hallway, but then later I will be here again. Do you have a Twitter ID, or if somebody wants to follow you, where should they reach out to you? We have Twitter IDs of course. Many of them. Alphalons has another question. Is peer-to-peer networking considered in Neuropil? Yes, we are using a distributed hash table as well, or sometimes I'm rather looking at it from a mathematical point of view. But there is a distributed hash table in it, and each peer has a random identity and an additional user-supplied identity on top, with automatic cross-signing. We have five minutes to go before the end of the Q&A session. Yeah, there is a Twitter ID if you want to follow Stefan's work. There it is. I think there are no more questions and it's just four minutes to go, so probably we can disconnect now, so you can join your next duty as fast as you can. Okay, thank you very much for joining, Stefan, and thanks again for the organization. Thank you. Thank you.
|
This talk will give an introduction to the up and coming concept of Zero Trust. It will briefly point out shortcomings of security in the past. We will discuss why there are signs for a paradigm shift and illustrate what security of the future looks like to us. We show different approaches and concepts and share our vision of how we believe data sovereignty can be established. We hope to exchange thoughts and ideas with the audience to make this a valuable and interactive talk in which we can all bring in our knowledge to build secure digital environments.
|
10.5446/52257 (DOI)
|
Hello, I'm Philip Beadle. I've been working with Holochain for about three years now, so I'm pretty excited that it's all coming to fruition. Over the last few years, I've spent most of my time working in the app developer space on how to actually build applications for Holochain, and I've spent quite a bit of time turning my brain from a centralized programmer into a distributed programmer, which takes a while. And building any application, you know, when you start it's quite difficult if you have to start from scratch — a blank piece of paper is very difficult. And then if you go and try and build your own custom bespoke things, you know, it's hard to get support and that sort of thing. So what I wanted to do here was use existing technology, like the Vue CLI service and its presets and plugins approach, which is awesome, to make it much more productive for people, especially people who don't really get into programming that much and just want to build an application — they're not really that into the code, but they want to be able to create an experience for themselves. So what I'm going to show you here is actually a Vue, Vuetify and Holochain application that I built to build other applications. It's called Builder. So let's go and build something. We go here to new Holochain app, and what this gives you is like a shop, or a library, or a store, whatever you want to call it, of different applications, and here we've got some different categories. I've got this basic one here, I've got profile sites, and I've got a couple more here. So let's say I wanted to build a professional profile site for somebody: I click on this one and have a read, and you can see that there are some nice screenshots showing you what you would get if you use this template as a starting point. Then there's a bit of a description about the actual product, and down the bottom here there are going to be actual reviews of the product. A very important thing that we learned on the dog meat project that I was part of for quite some time is that you really need to be able to explain to people and give them confidence in the developers that they're buying things from — that, you know, this is not some person who just built something and then disappears and you never get any support. So that's very important here. Right, so let's go back to the shop again, and what we're going to do is go back to this basic one, which is called simple. This is a very, very simple Holochain application: it has a very simple entry that we put in there and it just does create, read and update. There are no permissions or personal data management or that sort of thing around it. I'm going to call it FOSDEM, because this demo is for FOSDEM, and we're going to install it. What this is doing — you'll see here that this is the exact same output that you'd get if you ran this preset from a normal command line; it's exactly the same experience. So more advanced developers who are comfortable with, you know, Visual Studio Code and a command line can use exactly the same functions I've used here and the same presets in a way that they're comfortable with, which means that you're not tied into how I've built this Builder in order to use it — anybody can build with it.
And also, you'll see later on that if you edit things in Visual Studio Code, you can actually get those files back into Builder as well, because Builder is a file editor. The way it works is that this front end in the browser is talking via a WebSocket to a Node server, which is doing all the file writing. So it's done — create application finished. So let's have a look and see what we got. You can see all the files that were just installed from the Holochain CLI plugin that I use in the preset. And on the right here I've got CodeMirror configured for the different file types: Markdown, JSON files, JavaScript, etc. The main parts of the application are a front end — this has views like a homepage, and when we go to the things view it's got a store, because we're using Vuex central state management. And here you can see very typical stuff, you know, it's going to initialize the store. One thing that I have done is use Dexie to wrap IndexedDB in the browser so that we can store information in the browser, which gives a nice fast response to the people that are using the app. It does the stale-while-revalidate pattern, so you'll instantly get back what's in IndexedDB, and then when Holochain updates the information, that gets updated. We're using routing as well, so here are the routes for the application: when we go to things, we'll get to see the actual things view, and there are also some public routes as well. And you'll see that in both the store and the routes we have literally the same index file. What this does is, if you drop another file in here called, say, my-new-app.store.js, it will automatically get registered and you don't have to write any code, which means that you can add things in much more easily. Right, so that's your app, the front end. Now, for the back end for Holochain, we use what we call biomimicry as a way to describe things, because we figured that nature is pretty good at running decentralized systems. And so we follow the same ideas, which is why we have DNAs for applications and zomes — which is short for chromosomes — for the individual pieces. And then you'll see later on that we have cells and other words in that area. So this is our DNA, and you can think of this lib file as essentially your exposed API: you can call these functions over the WebSocket to Holochain. And so you can see here I can do things like create a thing, delete it and list it. And if we go and look and see what a thing is, it's here. The distinction between thing entry and thing is that thing entry is the actual piece of data or information that is going to be stored in Holochain as an entry, and thing is that exact same thing but also with the entry hash. Now, the entry hash is the content-addressable system address of the entry, so I can use that address to call back the entry later on if I want to update it or delete it. We also have tests that are set up for this. You can see here that we're using our own testing framework called Tryorama, and what this does is automatically install the DNA for you into a Tryorama conductor — the conductor is what we call the running instance of Holochain, because it does all the orchestration of agents and applications and all that kind of stuff. Then we spawn players with this config, and then you can install the apps onto your player, or your agent.
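The "drop a file in and it registers itself" index file mentioned above is a common Vue CLI pattern built on webpack's require.context; a sketch of what such an index file might look like is below. It is illustrative, not necessarily Builder's exact code, and the file-name convention is assumed from the narration.

```javascript
// src/store/index.js -- auto-register every "*.store.js" module in this folder.
import Vue from 'vue';
import Vuex from 'vuex';

Vue.use(Vuex);

// webpack-specific helper: scan this directory at build time for store modules.
const context = require.context('.', false, /\.store\.js$/);

const modules = {};
context.keys().forEach((fileName) => {
  // "./ledger.store.js" -> module name "ledger"
  const moduleName = fileName.replace(/^\.\//, '').replace(/\.store\.js$/, '');
  modules[moduleName] = context(fileName).default;
});

// Dropping a new "notes.store.js" file next to this one is enough to register it.
export default new Vuex.Store({ modules });
```

The same trick works for the routes index file, which is why a plugin can add whole new views just by writing files into these folders.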
And then once you've done that, you can use the cell ID to actually call the functions. So here you can see that this is the zome called simple, the function I want to call is the create function, and we're passing in this object. The cells there are more biomimicry. What happens is that when you install a Holochain app, you get what's called a cell ID, and that is a hash of the code, and that is unique to the code. And to call the function you need to pass it an agent key and the cell ID, so that the Holochain conductor knows which one to call. And that's essentially it. There's also some tooling around how to actually build the app: when you run the build, this is the information that Holochain Builder uses to actually create the DNA. Right, so now that we have a DNA, let's go and test it. Now, the first time we run this test it's going to take a little while, because it has to pull down all the crates for Holochain, including the HDK and things like that, and compile them. So the first time it takes about a minute or so, but after that it's 0.1 seconds or so. So I'm just going to close that while that happens in the background, and shut some of these tabs in my editor to clean things up a bit. When that's built, it produces what we call a DNA file, which is a zip file, and that is the file that you would install into the conductor. So let's get back and see how we're going. Right, it's taking a little while, so I'll explain what the dev conductor is. We've now got all the files for this website hooked up via a WebSocket to Holochain. So we run the front end with the Vue CLI service for serving the app, and we also run a development conductor. The reason for that is that I've actually got another conductor running on here, which has got my real information in it, so I don't want to use it as my development environment. So when I run the dev conductor, what happens is that it uses the Node service again, through the WebSocket, and spawns another shell which runs the Holochain conductor inside it. The advantage of that is that I can start it up and stop it, and I can reset it as I need while I'm developing. And the app server is actually going to spawn four versions of the website, because Holochain is agent-centric, so you want to demonstrate and understand your application from multiple agents' points of view and see how they interact. I'll show you how that works in a minute. Right, so you can see here that we're nearly finished with the tests — the last one, create and delete, will actually be done in a sec. I just want to show you how quick the build is next time. So we do that again, DNA test simple, and you see that it took 0.09 seconds to do the compilation there. The reason I show you that is because I want you to be comfortable with doing continuous testing: when you make changes to your DNA, you know, you have to test, etc. So now that's running, let's go and start the web server. And there it goes, starting up multiple instances of the website. Each of these is running on its own port, going from 4401 to 4404. And the reason for that is that when we install the DNA into the Holochain dev conductor, each agent that we have — we can have up to four agents; you can have as many as you like, but I've set it up for four because it seems to be a reasonable number to experiment with.
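Before the per-agent walkthrough continues, here is roughly what "pass the agent key and the cell ID and ask Holochain to run a function" looks like from JavaScript. The package was called @holochain/conductor-api around that time and the exact field names have shifted between Holochain versions, so treat this as an approximation rather than Builder's actual code.

```javascript
// Approximate sketch of a zome call over the app WebSocket.
// Adjust names to whichever Holochain client library version you actually use.
const { AppWebsocket } = require('@holochain/conductor-api');

async function createThing(appPort, cellId, agentPubKey, content) {
  const appWs = await AppWebsocket.connect(`ws://localhost:${appPort}`);
  return appWs.callZome({
    cap: null,               // the agent calling its own cell needs no capability secret
    cell_id: cellId,         // [dnaHash, agentPubKey] pair identifying the installed DNA
    zome_name: 'simple',
    fn_name: 'create_thing',
    payload: { content },
    provenance: agentPubKey, // who the call is being made as
  });
}
```

Each of the four demo websites would make calls like this with its own agent key but the same cell ID, which is how the conductor keeps the agents apart.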
Each of those will have its own website front end and its own agent key, but they'll all be using the same cell ID, because they've all installed exactly the same DNA zip file, so when that gets installed, Holochain returns the same cell ID. And what happens is that each of the agents will then pass its agent ID and the cell ID to Holochain and say, you know, please run this function for me. Holochain then knows who's who, which means that when agent one makes an update, I can go and do a refresh with agent two and I'll get that information back in agent two. All right, webpack takes a little while to run on this first one. So what we'll do while that's running is — oh, it's nearly there. Look at that, nice and fast. And we are done now, I think, there we go. Cool. So we've got four websites running. We can go and start the dev conductor, and you see here the conductor is ready. Cool. So now what we've got is a running front end there and a running dev conductor; we just need to install the apps. So let's go to the agents and let's add an agent — agent one — and give it an image so you can tell the difference, and agent two. Two different agents. You could do up to four, but you know, you get the idea. So here I can press this button, and that will create an agent key inside Holochain and send me back the public key. And now if I hit this button, it will install the DNA zip file into Holochain and return a cell ID. So when I hit this link, you'll see in the URL I passed across a base64-encoded version of the buffer. And it's got the puppy there. And if we go this way, you'll see we have the cell ID, and it used its nickname, which was called simple, so that I can differentiate, and it's also got the port that the application is going to interact on. And this is like the shell website that was created when we chose the simple one from the app store before, and it tells you a bit about what this simple app is about. I'm not going to go through this, but you can read it later if you like. It also tells you what to do, which is click the things link at the top of the page. So let's do that to go and interact with our actual Holochain application. So let's clear that out — right, we'll add a new one, and this is my new entry now. The reason for that existing one, by the way, is that I didn't clean out the original DNA from my previous test run. I did this before the demo and actually put that value in, because I then reinstalled exactly the same code, which produced exactly the same hash for the code, and therefore the same cell ID, and when I reran the conductor, it reinstalled and reran that DNA, which had that value in it. So you can see that you can start and stop the conductor. And also, the thing I think is really cool about this is that if the code is exactly the same, the address — the cell ID — will always be the same, which means that if somebody modifies the code in any way, shape or form, you get a completely different cell ID. And, Holochain being agent-centric, you're free to do that: modify the code to your heart's content, no one's going to stop you. But you'll be by yourself in your own DNA, which is fine.
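The "same code, same cell ID" property comes from content addressing: the DNA's address is derived from a hash of the compiled code (combined with the agent key to form the cell). A toy illustration of the principle, using plain SHA-256 rather than Holochain's actual hashing and encoding:

```javascript
// Toy illustration only: Holochain's real hashing/encoding differs,
// but the principle is the same -- the address is a function of the code.
const crypto = require('crypto');

const dnaAddress = (wasmBytes) =>
  crypto.createHash('sha256').update(wasmBytes).digest('base64');

const original = Buffer.from('compiled zome code, version 1');
const modified = Buffer.from('compiled zome code, version 1 + my tweak');

console.log(dnaAddress(original) === dnaAddress(original)); // true  -> same DNA, same network
console.log(dnaAddress(original) === dnaAddress(modified)); // false -> a fork into your own DNA
```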
You just need to have other people invited to the DNA to join you, which means that agent-centric programming is very much about doing things as you like: you can do it if you want, no one can stop you. With the Cryptographic Autonomy License, you have your own keys, your own data and your own software; you can do what you like. At the point where you want to interact with others, where I share or bring them into a group — that's when Holochain comes in, and that's where Holochain enforces the rules of the group. So if you've got an app and you've changed it, that's fine; it just means that the people then using that app with you agree to that set of rules, which I think is really cool. Alright, let's go and see what happens when you have another agent. So I'll install the key for them, install the app, and again, let's go to things — and there you go. Now, you see that quick update there? That was because I've got other records from previous demos that are still in IndexedDB in the browser, but what happened there was that Holochain updated that for us. So I can go in here and change this record, save that there, go to the other agent, and then you see we've got the update here. Cool, right? And there you go: a fully working, operational Holochain application done with the Vue CLI service and a bit of Holochain. Now, that was kind of cool, but it was a bit simple. So what about if we go and reset the conductor, which I should have done last time, and go and add some more functionality to show it. So here we've got one DNA called simple. Right, let's go and add a module. Now, my idea here with modules is that you can add fully functional pieces of functionality that don't need a whole website. So you can start off with a shell app, or like I've done there with the simple one, and we can just keep continuously adding more functionality. And because of the way that the routes are set up and the store is set up and the views are set up, you can literally use the Vue CLI service again and do a vue invoke on the plugin, which is what we're going to do here. So again, here we go, we've got some different apps. By the way, this is the Elemental Chat that I built a little while ago; it has been modified by the team to work on Holo, and we're actually testing the Holo distributed hosting network with this app right now. Super cool, pretty proud of that. Right, got some developer tools — you know, what if you wanted to plug a code editor into Builder? Yeah, cool, of course you can do that, because it's a module. But what I want to do is show you how I can add in some invoicing. So ledger is an app that I've been building, and it allows you to have invoices, contacts, clients and that kind of stuff. Good description here, we've got all those ratings — oh geez, 4 out of 5, not too bad. Let's install it. Same approach. You can see here that it's pulled down the CLI plugin, Holochain module ledger, which is on the Holochain GitHub, so you can go and access it yourself. And then you can see it runs the completion hooks. That's all done. Right, so now our app has — look at that — two DNAs. And I'm just going to start the build process, or the testing, of that one while we have a look at some other stuff. So it's doing exactly the same as the test before; it takes about a minute to run. And then we have two DNAs in here, and you can see that the zome in this one is called ledger.
It has an entry type that I've built called client, with some different fields — you know, country, name, etc. — and very similar handlers to the thing we looked at before. And we also have some front end stuff. So you can see here that it added in an extra store module for the ledger. I can show you how the WebSocket to Holochain is connected here. And then when we do things like fetch the clients, I can use the Holochain client and do a zome call, and here you can see I'm passing in the cell ID, which is set up here — the cell ID is what comes out of the URL, which is the ledger cell ID — and the agent public key. We also have some extra routes — so the routes, here we go — and some extra views: the clients, contacts, experiences, etc. So let's have a look at how that test is going. Now, the advantage of having the routes and the components and things like that is that I literally just made a template and dropped it onto my existing application, and then, because of the way the Vue CLI service works, we just serve it — we don't have to build it all together. And that's all going really well. So once this is built, what we do is start the dev conductor. So let's see if the app service is still running — excellent, the last one is coming — and we'll move on. Excellent. Right, so now we have two zip files: we have the simple DNA zip file and now one for ledger. So what I'm going to do is add it to our agents. So here we can create a key again — we have to create a new key because this is a brand new instance of the Holochain conductor, because we wiped out everything from before. Install that. Now you'll notice that when the URL comes up, you'll see up here we have the port, we've got a ledger cell ID here, and the simple cell ID. And what's really cool about this is that when you install Holochain and install multiple DNAs at once with the same agent key, you are effectively the owner in all the DNAs, which means that we can then start calling from one DNA to another with all the authority that we need to get the information or do whatever we need in another DNA. This makes it really easy and super secure to do really modular apps like we've done here. So here we've got the things — just put one in here and say thing. And let's go and open up this other agent, and there's the thing we just created, right? So, if you look at the routes in here, you'll see the ledger ones. If we go to something called ledger invoices, we'll get access to that new one. So just go there, and here is our invoices module that we plugged into our existing app. You can see that the URL is the same, so it's part of that app, so when it's compiled together you can do your yarn builds and all that kind of stuff. And that's creating your client. We could do the same thing here, but let's not do that. It's a bit clunky typing routes and things like that into the URL, so what I want to do is incorporate the things component properly into this app. So what we'll do is go to the routes here — this needs the right number of brackets — and we're going to add things as the path; we don't need anything else there, and this route is going to be called things. And what we want is to load the things view into that when we go there, and this is now going to use the layout. Hang on — I've missed the brackets on there.
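Before the route editing continues, it is worth sketching what the ledger store module described a moment ago — IndexedDB cache plus a zome call, in the stale-while-revalidate style — might look like. The client wiring (`rootState.holochain.callZome` and its fields) is an assumption standing in for however the app actually wraps its WebSocket connection, so treat this as a shape, not Builder's code.

```javascript
// Sketch of a Vuex store module doing stale-while-revalidate over Dexie + a zome call.
import Dexie from 'dexie';

const db = new Dexie('ledger');
db.version(1).stores({ clients: 'entryHash' }); // primary key: the entry hash

export default {
  namespaced: true,
  state: { clients: [] },
  mutations: {
    setClients(state, clients) { state.clients = clients; },
  },
  actions: {
    async fetchClients({ commit, rootState }) {
      // 1. Serve whatever is cached locally, instantly (the "stale" part).
      commit('setClients', await db.clients.toArray());

      // 2. Revalidate against Holochain in the background.
      //    `callZome` here is a stand-in for the app's own WebSocket wrapper.
      const fresh = await rootState.holochain.callZome({
        cell_id: rootState.holochain.ledgerCellId,
        zome_name: 'ledger',
        fn_name: 'list_clients',
        payload: null,
        provenance: rootState.holochain.agentPubKey,
      });

      // 3. Update both the cache and the UI when the fresh data arrives.
      await db.clients.bulkPut(fresh);
      commit('setClients', fresh);
    },
  },
};
```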
So this uses the layout with the drawer down the left and the component in the middle, and it's going to load up the things. So let's go and update the navigation down the left hand side there. Here's the drawer that you can see on the left, and we just need to add an extra entry. So we'll add one of these and call it things. Let's give it an icon — there are some really cool icons in Material Design — and we've got that button, so we'll get rid of that. And I think that should work. Oh, the spacing — simple routes, wrong line, line 12. This is why you have linters on all the time, to make sure you don't end up writing rubbish. Here we go. So we've got our things down the left here. We've got the wrong URL there, so I'm just going to change that. Here we go: we've now incorporated our things component, running on its own DNA, inside the ledger. So now you can see how we can mix and match all sorts of different functionality together, and I think this is a pretty neat way to build applications. And don't forget, if you don't like using my Builder, that's no problem — you can do it all with a command line; it's just using the CLI service with plugins. And I think we are done. Thank you, I hope you enjoyed it. You can join us for more live questions and answers in the question and answer room. Thank you, Philip. Yeah, that was a good talk by Philip. He has maybe answered all the questions in the chat itself. He will join here soon, let's wait for him. So, do we have any questions left? So there's now a version control system built in, so you get the normal Git kind of ability to branch and commit and that sort of stuff, except it doesn't use GitHub — it uses Holochain to store all the commits. And it also has a way to publish modules that you can then plug back into Builder as well. So, I'm going to record a new demo where you take the simple app and modify it into a notes app; then you can turn that into a notes module and publish that again, and then somebody could add the notes module to their app. I can't see any questions right now, but yeah, we're happy to discuss more about this. So the version control engine you showed is built in and you don't need GitHub for that — are you providing all of those features here, or is it just a layer on top of that? At the moment it's only got branch, commit and merge, but that's the basis for it. And the idea is that you can create an organization, and that is the limit of the people involved for the version control; they're the only ones that can access it. It's kind of like what you do on GitHub, where you've got to be part of an organization to be able to do the commits and that sort of thing — same thing. I'll get around to doing things like pull requests and stuff like that as well. But I find that I've cut down a lot of the features of Git, because most of them confuse me, and I've basically built the features that I need to work the way that I want to work, and it's nice and fast. Like, the number of times that I've done a Git checkout, changed branch, then tried to do some work and it's like, what's going on here, I can't figure out why it's broken — and that's because, you know, the node modules need updating or something like that.
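For readers following along, the route and drawer entry being typed in the demo probably end up looking roughly like this. It is reconstructed from the narration, so names such as `ThingsView` and the exact nav-item shape are guesses, not Builder's actual files; the icon name is just any Material Design icon.

```javascript
// Sketch of the route and drawer entry added in the demo (names are guesses).
// e.g. src/router/things.routes.js
export default [
  {
    path: 'things',
    name: 'things',
    // Rendered inside the layout with the navigation drawer on the left.
    component: () => import('@/views/ThingsView.vue'),
  },
];

// Entry added to the navigation drawer (Vuetify-style list item data):
export const navEntry = {
  title: 'Things',
  icon: 'mdi-format-list-bulleted', // any Material Design icon works here
  to: { name: 'things' },
};
```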
So I built in those kinds of things: if you change branches, it wipes all the files, rewrites everything, reruns all the installs and so on, so it's much more dynamic, I think. But that's mostly how it goes. Yeah, that's good. So we'll just wait for a few questions from the folks on the chat — just throw them in — and wait till the time is up for the questions. Yeah, just for the few more questions that there are. Yeah, one of the big things that we're working on right now is the idea of derivative work. So, like, if you take an existing piece of work, modify it and then republish it — the way npm and all this kind of stuff currently works, it's really difficult to reward the people who did the work; there's no mechanism for it and there's no tracking and that sort of stuff. So one of the projects I mentioned before is ValueFlows, and the concept of that is that everybody involved in the value chain gets rewarded. And so if you install one of these modules, you can see the ValueFlows equation, and what you have to do is accept it: if you want to install it, you have to say yes, I accept that I'll be part of this value flow, and I will contribute the value as it goes down. Because, let's say with the notes app: I'm installing the simple app, which is what I started from, and somebody built that. Now that I've gone and modified it into a notes app and republished it, and other people are using the notes app, a lot of the work that I am taking credit for was basically done by the simple app, and so the person or the organization that did build that should get some reward. Currently it's really difficult to track all that stuff, but Holochain and ValueFlows and the HoloREA project are making that really, really easy to do. So it just becomes a normal, natural part of development: it's like, cool, build something, or look, there's something I could republish — you can republish it and put it out there knowing that you're conforming with all the licenses and stuff like that, so everybody involved gets some value. That sounds good. So you're also making an invoicing app — will the document formatting be a strict one, or is the user able to change the format? In different countries the structures are different, with different taxes and all that stuff. How do you go about that? It's really simple. You have a list of clients and organizations, you create a consultant profile, and then you create invoices and just add line items to the invoice. There's no tax; it's not a business management tool yet. I built it so that I could bill one of my customers for an app that I built on Holo. I did that last week, so that was pretty cool. But the idea is that it's like, here's the basis for an app, and then you can take that and go: right, actually, for me, because I'm in Brussels, I need, you know, VAT, and for me, because I'm in Australia, I need GST and have to report every three months, and all these kinds of things. So you can take these core things and build them into whatever you like. And then, well, I could say I built it for Australia.
I could then republish that as, you know, a "running a business in Australia" app that took from the original ledger invoicing app, and through ValueFlows some of the value that comes in from me building this new Australian version of the app would go to the person who built the original one — and the same if you built one in Brussels, and so on. So what I'm trying to do is push the development and the evolution of things right out to the edge, so we're not reliant on big development companies to build these huge apps. We can just go: I just want that small thing, and that bit, and that bit, and that bit, particularly for me. So I've actually built another thing called My Holochain as well, so you can install all these as running apps, which is a bit different to going to a website and using their app — you know, like MYOB or Quicken or whatever, you go there and do that, and then you go to Facebook to talk to people, and then you go to other apps to do all these other things. With My Holochain, the idea is that I've got one location I go to, and I can sort and arrange, using columns and that kind of stuff, all these different apps. So I could have an app for doing my business, one for doing chats, and I can say, well, these five chat groups are family, these ones are work, and arrange it all — rather than being told how I should experience these things, I get to be the one who drives it. I really like a term that we came up with a few years ago for Holochain: it was basically bring your own internet — bring your own experience to the internet, and you don't have to be told what to do anymore. Yeah, that gives you freedom. We still have five minutes, folks, so if you have any questions just drop them in and we'll be happy to address them. Yeah, I posted the link to the GitHub site for this Builder app, and there's a Dockerfile in there now. If you build that Dockerfile, it gives you everything: all the Holochain stuff, a way to build all your DNAs, it gives you Builder, and you just use a browser to go to the URL. And so instead of using something like Electron to get file writing, I used a socket, so the browser talks through a socket to a Node service and the Node service does all the file writing. It's just way more convenient than Electron. So we have one question here: why did you choose to work on Holochain? Okay, the answer to that is so that I can look my 12-year-old daughter in the eye and tell her that I'm doing everything I can to make sure that her future is good. I don't want to just hope it will be good; I think it should be good. I think there's a link to my website there. I'm also an anarchist — not anarchy in the sense of chaos; I mean anarchy in the sense of, you know, being accountable for yourself. So we have another question, from Tixxin: can an app evolve and still keep its original data? So, with a monotonic system like Holochain, you can't delete anything — you can't delete a record like you can in a database, because it's monotonic; that's the whole point. But what you can do is migrate — and you can't force anybody to do anything. It's not like Facebook, who update their website 48 million times a day for all the different people and you get no choice.
What happens is that you may publish a new DNA for your app and go: I hope people use this. And when they migrate to this new app, what you can do as part of that migration process is a bit of garbage collection, so you can say: I'm not interested in the tombstone data that's in the existing DNA, I just want to bring across the new live data, or just the last two weeks, or whatever you want to do as part of your app. But you can't guarantee that everybody has moved to that new app. You just can't, because it's agent-centric. And the whole agent-centric thing, I think, is difficult for people to grasp. The way I describe it is: as a programmer, I can do whatever I like on my computer; there's nothing anybody can do to stop me. I can do whatever I want, because I'm my own agent. But at the point where I want to share — that's where Holochain comes in. Thank you.
|
This demonstration shows how to use the Vue Cli presets and plugins we built for Holochain to create a fully operational distributed p2p application in minutes. Running yarn start gives you, the developer, a Holochain Conductor admin app for managing Demo Agents, installing your new app and launching your app with the crypto keys for each Demo Agent. There are four web apps launched making it super easy to see how your app really works for each Agent. That's not all, using the same technique you can add "modules" of functionality to your app plus you can add new layouts, views and entry types. Come and see how easy it is to build a fully distributed, p2p, secure, fast, reliable and great looking app for the new world!
|
10.5446/52259 (DOI)
|
Welcome to this talk on Scuttlebutt in the browser. My name is Anders Rune Jensen. I have a master's degree from Albany University in 2006, and I've been working on Secure Scuttlebutt for a few years. The first part of this talk is going to be focused on application architectures. One of the first architectures we're going to look at is the standard web architecture that you probably all know, where clients are connected to a service over the internet. Some of the features of this web architecture are that you have these low-powered clients and fat servers where most of the code is running. These servers store all the data. They're centralized, the UI is standardized, and you get this continuous upgrade where they keep pushing out changes. If you don't like the latest changes, then it's just too bad; in most cases you can choose another service. One of the features of these systems is that because they have massive amounts of data, they start doing machine learning on it. It's important to look at two different kinds of data. There's the user-consented data, and then there's the tracking data — for example, your mouse movements, or how long you've been looking at this page before you went somewhere else, and so on. All of this data is fed into these opaque machine learning algorithms. They can be seen as black boxes: from an outside point of view, you have no idea how they work, and they can be changed at any point. In the same way that proprietary software has an attack vector where you can just put in, for example, malicious code and nobody will know that it is there, here people don't even know exactly how these things work based on the data, for example. So it can be an attack vector if people can put in, for example, data that will change these algorithms and then start showing different content to different users. Another way of building these systems is a more decentralized and federated way. Two examples of this are email, which is now really old, and Mastodon, which is a more modern example of this kind of system. If you look at these federated systems, then instead of having most servers run by one company, they can be characterized by having multiple parties running these different servers, which then exchange messages between them in a federated way. The data is still on the servers. But one of the nice things about this is that it does foster quite a lot of diversity, and people are able to do local governance of these different servers. You can have communities around different aspects, for example. This is a lot nicer than the centralized approach, but these systems do have a tendency towards eventual centralization, as can be seen with Gmail as a good example from the email world. Another way to build these systems is a more distributed way, where Scuttlebutt is one example of this and Dat is another, for example. A distributed system is more characterized by having thin servers and fat clients instead. The data is local first, meaning that it is stored locally on the device and then exchanged, and the UI can then be adapted to the community because it's running directly on your machine. I just want to point to this really excellent article from a few years back from Ink & Switch that some of you might know, which talks about some of the aspects of local-first software. Because it's running on your machine, it can be really, really fast: you don't have any network latency, you can have multiple devices using the data, it can work offline, and you can build collaborative systems using this. Another really nice aspect is that it has a longevity aspect to it, because you are in charge of the data: you can have it as long as you want, and you're not at the mercy of somebody shutting down a service that you've been using. You can have more privacy as well, because you can say when and how you want to exchange the data, and you have more user control. So in summary, you have to be mindful of the implicit power structures that are in the architectures of how the software is built. There are different approaches, and it's not like one is necessarily better than the others; all of them have pros and cons. So let's look at Scuttlebutt. This is actually an image of the Scuttlebutt network with the different nodes here. The red dots are earlier nodes in the network, and the blue are new people in the network, and it's fascinating to see how these different connections mean that people are connected in different ways, and it's not necessarily centralized in any way.
It can work offline, and you can build collaborative systems using this. And another really nice aspect is that it has a longevity aspect to it, because you are in charge of the data. You can keep it as long as you want, and you're not at the mercy of somebody shutting down a service that you've been using. You can have more privacy as well, because you can say when and how you want to exchange the data, and you have more user control. So in summary, you have to be mindful of the implicit power structures that are in the architectures of how the software is built. There are different approaches, and it's not like one is necessarily better than the other ones; all of them have pros and cons. So let's look at Scuttlebutt. This is actually an image of Scuttlebutt with the different nodes here. The red dots are earlier nodes in the network, and the blue are new people in the network, and it's fascinating to see how these different connections mean that people are connected in different ways, and it's not centralized in any way. So what is Scuttlebutt? Scuttlebutt is an identity-centered system, meaning that it's very much focused on people instead of, for example, directly on messages. It's a local-first system, so you write data locally on your machine first. Then it's a friend network, which is a bit similar to a social network, but it doesn't necessarily work the same way. You follow people, and you can block people, and this has a bearing on how data is exchanged. It's exchanged by gossiping with the nodes that you connect to. So the main building block of Scuttlebutt is the feed. An identity has this feed where the first message has some content, has a timestamp, and you can put in whatever content you want in this, and then it has the author, and then it's signed by the identity. Then as you add more messages, they are linked together, and this means that when you do get the messages, you are assured that no messages in the middle, for example, are left out. And from this, you can build this kind of event sourcing system where you structure changes to a system as events, and then you have these feeds of immutable messages. It's not like in Bitcoin or something like that, where you have a global ledger that you need consensus on. This is more like everyone has their own mini ledger that they can exchange with other peers, and you build structure on top of these. The messages in these feeds can be public or private, and they're just regular messages; larger files, for example images and so on, are stored outside of the chain as blobs linked by their hash. And it's important to note that only you store the private key, so only you can generate new messages, and only you can read private messages sent to you. One of the other aspects of SSB is that content is accessed by hash, and this means that because these messages are signed and hashed, you can link to them by the hash directly. This means that compared to a normal system where you go to some URL to see something, the hash of a certain message is totally unique, and this gives a location transparency aspect to this as well. Messages are exchanged by gossiping, as said earlier, and there are many ways of exchanging messages. It could be over a local network, or it could be a pub, which is just a way to relay messages between nodes, because it has a public IP.
It could be rooms, where people connect through an intermediary, connect to each other, and exchange messages; it could be distributed hash tables; it could be onion hidden services where you connect directly to somebody else; or it could even be sneakernet, USB sticks. It's very agnostic to this. The way that replication works is that a feed follows other feeds, and in this way it generates almost like a graph. So the feeds that you follow directly are in hops one, in the inner circle, and the feeds that they follow are in hops two. The normal set of applications shows messages within hops two, and this means that it's not really possible for some random stranger to send messages to you, for example; they have to be within these hops. This is a really good way to not get so much spam, for example. You are really in charge of what data you want to see. But there's another aspect of this, which is that you want to push out data relatively fast. So we have these long-running connections, and we use epidemic broadcast trees to efficiently exchange what the latest state of potentially thousands of different feeds is. This is really efficient because of these identity-based ordered logs, where you can just say that the latest message for this identity is X, and then you're sure that people know what you're talking about. Another aspect of SSB is that it has these caps, which can be seen as multiple universes. If you use a different caps key than the main network, then your messages will be totally separate from the rest, and this is a way to, for example, structure applications so you're sure that they are not replicated within the normal network. So there are different implementations of this, in JavaScript, Go, Rust, and so on. We have a couple of different desktop applications, Patchwork and Oasis. We have mobile clients, and then we'll be talking about a browser client as well. So this is an example of one of the most popular applications, Patchwork. You can see it looks almost like any kind of social network with posts and replies and likes and so on. You can build applications on top of this. So this is, for example, a book application where book suggestions are posted as messages. You can then edit them and review them and post that as messages as well. And because of this hops aspect, you get data from your friends or friends of friends. So you just see books that your friends read, for example, not necessarily random strangers. One of the other really interesting applications is Āhau, where they built this application on top of SSB using private groups. So everything in this is encrypted, and what's really nice to see is that they use their own terminology, words, for example, like tribes and so on, which is very native to the local community in New Zealand there. You can even build Git running over SSB, for example. And next I'll be talking about the grant we got from the European Union, which is under the NGI Pointer program. And this is the main team: this is me, Senna, Henry and Andre. We've been working on sort of taking SSB to the next level for the past three months, and I'll talk a bit about what we've been working on. So we've been building database improvements in the JavaScript world.
So we have on the order of roughly a 10x improvement in indexing speed, building a new database, ssb-db2, building a new fast way to query for these messages, which is JITDB, and then we're building EBT in Go, and doing some improvements to the private groups. And then some of the things that we will be working on are the rooms design, which is the aliases and different privacy modes, and also partial replication, and these will enable SSB to be a lot faster, especially for onboarding. Some of these database improvements are that we have ripped out the old Flume system and replaced it with an async append-only log at the bottom. And then we store messages as BIPF, a binary format, instead of JSON. Then on top of this, we have JITDB to do queries. We also have LevelDB indexes. And all of this is wrapped into ssb-db2. So if you look at JITDB, it has this interface where you can do a query; for example, posts of two different authors would look like this. And one of the things about this is that we did build db2 with the goal of also working in the browser. And this means that we can have the exact same code running on a normal desktop or a mobile app and also use it in the browser. And the way it works in the browser is that keys are stored in local storage and the data is stored, if it's in Firefox, for example, in IndexedDB; in Chrome it's the Chrome file system because it's a bit faster. And for the crypto stuff, we use WASM, which is almost as fast as the C version. And for connecting to other peers, we use WebSockets, for rooms, for example. And if you look at the architecture of ssb-browser-core, you can see that on the left side we have the network aspect. And for this, we have the secret handshake, which is used so you're sure that the other person you're connected to is exactly this identity; this is the same identity as the feed identity. And we have muxrpc here as well. And for the connections, we have ssb-conn and the rooms; then we have the sync with the EBT I talked about earlier, and the blobs and the feeds. And then we have db2. And then we have the feed on the right side there, where we have the keys and we have the validation that the feed is indeed correct. And this means that you can have a regular kind of social application running directly in the browser. As you can see, it has reactions, likes and so on. You can write replies to messages and all of this stuff. This works now. And you can build CRDTs, these conflict-free replicated data types, where you can do things like shared markdown documents, for example, where two browsers in this example are writing in the same document. It has a little chat at the bottom where you can send ephemeral messages between them. And one of the problems with putting SSB in a browser is that you get this URL centralization, which is a problem: well, of course you can run it from your own machine, but it would be nice if you could just go to some website and then you have SSB running there. And it does give the sort of same centralization as the federated universe, where you are dependent on a server. So what if instead we could run the applications directly from SSB, meaning that the applications would be a part of the SSB network? One way to do this is that, because you have the blob storage, you can actually store the actual JavaScript code that you need to run this as blobs. And then you can have messages as these update channels.
So you can just have one root message, which is the SSB browser demo, for example. And then you could do replies on this, which is the latest version of this, and then it's a fully decentralized app store, more or less. And you can have a small core that you can then put on a server somewhere. And then from this, you can run applications: you can put in the root message and the ID of the feed that you trust, and then you can run applications from your friends, essentially. And the way that it works, or looks, is like this small demo here where, as you can see, you have the input box where you put in the hash of the message, and then you have the author as well. And then you download the app based on that, and then you can, of course, put in a different remote server, just a WebSocket. And from this, you can have any number of applications you want to run on this, completely independent of where the site is stored. Okay, so thank you for listening, and I hope this was interesting. If you want to check out more of this, go to scuttlebutt.nz.
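To make the feed structure described earlier a bit more concrete, here is a minimal sketch in Rust (SSB has JavaScript, Go and Rust implementations). The field names and the stand-in hash function are simplifications for illustration, not the actual SSB message schema or validation code.

```rust
// Simplified model of an SSB feed: an append-only chain of signed messages,
// where each message points at the hash of the previous one.
#[allow(dead_code)]
struct Message {
    previous: Option<String>, // hash of the previous message; None for the first one
    author: String,           // public key of the identity that owns the feed
    sequence: u64,
    timestamp: u64,
    content: String,          // arbitrary application content
    signature: String,        // signature by the author's private key (not checked here)
}

// Stand-in for hashing the encoded message; a real implementation would use SHA-256.
fn fake_hash(msg: &Message) -> String {
    format!("{}:{}", msg.author, msg.sequence)
}

// Check that the chain is intact, i.e. no message in the middle was left out.
fn verify_chain(feed: &[Message]) -> bool {
    feed.first().map_or(true, |m| m.previous.is_none())
        && feed
            .windows(2)
            .all(|pair| pair[1].previous.as_deref() == Some(fake_hash(&pair[0]).as_str()))
}

fn main() {
    let first = Message {
        previous: None,
        author: "@alice".into(),
        sequence: 1,
        timestamp: 0,
        content: "hello".into(),
        signature: "sig1".into(),
    };
    let second = Message {
        previous: Some(fake_hash(&first)),
        author: "@alice".into(),
        sequence: 2,
        timestamp: 1,
        content: "world".into(),
        signature: "sig2".into(),
    };
    assert!(verify_chain(&[first, second]));
}
```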
|
Today computer systems are often built with an implicit hierarchy. It can be seen as a way to enforce existing power structures. The very act of making software entails describing exactly how the system can be used and for what. Furthermore, ever more data about the usage of systems is gathered. This, combined with machine learning, has given rise to a whole new class of systems that can be very hard to reason about, especially given that the data or the algorithms can be controlled or bought by external parties. What if that doesn't need to be the case? What if we could make software that is both subjective and under the control of the user? I will be presenting one such system - Scuttlebutt - and detail how it is different from the systems described above, and also different from federated systems. In this particular talk, we delve into how Scuttlebutt apps can be built straight in the browser, no additional application needed. With the expert at hand there will be plenty of time to dive in and explore your own ideas once you've learned how to build your own SSB apps in the browser.
|
10.5446/52223 (DOI)
|
Hi, everyone. I'm happy you're here to see the talk. So let's dive straight in. A few words about me: I am a third year undergraduate student at Imperial College London. I got into Rust, well, I initially found out about Rust a couple of years ago, but I got more into it earlier this year, and I decided to do a GSoC project working on adding language support in KDevelop for Rust. So yeah, a bit about the project. The main part of the project is a library that I wrote, which essentially takes information from the Rust compiler, from libsyntax, and then presents it to KDevelop, where I hooked things up, and basically you'll see a few demos from KDevelop a bit later. So why not use the Rust Language Server? The Rust Language Server is pretty cool. It essentially summarizes all the important information from the Rust compiler and presents it through the Language Server Protocol. And this is great for editors like Visual Studio Code and Kate, for example, but specifically for KDevelop, I found that it might not be the best approach, because essentially KDevelop expects a lot of internal structures to be built, and the KDevelop core code (not the language plugin, but the core of KDevelop) does a lot of things for you. So things like semantic highlighting, renaming declarations, finding usages, that kind of thing, that's all done for you by KDevelop as long as the language plugin builds these internal data structures. So what is AST Redux? It's a self-contained library. It's somewhat similar to libclang, if you've seen that, or CPython's ast library. It's a bit lower level than RLS, so with RLS you would ask the language server something like, oh, tell me all the declarations for this particular symbol, or tell me all the usages for this particular symbol, and RLS would give all those to you. Here it's more like, oh, here's a bunch of symbols, you figure out what to do. It provides a view of the abstract syntax tree of the code, it exposes this as a C API, and it hides the details of working with libsyntax. So yeah, at the moment it's using libsyntax as a sort of platform to build on. In the future, I'm looking into getting more information from the Rust compiler and exposing that in a meaningful way as well. So how does this interact with KDevelop? KDevelop essentially expects these internal data structures to be built. These are called the definition-use chain. Essentially what happens is, from KDevelop I pass to AST Redux the source code that it should parse and produce an AST for, it does its thing, and then KDevelop goes through the AST and populates its internal data structures. So yeah, let's go on and see some of the things that you can do in KDevelop. Like I said, essentially what happens is you go through the AST, for each node you figure out if it's something like a function, a struct, an impl, or anything else, you build the corresponding data structure in KDevelop, and you're done. You get a lot of things for free at that point. So stuff like semantic highlighting: that's basically a single function call, and voila, KDevelop figures out the rest for you. So let there be color. So yeah, renaming declarations or finding usages, whatever you want, same thing. Everything is basically done for you. So let's see if this will work. Yeah. So renaming a declaration, that also sort of just works out of the box.
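As a rough illustration of the declaration-building idea described above, here is a hypothetical, heavily simplified sketch; the node kinds and function names are made up for illustration and are not the actual AST Redux or KDevelop APIs.

```rust
// Walk a (made-up) AST and, for each kind of node, tell the IDE to create the
// matching declaration: the same shape of loop the plugin runs over the real AST.
enum AstNode {
    Function { name: String },
    Struct { name: String },
    Impl { type_name: String },
}

fn build_declarations(ast: &[AstNode]) {
    for node in ast {
        match node {
            AstNode::Function { name } => println!("open function declaration: {}", name),
            AstNode::Struct { name } => println!("open struct declaration: {}", name),
            AstNode::Impl { type_name } => println!("open impl context for: {}", type_name),
        }
    }
}

fn main() {
    let ast = vec![
        AstNode::Struct { name: "Point".into() },
        AstNode::Impl { type_name: "Point".into() },
        AstNode::Function { name: "new".into() },
    ];
    build_declarations(&ast);
}
```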
With regards to building and debugging straight from the IDE, again, KDevelop has support for GDB and LLDB. So it was essentially a matter of getting those hooked up to work with Rust executables. And this as well works straight out of the box. You can run your code directly from KDevelop. So same thing: you can add breakpoints, you can see all of the variables, all their current values, step through the program, et cetera, et cetera, all the regular things you would want to do when debugging. So, yeah. Code completion. Code completion currently works for local declarations. I'm currently working on getting it set up for the Rust standard library. This is proving to be a bit more interesting, because in the standard library a lot of things are implemented with the help of macros, and expanding macros is an interesting process. So yeah, I'm trying to work around that by doing what a lot of other language plugins do, which is essentially expanding the full source code of the standard library, and then KDevelop can just parse that directly. And in the future, I'm looking to do the same thing for other libraries. So cargo has this great thing, which is it downloads the libraries, it checks them out from Git, and you can essentially find all the metadata, all the information you need about where those libraries are checked out and where you can find the source code, so you can go through that, parse that, and yeah, I'm looking to do that in the future. And similarly for project management: at the moment, you can create projects straight from KDevelop. I'm looking in the future to expand the support for cargo integration, so the metadata that's provided by cargo for dependencies, for example, can be parsed and the IDE can then figure out how to do code completion and stuff like that for any libraries you're using. So yeah, I slightly rushed through this presentation because, yeah, I figured that I had a bit less time. But yeah, for the future, like I mentioned, there's a lot of stuff left to be done. Basically I've tried to make the library part of this project, which is AST Redux, a bit more agnostic, so it's not that tightly integrated with KDevelop; it can be used in other projects as well. So there's a lot of work that can be done there. Things like essentially using the Rust compiler to build and extract any errors that are found in the code and expose those. There can be integration with RLS in the future. So yeah, stuff like that, there's a lot that can be done in this regard. So yeah, I'd like to say thank you to the organizers of this conference, Carol especially for inviting me to give a talk, and everyone in the Rust community that was so helpful throughout my GSoC project and was giving me lots of feedback, and my mentors at KDE as well. So thank you, thanks to everyone, and thank you all for listening. With that, I just want to say if you want to try out KDevelop, come and see me at some point today and I can explain how you can do that, because at the moment it's based on the current master, so building everything from scratch might not be entirely straightforward. So yeah, come and see me at any point, I'm happy to help. So with that, are there any questions? Yeah. Where can I find the source for AST Redux? Yeah, it's on GitHub. I can give you the link. Yeah? I want to say first off, this looks really nice.
The demo was very cool. And I think it's a great idea. So I want to say first off, this looks really nice, the demo was very cool. And I was wondering about two questions. One is, since it's based on libsyntax, and I know we've been making a few changes to libsyntax in the last few months, have you been able to keep it up to date? Are you trying to keep it up to date? Like if I wanted to use it, besides having to build the master, would that be possible? Yeah, so at the moment, I am trying to keep it up to date. And the way I've set it up so that I can figure out when something breaks is I have a daily build on Travis set up. So if anything breaks overnight, I get an email instantly. So I've been trying to keep up with those. There haven't been that many of those so far. So yeah. And the last question, are you doing things like name resolution yourself in order to link up the local variables? So that's one thing that I'm trying to decide on how to do exactly, because KDevelop does name resolution for you. But obviously, if I wanted this to be a sort of agnostic library that can be used in other projects, then I would have to do name resolution within the Rust library. So at the moment, I'm using KDevelop's features to do name resolution, but I'm looking into moving that over into AST Redux. Were there any surprises or challenges that came up when you were working on top of libsyntax that you want to share? Oh, yeah. Yeah, libsyntax, I'm guessing because it was built at a point where Rust didn't really have the error handling features it does today, panics on quite a few things like unexpected tokens. So for example, if there's a backslash somewhere, then it just panics. So stuff like that, I had to try to work around. At the moment, I do the same thing as the Rust compiler, which is I spin it up in a new thread and wait to see what happens, basically. I think that I quite appreciated that KDevelop does a lot of these things for you. So I've worked with KDevelop in the past, so I sort of knew about this. And that's one of my main motivations for choosing to do it this way rather than using RLS, because of the way that KDevelop handles these things behind the scenes. If I had decided to work with RLS, I would essentially have had to reimplement all of those things, because KDevelop kind of works with its own data structures. And at the same time, it's based on Kate, so things like the syntax highlighting are actually implemented in Kate, and KDevelop only says, okay, well, for these ranges, do this sort of coloring or whatever. And yeah, basically, if I decided to use RLS, all that code would become obsolete, so everything would have to be reimplemented on top of the Language Server Protocol. So I think that would have been quite a bit more work. Yeah? You mentioned having some sort of mentorship. How did you get in touch with those mentors? So the Google Summer of Code program basically works like this: you submit a proposal and if there's interest from the organization to do that project, then they find mentors for you. So I was working with Kevin Funk, who works on the C++ plugin in KDevelop. And yeah, he basically helped me a bit through getting this set up and that kind of thing. Yeah? What would you be hoping to achieve by integrating RLS in KDevelop?
I guess stability, because RLS is actively developed by the Rust community. It has a lot of support from the core Rust developers as well. So essentially if anything changes at any point, I expect that that change would also be reflected in RLS. So what I'm trying to do with the library is to expose the AST, at the moment through a C API, which is useful for something like KDevelop. But in the future, it might also be useful to get further information, like analysis from the compiler and that kind of thing, exposed as well. So with RLS, I think it would make it more stable, basically. I wouldn't have to keep track of all these changes. Yeah? Would it be completely impossible to port your library to RLS and operate over the language server the way it should be? I don't think it would be impossible. I think that would make the RLS a lot more useful for KDevelop, for example. So yeah, at the moment, RLS implements the Language Server Protocol, which doesn't really have any options for exposing the abstract syntax tree, for example. There are other similar protocols. I was pointed to one over the summer, but I don't have it off the top of my head which language had that. But there was a sort of similar language-server-type piece of software which exposed more information about the structure of the code to the IDE. So if RLS had features like that, then it would be very useful for integrating with KDevelop, I think. One more question, for me anyway. You mentioned that KDevelop does things like name resolution. It seems to me that there are likely to be rules in Rust that KDevelop doesn't know about. And so would it be hard (let's assume the RLS would be extended arbitrarily), if you did get, say, name resolution information from it, can you feed that also to KDevelop and say, okay, you don't have to resolve names, I can tell you the answer right now? Or does it not have that ability? I think it should be possible to do that. My guess is at the moment, yes, it's possible, but I'm not 100% sure on that. Yeah? Do you have any sense, across IDEs and editors, different platforms, where the graphical debugging support is? Is there lots available or only a few different IDEs? Sorry, can you repeat? So outside of KDevelop, other IDEs, editors, do you know which ones have support for graphical debugging from the editor versus not? I'm not sure. I imagine IntelliJ would have that. I haven't tried it recently, so I'm not sure. But yeah, I don't know. Sorry. I can answer that. In IntelliJ itself you cannot; if you're using CLion, you can. And you can in VS Code via its LLDB integration, and you can with some work in Atom via its GDB or LLDB integration. So if you're someone who's not me, you could probably speak to them about it; that's the extent of my knowledge, anyway. Okay. Thank you. Thank you. Thank you.
|
Rust was voted “most loved” language by developers for the second year in a row in the Stack Overflow developer survey. There have been projects made using Rust on everything from operating systems to game engines for Minecraft-like games. Despite this, IDE support is still very limited. As my Google Summer of Code project, I worked on a Rust plug-in for the KDevelop IDE which aimed to support most standard IDE features such as semantic highlighting, code completion, project management and debugging. I will go through the challenges of integrating language support in an existing IDE and talk about how adding semantic highlighting was one line of code and getting debugging to work took less than 10 minutes.
|
10.5446/52224 (DOI)
|
Cool. So as Gene said, my name is Phil Freed and I'm a developer over at PAXata. And I actually started off not in software development, but I started studying painting and printmaking. So I come to this from a little bit of a different angle. So that's why I want to share some of that with you guys. It's something that's been pretty helpful to me, which is why I want to share it. So I want to start off right off the bat. I want to get this part out of the way. And that's just a basic working definition of creativity. And for our purposes, it's just the production of something that is innovative and useful. And I think it's important to point out that there's really both have to be there. If it's just novel, but not really useful for anything, what's the point? And if it's useful, but already somebody's already built it, then why build it again? So we really want both of those things. And this is important. All of these things that we use day to day, everything from the personal computer to the data structures and algorithms that we use, these are all creative productions. So we want, as developers, we want more of this. We can build upon our previous creative productions to get more. It's an awesome system. We look at some of these things in the slide, and awesome stuff built by amazing people. And this really points out one of the biggest sort of intuitive knowledges that we all bring to the table about creativity is we have this concept of creative people. So here we have Leonardo da Vinci and Mozart. And on the right is Grace Hopper. These are amazing folks that have built awesome, awesome stuff. And they're really, they're giants. And these people are all awesome, but they stand out in history because they're so rare. And the truth is that most creative productions aren't built by Leonardo da Vinci. Most of the stuff that we know and love, it's built by everybody here, right? Everybody can be creative. And you don't have to be da Vinci or Grace Hopper in order to build something that is both innovative and useful. And so what this talk is really about, it's about empowering everybody here. And I want to say that our view of creativity, I don't want to look at this fatalistically, right? Creativity is not something that's randomly doled out to a select few at birth. It's something that can be learned and nurtured and cultivated. And that's a really important idea. So to illustrate this, we're going to take a step back in history. All of you in this room, you're all going to come back in time with me. We're going to go back in time to the early dark ages in England. And we're going to talk about the history of the word creative. You know, a long time ago, we all of course live in an anarcho-syndicalist commune. And the creative is not even a word at this point in history. It's not a thing. It's not a concept at all. And if we're all back in this time period, so there's maybe about 100 people in this room, 95 of you are all just working morning till night on producing food. And what you do is you work morning till night and you produce barely enough food to survive. So we're all just kind of barely not dying. And what that means, since we have such a small surplus, so let's take an example, right? I'm Dennis over here and I want to try to innovate in how I do my farming. I'm going to try this new idea for how do I plant my crops. I try it out and it doesn't work. We're screwed. We're screwed. This is a big problem. So the consequences are dire. 
Now a bit later in history, right, we move forward a couple of hundred years and the situation looks a little bit better. I have a little bit more of a surplus. This affords me the opportunity to be a little bit more experimental. This is awesome. And turns out at this point in history is when we see the first usages, the first glimmers of the concept of creative. Creative is the first time creative is used as a word. So let's keep going. And we're now at the end of the 17th century and we've got a pretty good surplus. This affords us the opportunity for all sorts of experimentation. We're having all kinds of fun here. At this point we've got the Globe Theater, the Royal Academy for the Arts is just about to be founded. And the word creative at this point is now used to talk about the works of people. So back a couple hundred years before this, creative was just barely a word and it only referred to the works of the divine. It took some powerful juju. And now so-called great men can be creative reflecting sort of the sexism of the time. But today, today is totally different story. So now we use the term creative for all sorts of things. We use the term creative to refer to people. A kid does a drawing and you say, oh, that's very creative, right? I'll put it on the fridge. And we even expand the usage of that word to mean things that are not necessarily great. Right? And somebody says, oh, what do you think of my new tattoo? And you're like, oh, it's creative. And in fact, I mean, you could go to prison for creative accounting. And it turns out that these negative connotations are there for a very good reason. The word creative has these connotations because it's risky. Most attempts at creativity fail. So this graph, this is probably the most important thing in the whole talk. We're getting to it right off the bat. And that's because it points out the relationship between risk and creativity. When risk is high, creativity is low, and when risk is low, creativity gets high. So it's pretty easy to see why we consider the dark ages the dark ages, right? Because innovation was risky. People didn't want to be creative because of the whole death thing. Now another important point about this is I want to try to think about it in a slightly different way. The creativity spectrum. So here on the one side, we have a soldier who is soldiering, and we have George Clinton who is George Clinton. And I think it's important to point out that, OK, again, the risk thing, if the soldier tries something creative and it doesn't work out, the consequences are very, very dire. Even if George Clinton tries to write a song and fails, he just throws that bit away and nobody ever hears it. He writes another one. It's no big deal. It's like nothing. But the important thing to point out here is that this has nothing to do with them as people and everything to do with their pursuit. This soldier could be an immensely creative person back at home when he's doing something else. I don't know. And George Clinton, if he was off fighting some foreign war, might not be very creative about it. So the key point is really just that the pursuit is what determines whether or not we're being creative. And so as software developers, of course, I mean, all of you probably know what's coming next, right? We are all the way over here. Most of us. So, can't off. That picture just cracks me up every time. 
So we think, you know, we work in one of the most pliable mediums ever created, right, code, and especially if you have VCS, what's the worst that can happen? Well, it's not quite that simple, is it? Everybody here ever write code for something that really just can't fail, right? People writing code that's running rockets and cars and assembly lines and things like that. So if we want to increase creativity, what we need to do is we need to talk about risk. And we need to be open about it, have a good open discussion about the types of risk. Because it turns out risk is a little more nuanced than just like, oh, well, you live or you die, right? There's tons of different kinds of risk. So the obvious one is just functional. It's possible that my code could be buggy. There's also, most of us are writing code maybe for a job, right? So there's financial risk to the business. This happens especially when you have an established product, right, an established company that's got a pretty good revenue coming in off of some core features. So it doesn't really make a ton of sense to try to take risks in innovating on these core features that represent a major revenue stream, right? You're going to be a little more cautious about that. You know, you could maybe take too long to deliver it. There's risk there, timing. So a great example of dealing with all these risks is what we just heard about, Stylo. If you think about the approach to that project, right, it was not just like, oh, well, let's open up the Firefox source tree and start hacking in there, right? It was developed as a separate thing. That was for a lot of reasons, but one of the major ones is it allowed those developers to be more creative. So apart from these things, right, these are kind of institutional things to talk about with risk. This next one is a really important one. It's sort of lost a lot of times when we talk about risk. For us individually, are there social consequences of failure? Right? So if I'm live coding up here, how creative am I going to be? It's probably not going to work, right? You guys want to sit and watch me fight compiler errors all the time or write bugs? This is an important thing. Thank you. This is an important thing to talk about, though, and this really reflects on our engineering culture, right? What do we say culturally about this concept of failure and what is our relationship, as an institution, with failure? And of course, personal. This one, again, can be tough to talk about. We don't really have a lot of language for that, but I think anybody here could understand the experience of trying something that doesn't work, right? Maybe the business doesn't care. Maybe nobody else in the room is really worried about it. But if I personally have a hard time with that failure, then it's going to be hard for me to take those risks. And this is another one that gets often overlooked: with groups of engineers, we like to talk about the technical side and the technical risks, but this is a huge, huge barrier to actual creativity. So, okay, we are engineers. We like to talk about the technical stuff. So let's do that, right? Let's talk about how we can reduce risk in code. There's a couple of pretty easy things that, you know, I'm sure are going to be fairly obvious to most people if you think about it a little bit, right?
If I want to be more creative in, say, a refactoring, having unit tests goes a long way to help that, right? I can do whatever I want, and as long as my tests pass, I'm confident. This gives me the ability to be very creative. Version control, I don't think anybody here isn't using that; pretty much everybody is. But I want to call out that it's really important that we do. This allows us to be creative. Anybody have branches sitting around on their side projects that were just like, oh, I thought that was a good idea? I've got a million of them. And of course, you can write it in Rust. If I want to be creative about how I do something with regard to memory layout or parallelism, is Rust going to help me be more creative? Well, it's going to reduce risk, isn't it? It absolutely will. This is a great thing that you can do. But it's also important to add a little bit of nuance to this. This stuff applies, but it depends on the scenario, right? So there's a lot of different ways that we can be innovative. And I have the Linux penguin up here, because Linux, to me, is a great example of this. When people think about creativity, one of the things that we intuitively think about is things that are functionally brand new. And Linux is a great example because, you know, it started out, GNU/Linux is just a clone of something that already existed, right? It wasn't bringing new functionality. What it was bringing in terms of innovation is that it was extremely innovative on the process of development. It was extremely innovative in how it engaged with developers and engaged with users. And we've seen a ton of value with that. I don't think anybody would dispute that it's really ultimately helped, and they were able to build creative and innovative features because of that. So I don't think you can understate the importance of the relationship between risk and creativity. And I think, everybody here being very smart, when you go back into your life and you look at a task and you say, I want to be more creative in how I solve this, I want the results to be awesome, you guys are all going to be able to think of ways that you can reduce risk. The solutions for that tend to be context dependent, but they also tend to be pretty easy to arrive at when you think about it. So moving on from the risk, I want to talk a little bit about creativity and some of the other things that kind of happen there. Right? How many of you guys go to like a corporate training event or something and they say, we want to increase creativity, so everybody go into a room and paint pumpkins or something like that? You know, I think the risk thing is far more important. Straight up, if we want to increase creativity, the first thing we should do is talk about being able to reduce risk in all those different areas. However, there is some other important stuff there too. And so what I want to do is I want to talk about how creativity works in terms of two sort of generic phases. We'll call it ideation and evaluation: coming up with creative ideas and then deciding whether they really have any merit. So I'm going to start by looking at just the ideation part. And you know, the Venn diagram here is really just a pretty simple illustration of exactly what happens when we do this.
When I come up with ideas in my mind, what's happening is that I'm drawing from my breadth of experience and memories and I'm looking for overlaps. Maybe I'm working on, say I'm writing a garbage collector. But you know, in my past, I have experienced churning butter. And I come up with some crazy overlap between churning butter and garbage collecting. This is the kind of stuff that pops out. And we want these ideas. At this phase, you know, we're not really deciding whether it has merit or not. And it's important too to recognize that at this point we're also not, our ideas are not solutions. Our ideas may lead to solutions, but at this stage, you know, we're really just, we're looking for the overlaps and seeing, you know, it presents a path that we may choose to follow. Some people would say that you're generating like a mental representation of how you think about the problem. So we've got a number of things that we can do here to kind of help this out, right? We can indulge our curiosity. Everybody's here being enriched by this conference, learning about what other people are doing, even though it might not have anything to do with the specific task that you're working on. This is helping you be more creative. Diversity is a huge one. What's going to kill our creativity is if we came to a conference like this and it was all just people like us, right? Diversity is a huge thing. And this is great because, you know, it's not just at an institutional level, right, but also personal. So where am I spending my time? When we're talking about this ideation phase, what we really need to do if we want to be more creative is we need to cultivate this breadth of experience. I want to just shout out a quick word of warning here. I see a lot of you agree. Yeah. So this is because it doesn't work because these are not, this is not the kind of stuff that we can really internalize that enriches our breadth of experience. Just sitting next to somebody who's different than me doesn't enrich my breadth of experience if I don't engage with them, if I don't work with them and talk with them. So we can't just fake it, right? So this is ideation. We've come up with, you know, the overlap between butter churning and garbage collection and no, I have no idea how I thought of that as an example. Let's just say it was creative and it's maybe not the best. So after we come up with this idea, right, we have this structure, this representation of the problem and we're going to follow this path, we then evaluate it. And here it's, the story is very different, right? This is another reason why these like open floor plans and things like that don't work because here what we really want is we want a depth of knowledge, not a breadth, a breadth of knowledge isn't necessarily as helpful here. We want the depth of knowledge, we want critical thinking and we want focus. Most people at this stage of the game tend to be kind of quiet. One of these theories of creativity came about, the earliest ones, were just based on observations of people that others thought of as creative. People would see folks who they thought of as creative and they said, wow, I happen to notice that Tolkien takes a walk every morning and kind of gives himself time for some quiet contemplation. So evaluation what we're really doing is we're turning this over and deciding whether it's worth pursuing. Again we don't have a full solution at this point. 
We've only got this kind of weird amorphous overlap between multiple ideas and what happens is that this ends up circling back to this ideation phase. So this is a cyclic thing and one informs the other. So I come up with my overlap with the garbage collection and the butter churning and I turn it over in my mind for like five seconds maybe and I'm like, I don't think that's worth pursuing. And the important thing, maybe it is, you know what, somebody in this room is going to prove me wrong. Better GC is going to be the next big Java feature. I can feel it coming. But it cycles back to innovation and it changes our world view. So when I evaluate this idea and I say, you know what, that wasn't right, then it changes how I then think of future problems. So in this way it kind of builds on itself. I have a picture of blind justice here and it's really a bit misleading because the truth is at this evaluation step, this is not simple. This is not a simple decision where we say, oh, this is going to work or this is not going to work. Most of us as coders have been in the experience where somebody comes up and asks you, well, can you make this work and you think, well, yeah, I can get out the big hammer. We can get that square peg through a round hole if we push hard enough. And what this is really about is it's not whether or not it can technically be made to work. It's more about the principle of affordance. Anybody ever heard of the principle of affordance? So hugely important design principle. You guys are all going to understand it pretty quickly with this next image. So when we are talking about making something, we take this idea that we have and we imagine what it's going to look like. And if you look at the doors over here, if I'm designing a door handle, I would say that the door handle on the right affords pulling. When you look at it, you immediately think, oh, I could pull that. And you would try to pull it. And how many people probably try to pull on that door without ever reading the sign? It probably happens all the time. I do it. Whereas the one on the left, it's pretty obvious how it's supposed to be used. So this isn't quite what can it technically do, but what does my idea afford? And it's a much more nuanced decision. So let's talk about this in code. We're in code. I don't know about you guys. I tend to, I do most of my work with traits and unimplemented functions before I ever write any line of executable code. And the reason for that is what this looks like in terms of the creative process here is I'm taking the ideas, and they're in my mind, we're starting to solidify a general structure or a path. We're solidifying that into something that might be a solution. But what we want to do here is we want to figure out, is this really what I want? And what does it afford? I want to figure that out pretty quickly. Because if it doesn't afford the things that I want, if it's not quite right, I want to either scrap it or refine it as quickly as I can. So unimplemented macro, for me, this is like the most commonly used macro in my code. I use this everywhere. Unimplemented or just define a trait. Maybe it'll be a concrete type later, I don't know. Just define a trait. Don't worry about the implementation, but be able to see what my design is going to afford. So this clearly affords shooting the moon, which is something that I'd love to do. But you can also think about maybe Rust strings as an example. Rust strings afford slicing. This is a really useful thing. 
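As a concrete sketch of that workflow (the trait and names here are hypothetical, purely for illustration), you can stub everything out with unimplemented!() so the compiler checks the shape of the API before any real code exists:

```rust
// Sketch the design first: the signatures say what the API affords
// (for example, borrowing a slice of rows), while the bodies stay unwritten.
trait Report {
    fn title(&self) -> &str;
    fn rows(&self) -> &[String];
}

struct CsvReport;

impl Report for CsvReport {
    fn title(&self) -> &str {
        unimplemented!("fill in once the design feels right")
    }
    fn rows(&self) -> &[String] {
        unimplemented!()
    }
}

fn main() {
    // This compiles, which is the point: we can evaluate what the design
    // affords, and refine or scrap it cheaply, before committing to an
    // implementation. Calling the stubs would just panic.
    let _draft = CsvReport;
}
```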
And so if you want to solve your problem with slices, if slices are going to be useful to you, then you want to give yourself an API that affords the kind of things that you want. Now, the other part of this is that, again, this needs to be innovative and useful in order to really be what we are after. And again, we're circling back to ideation. So I'm going to decide, well, this isn't quite right. I want to change it a little bit, refine it, go back and forth. To do that, I need to decide whether or not this is going to be considered a failure. And I use the term failure pretty lightly. In code, it is pretty light. Maybe most of this is fine, but it's just missing a semicolon, NBD. But maybe I just need to add a few functions, change few signatures, adjust things here and there. Either way, this requires me personally to say, this isn't right. And that could be hard. So failure, I think, it's an unavoidable part of the creative process. If we want to be creative, we have to get better at accepting failure. I'm going to tell you guys a real quick story from when I was studying art. I came in one day and I turned in a painting and it wasn't very good. And my art teacher, you could see it on my face. I came in and I didn't even want to show it to him. I was like, this is a turd. And he told me, look, you got to adjust your expectations. He said, I can see that you want to really be successful as an artist. And I'm going to tell you what that means. He said, what that means is that if you work very, very hard and you're very talented, that one in every seven of your paintings is going to be something that you actually feel good about. Think about that. That's huge. Six failures for every success. And this was words from a professional artist. And so this is like the key thing that I think we can learn as developers from creative fields is that I had to learn how to fail. And this is a huge, huge thing for me in being a developer. I learned how to fail. So failure is not a big deal. That allows me to cull those ideas pretty quickly. I can be ruthless in evaluating my own ideas because the risk to me isn't as great. There's a great book about this called Art and Fear. And if you're interested in the topic, I definitely recommend it. It's real short. But it talks about that aspect of how do I deal with my own creative failures? And if it helps, you can always cross out art and write software. So that's all I have. I hope for you guys that this was helpful and that you feel empowered to go out and be more creative. And if there's a question or anything that you have, come find me during lunch or afterwards. But thanks very much.
|
Creativity as a concept is not generally well understood, and that’s especially true as it applies to programming. Creativity can be either invaluable or dangerous, and sometimes it’s both. By understanding creativity, you’ll be able to leverage it to build awesome software. In this talk, we’ll explore what it means to be creative and how it relates to programming, and especially to Rust. Expect to come away with some tips for how to let your creativity flourish.
|
10.5446/52205 (DOI)
|
All right. Hey everybody, I'm Steve. Thanks for coming to my talk. This talk is titled Should We Have A 2021 Edition? And today is the 27th. So for those of you who don't know, I am on the core team of Rust. I previously led the documentation team and I wrote The Rust Programming Language, which is the introductory book on Rust that comes with Rust itself. Two other things, though, that you may or may not know, I figured I would mention before getting to the meat of this talk. I recently got a new job at a company called Oxide Computer Company. We're building new server computers for people to use, and we're doing like everything in Rust. So these days, I'm writing embedded Rust and it's awesome. We've had an embedded working group working on it for a while, and I appreciated the work in the abstract sense, but now that I'm actually doing the work as my job, it's fantastic. So that's pretty cool. And we are also hiring. So if you are interested in a job where you will write a lot of Rust, you may want to check that out. Finally, I have started actually working on open source Rust streaming stuff. So I'm doing my Twitch now. So if you want to watch me program on Tuesdays, that's the thing I'm doing. Okay. Anyway, enough about me. You want to hear about editions. So today, we're going to cover these three major points in this order. The first one is: what exactly are editions? I want to make sure that we're on the same page about the details, because the details do actually matter, and matter a lot here. Secondly, we've already done one edition release, which is Rust 2018. And so I kind of want to take a look at that, how it went, and some thoughts, you know, after some time has passed. So it's a little bit of a case study, maybe a little bit of a retrospective; let's talk about Rust 2018. And then finally, what should we actually do? And this is phrased as: should we have Rust 2021? That's kind of a big thing people have been talking about lately. I want to emphasize, especially with both of those second sections, that this is my personal opinion; while I am on the core team, I am only one person. And, you know, this is all phrased as "should we" because technically nothing has been decided. So while I do have my own opinions on this, feel free to disagree, and just know that this is me saying this. So take that as far as that goes. All right, so first up, this first section. What even are editions anyway? If you don't know at all, thank you for coming to my talk even though you don't know the subject; this is exactly why I included this section. But even if you do, there are some details that maybe some people don't always think about, so I want to talk about a lot of those details. There are kind of two aspects to editions that I think are really important. The first one is the sort of social aspect of Rust. An edition is kind of this point in time where we say, hey, Rust is significantly different now than it was in the past, and here's kind of why. So Rust 1.0 came out in 2015. Rust 2018 was the first edition sort of release that came out in 2018. And we sort of said, hey, you know, Rust has changed a lot in the last three years. Let's talk about all the stuff we've accomplished and add in some other things. And so that's like largely a social thing.
So this is a way to sort of reflect on the longer term progress of the language and the project as a whole. Because we release every six weeks, and that is a really great thing for a lot of reasons, but it's really hard to remember how much work we've done. Releases come out all the time and they're in small little chunks, and so it can be really, really easy to forget just how far along things have come. Half the time I forget that it's been almost five years since Rust 1.0, because when things are happening at such a rapid pace, it's really easy to lose track of how far we've come. And so that's really important. A second point on why editions matter is that they are a way to get new users into Rust. So this is kind of similar to the first point, but a little bit different. Basically, because we release every six weeks, there's a lot of people who don't pay attention to Rust releases, because they see them so often and they have not that much in them. And so they're kind of like, hey, a new Rust release happened, I don't care. With the languages that release like once a year or once every couple of years, it's a really big deal. And so a lot of people will hear, like, oh, there's a new version of C++ out, or there's a new version of Ruby out, or the new C# has these cool new things in it. And that's a way for people who don't currently program in the language; it signals to them, hey, you know, I should check this out. Maybe I looked at C# 3.0 and I didn't really like what was going on, so I didn't use it. Maybe with C# 4 it's time for me to check in again; maybe I want to start programming in it. And so it's kind of a nice way to signal to people outside of the Rust world as well that, hey, a lot has gone on, and maybe if you didn't use Rust in the past, you would want to use it again. And then finally, kind of for the actual development of Rust, it's sort of a nice rallying cry, I guess. Basically, not only is it about reflecting on what we've done, but also, if there are big things we want to do, it's nice to pick a point in the near future and say, hey, it's 2020 now, we want to do a Rust 2021, and we want to get people excited. So let's start working on some projects that will really generate that excitement. And there's some back and forth here, and we're going to talk about all those details in a minute. But it can be a really great way to get everybody excited about the future of Rust, because, you know, we need those check-in kind of points, I think. But editions are not purely a social mechanism, they're also a technical mechanism. And this really matters. So on a technical level, editions are a way to kind of make breaking changes to the Rust language without actually making breaking changes. And the way this works on a technical level is that editions are opt-in. So you say which edition your code is in, and if you don't update that, then you don't update, you know, stuff. So if new things are added in a way that's technically breaking, it doesn't break you, because if you didn't opt in to the new changes, then your code still works forever. So this is very different than, you know, Python 2 to Python 3, for example, where you can't run your code kind of like together.
And, you know, similar kinds of things like that. So we can make breaking changes in a way that's opt-in, and that's really, really important for compatibility reasons. And then finally, editions are not allowed to change everything about Rust. This is a technical constraint as much as it is a social one, and there's some interplay back and forth between the two, but basically, major, major changes are not actually allowed. Some kinds of breaking changes are fine, but there are some kinds of changes that cannot be made with the edition mechanism, and that's due to the technical details of how all of this works. It's also kind of a social thing, in the sense that it's useful for humans: if, say, Rust 2021 were going to be garbage collected and use significant whitespace instead of curly braces, it would effectively be a whole different language, and that would make it a really big challenge for people to update to. But because we can only make certain kinds of breaking changes in editions, it's significantly easier for people to upgrade — not only because of the opt-in nature of things, but also because Rust is still going to be Rust. Even if there are some tweaks to how it works and some changes, the core idea of Rust will be the same. So let's talk a little bit about what I mean by breaking changes in an edition. I think one of the best examples of a breaking change was the async keyword. Rust has, as I said, two editions right now: 2015 and 2018. In Rust 2015 code, async is not a keyword, but in 2018 it is. What that means is, if you look at the code I have here on the left, we have a function named async that just takes an integer and returns it — it doesn't do anything fancy. We call it, passing in five, and give the result an underscore-prefixed name since we never use the variable, to get rid of that warning. If I run this code — this is the playground interface on play.rust-lang.org; you can open the little dropdown with the three dots and pick which edition you're in — if you choose Rust 2015, this code will compile, and it runs. It doesn't do anything, because we just pass an integer around, so there's no output. But if we change that to edition 2018, we will get an error, which says: expected identifier, found keyword async. So we're allowed to name a function async in Rust 2015 because async is not a keyword there, but we're not allowed to name it async in Rust 2018 because it is a keyword. This is an example of: if we had purely made this change to the language, code that existed and ran just fine before would end up breaking. So it is a breaking change — but because you can choose whether or not to upgrade, it's also not a breaking change. This kind of duality was very controversial when we were coming up with the plan for editions, and it was also one of the big challenges of communicating this to users: "technically it's breaking, but it's also not breaking" is very interesting and kind of unusual. Is my mic still working? All right, cool — dropped out there for a second. So, the way you opt into this change, if you're not on the playground — because most serious Rust programs are not running in the playground, that would be ridiculous — is in the Cargo.toml. You set an edition key by putting 2018 in there, and if that key does not exist at all, then it defaults to 2015.
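To make the async example concrete, here is roughly the snippet being described — a minimal sketch, with illustrative variable names; the opt-in itself is just an edition = "2018" line under [package] in Cargo.toml:

    // Compiles and runs on edition 2015, where `async` is an ordinary identifier.
    // On edition 2018 this fails with: expected identifier, found keyword `async`.
    fn async(x: i32) -> i32 {
        x
    }

    fn main() {
        // Underscore prefix silences the unused-variable warning.
        let _x = async(5);
    }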
So all of the code that was generated before we had this idea of editions is able to stay on the default 2015 edition, but if you opt in, then you're able to do this. And cargo new will start generating 2018: I just typed cargo new to generate this, and it defaults me to the latest. That way, new projects start on the latest edition, but older projects stay on the edition they were created with until their creators explicitly opt in. That's how this happens in most real projects. One thing that's really important, though, is that editions can interoperate just fine. Let's say that I had packaged up my useless async function into a crate, and I wanted to use it from some code that was in Rust 2018. Well, one of the features that was added in Rust 2018 was raw identifiers — the ability to escape a keyword. I showed you the error message in the 2018 code before; right below the error is a little help message that says: hey, you can escape keywords to use them as identifiers. With this r# prefix, both at the call site and at the actual declaration site, you're able to still define and call a function named async, even though it's a keyword. Why this matters: you can imagine that the async function lived in a 2015 crate, and I would still be able to call it from a 2018 crate — and vice versa, if there were some other way of doing it. So this is one example of the ways code can interoperate. But in general, the idea is that you don't have to worry about your dependencies at all. Other code is allowed to be in any edition, and it will compile into your project just fine, and there's no worrying about interoperability or compatibility — other than, if some items are named after new keywords, you have to know the escape mechanism. What this means is we don't have the situation where, when a new edition comes out, everyone is forced to upgrade all at once — because that doesn't happen. It takes a long time, there are always some people who are going to prefer older things, and we don't want to bifurcate the ecosystem. This way, these versions can live in total harmony, your dependencies can upgrade at their leisure, and you can upgrade as fast or as slow as you'd like without being locked out of the rest of the Rust world. That's a really helpful way to make sure all of this operates smoothly. Specifically, as I mentioned before, part of the way this works is that editions are not allowed to change everything. These are the rules that were set out in the original RFC, and they're not complete — this is not necessarily a full list of everything that can and can't change — but I wanted to point out some specifics. An example of a thing that can change, as I've already shown you, is new keywords. We're allowed to define new keywords in new editions, and that's totally fine; like I said, they interoperate across the boundary, no problems. The second thing editions are allowed to do is repurpose existing syntax. I say repurpose because you can remove existing syntax, but that's really about how it gets repurposed — it generally shouldn't only be removed, it should be replaced with something that does something slightly different.
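Here's a sketch of that raw-identifier escape — defining and calling a function named async from 2018-edition code. old_edition_crate below is a made-up name standing in for a 2015-edition dependency:

    // In a 2018-edition crate, `async` is a keyword, so both defining and
    // calling a function with that name needs the r# escape.
    fn r#async(x: i32) -> i32 {
        x
    }

    fn main() {
        let _x = r#async(5);
        // Calling the same function from a hypothetical 2015-edition dependency
        // would look like: old_edition_crate::r#async(5)
    }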
So an example of this is trait objects — in this case I'm just calling the trait Trait. If you were using it as a trait object, originally you would just use the name of the trait: you'd have Box<Trait> or Arc<Trait>, something like that. What we did in the 2018 edition was deprecate that usage of just the trait name on its own, and we introduced dyn Trait — unfortunately my slides put this on two lines; normally it would be just one line. This helps you know that, hey, this is a trait object, because we're doing dynamic dispatch. So we replaced the existing syntax for doing a thing with a new syntax instead, and you're able to use the new syntax, and it's clearer to people that dynamic dispatch is what's happening. So we're allowed to change some existing stuff, add some new things, and tweak things. Another example of something that changed in the 2018 edition was the module system, which is a pretty big change — I'll talk about all the changes we made a little bit later. But there are also some things we can't do. Earlier I mentioned that we couldn't make really big sweeping changes, but there are actually slightly more specific things; some of them are based on practical limitations, and some are based on decisions we've made. For example, the coherence rules can't change across editions. If you're not familiar, a coherence rule determines whether a type is allowed to implement a trait — do you get a compiler error when you try to compile a trait implementation for a type? A common example of this rule kicking in is: if I have a type that's defined in the standard library, like String, and a trait that's defined in a third-party package that's not mine, say serde, I can't implement serde's Serialize for String, because it's not my trait and it's not my type. That doesn't work. If I had made my own string type, I'd be allowed to implement serde's trait for it, and if I had made my own trait, I'd be allowed to implement it on the standard library's String — but I have to own either the trait or the type. The rules are a little more complicated than that, but that's the gist of it, that's the biggest thing. So that's not allowed to change: we couldn't, say, relax the rules in one edition but not in the other. That's because coherence rules are kind of global, and they apply across different crates. If we had different rules for different parts of your program, that would be extremely hard to implement, very confusing, and possibly just unsound — it's not a thing you're able to do, for all of those reasons. Coherence rules have to act globally, and therefore they have to stay the same in every edition, no matter what. So that's tricky. Secondly, another thing people don't always appreciate is that we can't make breaking changes to the standard library. Intuitively you're like, wait, if you can change crates, the standard library is just a crate — why not? Well, it is just a crate, but it's also kind of not just a crate. You get one copy of the standard library for your whole program, so if you had a dependency that used Rust 2018 and then you used Rust 2015, say, you would need both copies. So it doesn't really work that way.
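Two tiny sketches of the points above — the dyn Trait spelling for trait objects, and the shape of impl that the coherence (orphan) rules reject. MyString and print_it are just illustrative names:

    use std::fmt::Display;

    // Edition 2015 spelling was `Box<Display>`; the 2018 style adds `dyn`,
    // which makes the dynamic dispatch explicit.
    fn print_it(x: Box<dyn Display>) {
        println!("{}", x);
    }

    // The orphan rule: something like this would NOT compile here, because
    // neither the trait (serde::Serialize) nor the type (String) is ours:
    //
    //     impl serde::Serialize for String { /* ... */ }
    //
    // Wrapping String in a local newtype (or defining your own trait) is the
    // usual way around it.
    struct MyString(String);

    fn main() {
        print_it(Box::new(42));
        let _local = MyString(String::from("hi"));
    }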
So we're able to deprecate things, but we're not able to remove them. There's some talk of maybe having some sort of visibility scheme where items still exist but are only allowed to be seen in 2018, or only in 2015 — those are proposals, and they're not real yet. So this is kind of half a true technical limitation and half a social limitation. Somebody asked a question on Twitch: how does a deprecated feature move from deprecated to removed, and can that be done between two editions? One interesting thing about this is that some of this stuff is kind of up in the air, policy-wise. Basically, things can't start erroring within the same edition; the way you usually think about it is that something becomes a warning in the old edition and then an error in the new edition. The original RFC talks a little bit about this: code that is warning-free on the first edition should compile on the second edition — maybe you'll get new warnings, but it won't break. But if there are warnings in your first-edition code, maybe the second edition would break it. I'm going to talk a little more about this at the end, because the exact policy is a thing we're talking about. So yeah, the rules are slightly up in the air, but originally the intention was that you introduce warnings in the first edition and you remove the thing in the next one. There are some questions about whether that's too fast, or whether we should require a whole edition to go by. We require language features to wait for a whole release in nightly before we're able to stabilize them, for example — there's a mandatory minimum waiting period — so maybe there should be a mandatory waiting period for deprecations too. I hope that answers enough of your question — hey, Jared — but I'll talk about it a little more. Okay, so why do we have these restrictions? I talked a little bit about some of them, like there being one copy of the standard library everywhere, but let's get into some details. I think this matters because it really helps you understand why certain things are allowed and why certain things aren't, and it's also just kind of fun to talk about the compiler and how it works. So this next section is going to be about editions, I swear, but it's also kind of about how the compiler works, because I think that's interesting and kind of matters. Rust is currently — sort of, kind of; there's an asterisk here that I'm going to get to later — what's called a multi-pass compiler. This is a classic architecture for compilers: if you take a compiler class at your university, this is how they teach you compilers work, and a lot of compilers in the world are implemented this way. Basically, there's this concept of passes: you take in source code, and you spit out something. A lot of older compilers are called one-pass compilers because they directly turn the code into the actual binary code. Think about how a lot of older languages required you to declare variables at the start of functions, for example — that was because they had one-pass compilers, and they needed to emit the stack space for those variables right away. Okay, my audio seems to be back, cool. I don't know why that's happening; it's probably something on my end. But anyway — yeah, older compilers are one-pass.
That's also why a lot of them were super fast: they didn't do a lot of this stuff. But over time we needed things to be more complicated, and so people developed multi-pass compilers, where you do multiple steps — you iterate over the source code in multiple ways, and that's how the output is produced. The Rust compiler's multi-pass architecture has traditionally existed and is similar to many other compilers, with the input being source code, and it goes through each of these steps in turn. The first one takes the source code and creates an AST out of it, which is an abstract syntax tree. Then it takes that AST and produces HIR, which is the high-level IR. It takes the high-level IR and produces MIR, the mid-level IR. It takes the mid-level IR and produces LLVM IR, and then LLVM takes that and produces the final binary. So there are a bunch of these steps, and within these steps there are smaller steps; all of this sort of happens, and we're going to talk about it in a little more detail. Compilers are traditionally built around three different phases — that's why you have these passes: you do a full pass for each of the three steps. The first is lexical or syntactic analysis: is your code well formed? This is like grammar rules — does the sentence I'm saying follow proper grammar or not, does your program follow the language's grammar or not? Second is semantic analysis: does this code make sense? For example, I could say "this sentence is false," and that sentence is grammatically correct, so it passes lexical or syntactic analysis. But semantically it's very unclear what it means, because if it's true, then it's saying it's false, which means it's false, which means it's not true — so there's some self-reference there. That's just the first example I could come up with, but you can imagine gibberish that uses all real words: maybe it's structurally correct, but it doesn't actually make any sense. Semantic analysis makes sure that the thing you've said is sensible. And then finally there's code generation, which doesn't really have an analog in the natural-language analogy — I guess it's the vocal cords turning it into sound; maybe that's stretching the analogy a little too far. Once we've verified that everything works and makes sense, we actually produce the binary out of it. This step in itself has a bunch of different passes — for example, optimization passes that make the generated code faster. That all happens in this stage. But those are the three big giant steps. If you've ever wondered how cargo check works, for example: cargo check will run the lexical and syntactic analysis and the semantic analysis, but won't generate any code. This is an example of how understanding this can help you practically as a Rust developer: use cargo check when you want to check that your code makes sense but don't actually need to run it, and you can save yourself a lot of time by not making the compiler do code generation. So that's one example of how this architecture lets you do less sometimes. There are a couple of terms for these steps between representations — some compiler jargon you may be interested in. One of them is called lowering.
And so that's the word you use when you talk about going from one form to the other: for example, MIR is lowered into LLVM IR, or the AST is lowered into HIR. The reason it's called lowering is that at every step along the way, things get simpler, and we throw away things that we've already validated, which makes future steps easier. For example, MIR does not have the concept of complicated fancy loops — for loops don't exist in MIR. What happens is the previous steps in the compiler take your for loop and turn it into a plain loop with a break in it, and MIR is able to understand a loop with a break. So we've removed a construct from the language by the time we've gotten to the lower step, and that's why the later steps are simpler. That definitely really matters, and it's why it's called lowering: you're breaking down what's happening into simpler and simpler things. And then finally there's a step called a pass, which basically means a check that validates that your program is well formed. It does not necessarily do a transformation, although technically it can. For example, type checking is a pass on your code: it runs over your code and makes sure everything makes sense. A lot of things are kind of passes, and sometimes they do transformations — this is basically the "okay, I'm going to take your for loop and rewrite it as a simpler loop" step, which happens first as a pass, inside HIR, I believe, and then the lowering step is what turns that simpler form of HIR into MIR. So they work together. Okay, so here are some examples of actual code going through these steps. I did this about a year ago, so the samples are a little dated, because the compiler output changes all the time, but conceptually it's the same — I figured being a little outdated is actually better, because you shouldn't get hung up on the specifics. So, for example, let's talk about taking code and producing an AST. We have this function called plus_one: it takes an i32, adds one to it, and returns it. We can actually ask the compiler to print out a JSON version of the AST, because again, these are all data structures inside the compiler — they don't really have a text representation, but you can print them out as things like JSON. For example, a little bit of plus_one looks like this: there's a statements field, which is an array of nodes, and inside of there each node has a variant. So this one is an expression, and then it's a binary expression that's an add — you can see how the little bits all fit together — and then we're adding x, and it keeps going on to talk about adding one and so on. You get this data-structure view of your code, and that's what Rust's AST looks like. Here, we take our statements, and the statements refer to an expression; the expression refers to a binary expression; the binary expression says, hey, we have an add expression that adds x and one together. This is why it's called a tree: there's a root, which is the statements, and then branches and leaves that together build a tree. That's what an AST looks like visually. So, fundamentally, the AST is a data structure: it's our code, the thing we wrote in words, represented as a data structure.
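To make "the AST is just a data structure" concrete, here's a toy, hand-rolled expression tree for x + 1. This is not rustc's real AST type — just an illustration of the shape:

    // A toy expression AST, loosely mirroring the statements -> expression ->
    // binary-add -> (x, 1) shape described above. Not rustc's actual types.
    #[derive(Debug)]
    enum Expr {
        Var(String),
        Int(i64),
        Add(Box<Expr>, Box<Expr>),
    }

    fn main() {
        // Roughly the body of `plus_one`: x + 1
        let body = Expr::Add(
            Box::new(Expr::Var("x".to_string())),
            Box::new(Expr::Int(1)),
        );
        // Because it's just data, it's easy to walk, print, or transform.
        println!("{:?}", body);
    }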
This means it's easier to manipulate: if you have a data structure, you can just manipulate it — that's what they're there for — but if you had to work on the textual representation of your code, it would be much harder. So the idea is that we break the text down into a data structure and then do all our operations on those data structures. From the AST, we move on to the HIR. HIR is short for high-level intermediate representation, and this is basically where well-formedness kinds of checks happen — what I mean by that is things like: have you imported all the stuff that you've used? Some things are simplified here too; for example, in my understanding HIR does not have use statements — they get turned into the elaborated, full versions of all the paths and types. As an example of the sort of transformation that happens here: this is a very simple for loop — there's a reason I've referenced for loops several times already today — where we take a vector of five integers and loop over it, printing them all out. The AST takes that code literally as written and represents it that way, but when it gets lowered into HIR, it ends up being something more like this (a rough sketch of the desugared form follows below). You'll notice the for loop is totally gone, and we now have a loop with a match statement inside: we turn the thing we're iterating over into an iterator, call next on it repeatedly, and the body of the loop happens inside, and all these things. So you can see how it's simpler in the sense that there are fewer language constructs, and more complicated in the sense that there's more code. That's the reason we write the higher-level stuff in the first place — it's easier for humans to understand — but for the computer, having fewer things makes the analysis much simpler. So we do that kind of stuff. Most checks in the compiler today are done on HIR, at least in my understanding. HIR was the original Rust IR: the first version of the compiler — or maybe "first" is a little strong, but for a very long time — turned everything into HIR and then went to LLVM IR from there. So it's kind of the OG thing, and a lot of things are written in terms of HIR. Two things that are still done on HIR are type check — do all the types make sense, have you not made any type errors — and method lookup. That's done at compile time; in some dynamic languages method lookup is a dynamic process. Basically it figures out things like: which trait are you actually calling when you call a method, or is it an inherent method, those kinds of things. These are all done on HIR. Then we move from HIR to MIR, and MIR became the subject of a lot of discussion in the Rust world, so you may have heard about it over the last couple of years. MIR is ultimately about control flow. HIR represented our code more or less the way we wrote it, but MIR totally rewrites it into a simpler form — one that's based on a control flow graph, as the term goes, rather than an AST, which is a tree.
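Here's the rough sketch of that for-loop desugaring promised above, written out as ordinary Rust. It's schematic — the compiler's real HIR output has more detail — but it shows the loop-plus-match shape:

    fn main() {
        let v = vec![1, 2, 3, 4, 5];

        // Original form:
        // for i in &v {
        //     println!("{}", i);
        // }

        // Roughly the desugared form: an explicit iterator, a `loop`,
        // and a `match` on `next()`.
        let mut iter = (&v).into_iter();
        loop {
            match iter.next() {
                Some(i) => {
                    println!("{}", i);
                }
                None => break,
            }
        }
    }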
So going from a tree to a graph is helpful for certain kinds of analyses. Specifically, the graph represents the way that control flows through your program — which statement executes in which order. That matters because, for example, non-lexical lifetimes need to know how execution flows within your program in order to work, and it was very difficult to write that pass on HIR. So this is a practical example of why this stuff matters: we had to invent MIR to make non-lexical lifetimes feasible, because we needed to be able to encode that control flow to make it actually happen. And I say "we" because I'm on the team, but to be absolutely clear, I did none of this work — a lot of other really great people made it happen. I don't know what's up with my audio dropping. Anyway, another interesting thing about MIR, beyond the move to control flow, is that it's kind of the core of Rust: everything that's rusty about Rust, without any superfluous extra concepts. Like I said earlier, fancy loops are gone — everything is purely loops and breaks; I think it might even have gotos, if I'm remembering correctly. Borrow checking is done here because, like I said, non-lexical lifetimes. MIR is kind of the core of what makes Rust Rust; it's the compiler's representation of what Rust is. As an example of what MIR looks like, if you dump it out — again, this is a little old, and this pseudo-Rust is not actually Rust code, but it kind of looks like it; we're printing a data structure that doesn't have a real text representation — you can see our add_one function. It declares a couple of locals, _0 and _2, and _1 is the first parameter. And there's bb0 — BB stands for basic block; because we're paying attention to control flow, we have this idea of blocks. It'll say StorageLive, meaning that the second variable is live in this area, which is an analysis that happens about control flow — it's too complicated for me to get into, and I'm too far into the weeds already; you can read the docs in the compiler if you want to see everything that's happening. You can see our add: it adds the constant one and our local, then declares the storage dead, and then returns. That's roughly what MIR looks like — not anything you ever have to worry about as a Rust programmer, but if you want to see how the compiler sees your code, looking at MIR can be really useful. And then finally, MIR gets lowered to LLVM IR. LLVM is a compiler toolkit that you can use to build stuff. It's a VM in the technical sense of the word, but not in the way everyone uses the word VM — LLVM used to stand for Low Level Virtual Machine, but the LLVM project changed its name so that it no longer references virtual machines, because that got too confusing for many people. Basically, optimizations and code generation are done by LLVM for the most part. There are a couple of optimizations that happen on MIR, and we hope to do more of them in the future, but Rust would not be anywhere near where it is today without LLVM, so it's very important to us. It's the lowest level at which the compiler operates: LLVM takes the compiler's output and produces the binary, so we hand off to that library for the last couple of steps.
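If you want to poke at this yourself, the function being dumped is just something like the add_one below. Stable rustc should be able to write out its MIR with rustc --emit=mir, and the playground has MIR and LLVM IR views as well (exact menus may differ by version). The comments give a simplified sense of the MIR shape; a real dump has more locals and, in debug builds, overflow checks:

    // The `add_one` example from the slides.
    fn add_one(x: i32) -> i32 {
        x + 1
    }

    fn main() {
        // MIR for `add_one` looks very roughly like (schematic, simplified):
        //
        //   fn add_one(_1: i32) -> i32 {
        //       bb0: {
        //           _0 = Add(_1, const 1_i32);
        //           return;
        //       }
        //   }
        //
        // i.e. numbered locals, basic blocks, and explicit control flow.
        println!("{}", add_one(41));
    }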
Andrew Livret — sorry if I mispronounced your name — is asking what the timeline is, if there is one, to get SIMD support in stable. So, SIMD is actually already in stable Rust, but only the x86 versions, and it's the low-level unsafe primitives. So I think maybe you either, one, don't know that's true, which is totally fine, but that does exist today; or two, you're talking about higher-level SIMD, which is what you would want to write as a regular programmer instead of the intrinsics. I don't actually know that there's a group working on the higher-level stuff right now. The low-level stuff does work, though, on x86 at least, and given Arm's recent interest in Rust, I'm assuming the Arm intrinsics will be soon to follow — I think maybe some of it already works, but I'm not actually 100% sure. So yeah, still more work to do there; I don't know the exact timeline yet. Question by jam one Garner: what would goto be used for in MIR — just more complicated breaks? Basically, structured programming is useful for us as programmers: we write loops because we don't want to have to think in gotos, but gotos are conceptually simpler because they're allowed to do whatever, and so they're actually easier for the compiler to understand than the higher-level constructs. I want to emphasize that I don't work on the compiler myself, so I believe that gotos exist in MIR, but I don't 100% remember — I might be a little wrong there. If you care about this topic you should look into the details a little more, but I believe that's the case; I'm just not 100% sure. Finally, there's a question about working with Rust and Fuchsia — I will talk about that towards the end, because it's not relevant to this part of the conversation, so I'll get back to you, SpaceX Jedi, don't worry about it. Thank you for the question. Okay, if you're curious what LLVM IR looks like: this is an example of the text version of LLVM IR. You can see that the function's name is mangled, so it's add_one but with a bunch of other shenanigans on top of it, and we add a number and return it — very straightforward — and there's a whole bunch of attributes and other stuff. This is what we hand off to LLVM, and it does optimization passes there; there shouldn't be a lot of optimizing to do in this code, other than maybe inlining it into other code we've written somewhere else — all that kind of thing. Okay, the last thing I need to mention on the compiler architecture, before we go back to how this works with editions — I hope you'll forgive my little compiler tutorial here — is that I said before that we are a multi-pass compiler, and that's true, but we're also working on making rustc query-based, and a lot of the compiler is already query-based. What that means is: instead of these kinds of passes where you take the whole source code and turn it into one AST, take the whole AST and turn it into HIR, and so on, continuing down that strategy, rustc instead uses this concept called a query to create an executable. What happens instead is the compiler will ask something like: hey, what type is this function? What's the body of this function? And then the compiler operates in that fashion — instead of it being "here's the source code of your program, turn it all into the next form in one go."
The compiler will say, give me this function — where is this function? — and then the internals of the compiler will say: oh, we don't have this function yet, but let's load up the source code where we think it is, look at the body, do all that work, and figure it out that way. So it does all the same steps, but in smaller chunks instead. The reason this is useful is, first of all, memoization: you're able to reuse the results of these queries across different invocations of things, and that's helpful for compiler speed. But more importantly, it fits incremental compilation much better. You're able to say, oh, the body of this one function changed — that maps directly to one compiler query, "get me the body of this function" — and so you only redo that part of the work, instead of saying: okay, you changed the body of this function, now we have to redo the whole AST and the whole high-level IR and the whole mid-level IR. This is the way production-grade compilers are written today, rather than the way they're taught in compiler classes. This was started by C# with their Roslyn compiler, so if you're more interested in how this works, you should look into Roslyn — or rustc, as we continue to make it happen this way. All the MIR stuff is written like this, in my understanding; some of the older code is not yet converted, but this is roughly how it works. Also, if you've been following rust-analyzer lately: it has this same sort of highly incremental model, not the traditional architecture compilers have. So that's how all that goes. Okay, so editions aren't allowed to break everything — but what does that mean? For the compiler, basically, editions aren't allowed to differ by the time you get to MIR. What that means is a little fuzzy from the outside, but essentially the core of Rust — MIR — stays the same no matter what, and that really matters for a number of different reasons. The first is that, because MIR becomes a common language for all the editions, it's much easier for the compiler team to manage changes that are brought on by editions: all of the differences between editions happen at earlier stages than MIR. What that means is we can assume, going forward, that MIR is relatively stable — obviously the interface changes over time, but we don't have to switch on the edition by the time we get to that step, so everything is sorted out earlier. And this happens to make interoperating between editions easier, because your 2015 code and my 2018 code will both compile into the same MIR at the end. That's how we guarantee interoperability: they're both speaking the same language at the bottom of it all. This is the primary mechanism by which things become interoperable, and it's also the way we can control the amount of breaking change: if you need something that would change things very fundamentally, then we can't do it, because we need to keep this interoperability layer.
On a human side, not being able to break things in MIR means that things can change per edition, but not that much, because the core understanding of Rust and what it is is going to be the same no matter what the high-level details are. And so that's really important. It also means, because we're compiling down to the same thing, we get this interoperability across the ecosystem, and that really matters. Hey, Jared has another question: would adding more query-based compiler features to rustc make rust-analyzer and other sorts of analysis tools more robust? Yeah, basically — that's exactly the reason that C# undertook the Roslyn project: IDEs have very different needs for languages than traditional compilers do, and so they reoriented around what happens in an IDE. That's exactly why I phrased it as, oh, I changed the body of this function, let's not recompile the entire world — because that's what happens when you're actually in your editor programming: as you change little bits, most of the program stays the same, so we want to reuse that work rather than throwing it all away every single time. So yeah, it's definitely one of the reasons those tools are better: they want this kind of architecture. Okay. And then another thing about what editions are, towards the end here: you can think of editions as a bigger release cycle. Rust already has three different release channels — stable, beta, and nightly — with different cadences: nightly is every night, stable and beta happen every six weeks. Editions are like a bigger release cycle that happens on a broader scale than versions. But we don't have a cadence for editions yet — that's kind of why this talk exists. We're working on an RFC that I'll get to here at the end, but basically, we didn't actually decide whether or not this would happen on a schedule when we decided to do it in 2018. There's no policy in the initial RFC about when editions should be used, as I just mentioned: it didn't talk about when, it mostly talked about how. That way we could focus on shipping Rust 2018 and not worry about it until later, because there was a lot going on. We didn't want to set that policy immediately: we wanted some experience with editions, we wanted to get 2018 going, and we didn't want to think about it yet, basically. Well, that's been a couple of years, so it's time to start thinking about it. So, 2018 was sort of the first edition, but we kind of retconned it to be the second one, with 1.0 becoming the 2015 edition — the very first edition, so to speak. 2018 was the first one that actually changed things, and what I want to talk about next is a little look back at how that happened, because I think it really informs what we should do in the future. This is also, again, why we didn't pick a policy back then: we wanted to be able to see how it went and then think about the problem later, rather than trying to invent it beforehand. So let's talk a little bit about Rust 2018 and how it went. I think overall Rust 2018 was a success. We achieved our goals, even though it was a ton of work — we did ship the edition, it did happen.
People managed to understand that this was different from a Rust 2.0, and they didn't run away thinking we had totally destroyed our stability guarantees. Obviously there were some people who were not happy that we made any sort of changes, but you can't please everyone all the time. The fact that it was a different kind of mechanism helped people understand what we were trying to accomplish. And I would say there aren't any real major issues with the edition system itself at this point — there are some tweaks and things, but the actual rollout of the implementation went smoothly for the most part. It was a lot of work. It was a really big project for all the teams; we had never really undertaken such a big project before, other than maybe Rust 1.0, which is, again, sort of kind of like an edition. We managed to do it, and that's positive, and I think that's worth celebrating. However, Rust 2018 was also not a complete success, in my opinion. There are two different ways in which I think it really struggled: the first was the schedule and the second was the team. We didn't ship everything we wanted to ship in the 2018 edition. Some things didn't actually get finished; some things changed significantly, and scope had to be cut drastically in order to make it in before the release. So while we did ship the mechanism and it was successful, it just barely happened. The release — I'm going to talk about schedules in a second — happened in December of 2018, and there wasn't another Rust release after it that year; it happened literally at the last moment. That, too, is a sign that things weren't as ideal as they could have been. Secondly, there was the human cost of the edition — the team. Tons of people put in tons of work for a really long time to make this happen, and it was extremely high stakes, because it was a really big thing and it was the first time we'd ever done it. That contributed to a lot of burnout amongst contributors, I believe. I can only truly speak for myself: I was a total freaking mess by the time the 2018 edition actually happened, and I wasn't even the one implementing a lot of this stuff — I was just trying to keep the book going and do some other work. Other people worked a lot longer and harder than I did, and I felt terrible, honestly, so I can only imagine how they felt about the schedule. So we did it — but at what cost? And so, yeah, the 2018 edition shipped on December 6th, 2018, with Rust 1.31. What's kind of funny about the half-shipping thing, too, is that some actual changes didn't ship until 1.32, but the edition itself shipped in 1.31. We shipped a bunch of different things — this is a screenshot from the blog post. Non-lexical lifetimes happened in 2018. The module system changes — it got simpler — in 2018. We did some more elision stuff. Const functions became a thing. Things like rustfix got added, a whole bunch of lints, and Clippy was able to do its job on stable, and that was a big deal. Documentation got updated. We had the new domain working groups and a new website. Tons of things happened in the standard library, lots of new Cargo features — there was just all sorts of stuff. 2018 was huge; it was a really big deal. We shipped a lot of it, and that deserves celebrating. But as I said, it was behind schedule. The initial RFC had sort of this schedule.
There's this idea of a preview period, which you can think of as the edition being unstable. The idea was: okay, Rust 1.23 is going to start shipping a preview of the edition, and then in 1.27 we're going to nail everything down and actually ship it — that'll be when it comes out. To put some dates on that, 1.23 was January 4th, 2018, and the final 1.27 release was going to be June 21st, 2018 — so, halfway through the year. And if you were paying attention a moment ago, that's very different from what actually happened. What actually happened was that the changes landed in the nightly compiler on the 6th of February, almost one month later than initially planned, and then the actual release was in December, which is six months — June to December, something like that — a long time later. We kept thinking it was going to be in October, and then that slipped, and then November, and that slipped, and then December, and we finally got it out the door. So we almost missed the date itself, which is intense. I think this happened because we tried to do too much. Partially that was because there was a lot of work to do: Rust 1.0 was a really small release, and there were a lot of things that really needed to be tweaked. We had this opportunity to do it, so we committed to it and we made it happen. I think another part of the problem was that with some of these ideas, we tried to move too far in front of the community. For example, some of the things around the anonymous lifetime were added, and I'm not even sure people know about the anonymous lifetime or use it, exactly, because it got lost in the shuffle a little bit. We had some ideas about some patterns, and they got scaled back, and then some things got cut or barely shipped. We tried to move too far ahead and think about what things should happen, rather than taking what had already happened and getting the good parts of it — there's always a balance to be had there, and I think that added some stress. The module system was another example of this, where we kind of invented what we wanted, and then there were a lot of changes and a lot of feedback, and it was really, really difficult to get that through. Finally, the other reason I think we bit off a little more than we could chew: it's really great that we have so many contributors to Rust, and we did at that time, but contributors are not employees, and it's much harder to plan a big initiative that spans a whole year when you can't be guaranteed that people have full-time amounts of work to put into it. Just from a project management perspective, people are free to come and go as they please on the Rust project, but that also means that when you're trying to get a huge thing shipped on a tight deadline, it's difficult to rely on people having the ability to do stuff. And even if you are an employee — like I said, I got burned out; I didn't want to do this anymore, to some degree. So it can just be really difficult with these kinds of big initiatives in an open source world; I don't think anyone has figured out how to really accomplish this yet. Another positive thing: we proved that the mechanism actually worked. 2015 and 2018 interoperate just fine. The plan was good, it happened, and you don't have to worry about this thing.
We can have our cake and eat it too, which is the thing we always try to do in the Rust world. We didn't split the ecosystem: there isn't a holdout of people using 2015 who are sequestered from the rest of the community, and that's really important. There are still people who program in the 2015 edition today, and they can do everything they need to do, and it works just fine. So that's, I think, a really important thing — we've kept some coherence. And I think it works so well because it's pretty much silent: I don't think most users think about it, which is part of why I wanted to start this talk by laying out all the details, because it just kind of works. For most people, you don't normally have to think about it, and that's a testament to how good the thing we shipped was. But as a downside, we underestimated the cost of the edition, both on the human element and also on our users, who needed to make these changes happen. Even though we put a lot of time and effort into making upgrading easy and simple, it wasn't actually so in the end, just because there are a lot of moving pieces. We had rustfix, which was able to automatically upgrade your code, but it wasn't perfect, and there were some things it couldn't necessarily update — some of the fancier features — and people still had to validate and test and do those kinds of things, and so it just takes a lot of time. The compiler itself did not shift to Rust 2018 for a little while, nor did other big projects, and I think that's also okay: expecting everyone to upgrade immediately is kind of an unrealistic expectation. I don't think we totally had that expectation, but there were definitely some people — in the community or on the teams — who thought that should happen, because they worked on smaller projects. But some production users reported that it took them a while to upgrade to 2018, and it was a significant, costly kind of upgrade. Even if it's easy to do, it still takes time, and time is money to companies, and companies are the ones that have the biggest Rust projects, because they're paying people to work on things full time. So we put a big burden on a lot of our biggest users, and that's tough; I think that's a challenge. I think another thing that happened with the 2018 edition is that it became feature-driven instead of time-boxed. Normal Rust releases are time-boxed: there's a schedule, and the release happens — the train leaves the station — regardless of whether you make it or not. But Rust 2018 sort of became designed around features: we said, hey, what features do we want to happen, and then we figured out how to make them happen on the schedule. When they took longer to implement than we thought, that became kind of an issue. A lot of the struggle was: we need to get these features out, and there's this deadline, and we have to make it happen in time. That doesn't happen with normal Rust releases, because there's always a new train coming in six weeks. Part of the reason we picked the train model in the first place is that we knew having a yearly or longer release schedule was a problem — and then with the edition system we decided to do that anyway. So I think that was a big issue.
It kind of felt like the lead-up to Rust 1.0, where there was this big giant release and everybody worked super hard. It was a huge amount of hard work and herculean effort by folks, and therefore some burnout happened. I think a big problem with 2018 was getting too caught up in "it means this set of features, and we're going to ship them on this day." A great example of how this didn't have to happen, and was very successful, is async await. We realized that async await was not going to make it into 2018, so we said: hey, we're going to reserve the keyword so that we have it in the edition, but we're not actually going to release the feature. What that meant was that December 2018 was when the edition shipped, but we actually shipped async await in 2019 instead, almost a year later. I think that worked out really well, and it's a great model for how this should work in the future: we don't force everything to happen, and we don't set the schedule of the edition on a feature basis. We say, hey, the edition is going to be on this cadence, and we're going to ship the feature when it's ready. So that's important. This is blending into the last part of my talk here: what should we be doing? This is my opinion on where we should go from here. Basically, we should have a Rust 2021 edition. I know that for every talk with a question in the title the answer is supposed to be no, but the answer is yes — gotcha, Betteridge's law of headlines. I think we should have a 2021 edition, we should commit to a train model for editions, and we should have one every three years, no matter what. I think it should be smaller than Rust 2018 was, for a number of reasons. First of all, I don't think we have as much need as we did in 2018; secondly, I think it will just go smoother if we don't do as much; and I also think a lot of people are craving stability in the Rust world, so I think that's a very positive thing. So, a smaller edition — much smaller than 2018 — but I still think we should do it anyway. There's a question from Emmanuel Lima in the chat: does Mozilla hire engineers to work on rustc — that should help to manage burnout? Well, Mozilla does pay some people, but it's a very small amount of the overall team, and there's no indication that Mozilla is going to hire ten more people, which is what we would need. So while they could in theory do that, I don't think Mozilla is going to. And I don't think that just hiring people is always the right solution either, because it depends. We could hire from the community, and that'd be great because they already know what's going on, but that doesn't necessarily mean it's the right call, because not everybody wants to do Rust as their job, and it's complicated, basically. It's true that paid work helps diminish burnout, but on some level work can also force you into burnout, because when it's your job, you have to do it even if you don't want to. Deadlines are somehow weirdly more stressful with jobs than they are with open source projects, I think. And like I said, I was a Mozilla employee at the time of the 2018 edition, and I got burned out anyway. So I don't think it's a pure solution — I do think it can help in some cases. Yeah.
Okay, so release trains. The core of my argument for the 2021 edition is: release trains are good, and editions are kind of releases. So we should have a train, and that should just be the end of it. We should no longer pick one-off, date-based releases; we should just always do trains, because they're a far better way to ship software. If six weeks go by without a new feature landing in the regular codebase, we still put out a new Rust release anyway. Some releases are small and some are big; some editions will be small and some will be big. That's just the way it goes. So I think that if three years go by and we don't need any big breaking changes, that's totally fine — we should still release a new edition. Sprutellel asked a question in the chat that I'm going to be getting to very shortly, so stay tuned. Basically, I think the train model has been wonderful, and I think we should do it with editions too. There are a number of reasons and I'm not going to get into all of them, but the shortest version is: it's a model that works, and it's a model we've demonstrated is feasible. So I think we should just do the same thing, but on a longer schedule. Some people argue that the social part of editions shouldn't matter, and that we should only make editions happen when we have a feature need. But I think that's wrong. Some of this is just the classic arguments back and forth that happened the first time around: looking over the past three years matters, getting people outside of Rust to pay attention matters — and all that stuff still applies. I think smaller editions are actually a nicer marketing message — in some ways more important than a big breaking-change release — because it's like: hey, it's been three years with Rust, and we don't feel the need for any big breaking changes, because we think the language is good enough. That is a very strong and powerful marketing message, one that makes people want to check things out. People are still leaving comments on the internet all the time like, "I don't know that Rust is actually stable, because of the six-week cadence" — people don't necessarily know how much time and effort we put into making sure those releases are stable, and every six weeks seems fast. So people latch onto the idea of, oh, I only have to deal with Rust every three years, and that feels better to a large number of people. I think that really matters. So, yeah, there's another question in Twitch I will get to in one second. Okay, Spoodle, here is your question on a slide: what features should be in Rust 2021? I actually don't care, personally. There are some features I want, obviously, but I think we should do it regardless, and I don't think we need a specific feature to justify doing an edition. In fact, I kind of think the argument is stronger purely on the release engineering and marketing angles, and not on the feature-based things. Part of this is also because consistency in scheduling overall is key. If something misses a train, it can get on the next train, and that's fine. But if we release editions based on features, then every time we have a new set of features, we have to re-litigate this entire conversation — and we already did it first with the 2018 edition, and we're doing it now to set up this policy.
And I would like to not have the Rust teams arguing every three years about whether or not we should be releasing an edition — or arguing even more often than that, because if we do it on a feature basis, then it happens at random kinds of times. If we just say, hey, this is the mechanism, it happens on this schedule, period, then we can spend our time actually working on making Rust a better language rather than worrying about this policy. And so I think that's actually the most important thing. I will answer that question, Fred go, afterwards. Why a three-year cadence? It's just a nice length of time. C++ has had a three-year cadence as of late, and yearly is too often, because there's not enough stuff accumulating to make it a big enough deal every year. Five years would be far too long, in my mind. So I think three is a nice compromise, and it's similar to C++. Specifically, I bring C++ up because C++0x, I think, is a cautionary tale that we didn't really learn enough from for the 2018 edition — it feels very similar to the 2018 edition to me. The first draft of that C++ standard was in 2008; C++03 was the previous version of the standard. Bjarne and some other people were hoping this would be C++08 or 09, so they named it C++0x. The problem was that they ran it on a feature basis, and it took them so many years to get through all the details that it ended up shipping in 2011. People used to joke that it was becoming C++0A, because with the 08-or-09 naming scheme it had slipped to '10 — so it should have been C++10, but they had already said 0x. This demonstrates some of that kind of conflict. And that's when they decided to move to the train model for C++ releases — we're releasing every three years — because this was just not great. I think we had a similar kind of learning from 2018, so I think that matters. I'm going to answer some more of these questions afterwards, because I'm almost done with my slides, and then we can get into the details. So specifically — and this is kind of the second part of your question, Brutalel, from earlier — there was this comment on the internals forum recently: hey, what about the roadmap for the year, what things are we actually going to do in 2021? Even though I don't think it matters, I should talk a little bit about what I see happening. This is the text from the original RFC — I'm not going to read all of it, but if you want to go read it, you can. What we talked about is when we prepare for an edition: the goal should be that any changes we make for 2021 are completed by October of 2020, so we should know what's going on. It's the end of July right now, so that gives us a couple of months, but we should have the plan in by then. And also, as we said, we have not decided whether we're doing 2021 or not, so we should also decide that we're going to do it. Like I said, this is my opinion, but technically the project has not decided, so that's also the work to do immediately — and part of deciding to do it is deciding what should be in it, too. There were some things in the RFC about what might happen: we talked about how error handling might be a good thing to address because it's been a big topic lately, improvements to the trait system, or improvements around safe code.
But the goal is to figure out what that is, and so we should have some specifics on exact features soon-ish. The language team has had a few discussions on this, and as far as I know the goal is still to have a specific plan about the edition in October. But as I said, we still have to decide if we even want to do an edition. My audio cut out. Okay. This is not formally done yet, so we also need to plan that. And I think that even the stuff the language team is talking about is nothing like 2018; the scale is very, very small overall. They're smaller details about things. As an example of one thing that's been talked about, and I don't know if this will happen or not, or what the chances even are: there's been some discussion about whether unsafe functions should be able to call unsafe code in their bodies without an unsafe block, or whether you should need an unsafe block inside them, and which way around it should be. There are things like that, really tiny edge kinds of things, and nothing like the module system being redone. So I think anything that happens in 2021 will be important and will matter, but it will be relatively small changes, not like what we saw happen in 2018. There is going to be an RFC on this question of policy. Niko and I were supposed to be working on this, but honestly, I got busy with my new job and Niko did most of the work. So we should have something published relatively soon to answer this question, and then we can have this discussion as a community overall, because while the teams do get to decide what they decide, we want feedback from everyone to talk about all the details. So you can expect to see an RFC in the near-ish future talking about the policy of whether we're having an edition or not, and then also, if we decide we're having one, the language team would decide what actually goes into the edition. That's all going to come over the next few months. So thank you for listening to me talk about that for an hour. That's the end of my presentation, and now there are some questions that were in the chat that I definitely want to cover, so I'm going to go from the start to the end of the ones I did not answer already. Okay, there was a question from someone whose name has a lot of numbers and I don't know how to pronounce it, I'm sorry: does it seem reasonable to you to make an edition a snapshot of already-shipped features, so keeping features opt-in until the next edition while still shipping them stable as often as we do now? I think that's nice in some ways, and I think that doesn't always work in some other ways. What I would like to see is that for features that need an edition break, you're able to opt into the edition on nightly. It's kind of like that preview idea from before: the feature exists and we're able to try it out on nightly, but it's only about testing the edition-related parts rather than tweaking all the details. And wherever possible we shouldn't use the edition mechanism at all; making things a breaking change should happen only if it has to. We talk about the async keyword being a breaking change, but the union keyword, for example, was a contextual keyword and we didn't need to do that in an edition, so we didn't. We could have, and maybe it would have been simpler.
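To make the unsafe-function discussion above concrete, here is a minimal sketch of the two options, as I understand the proposal; the function names are mine, and I believe the behaviour is controlled today by an allow-by-default lint called unsafe_op_in_unsafe_fn:

    // Today, the body of an `unsafe fn` acts as one big unsafe block,
    // so this compiles without any inner `unsafe { .. }`:
    unsafe fn read_first(ptr: *const u8) -> u8 {
        *ptr
    }

    // The change being discussed would ask for an explicit block, so the
    // dangerous operation is visible and can carry its own justification:
    unsafe fn read_first_explicit(ptr: *const u8) -> u8 {
        // SAFETY: the caller promises `ptr` is non-null, aligned and
        // points to initialised memory.
        unsafe { *ptr }
    }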
Coming back to that: where we can ship something on all editions, I think we should, and we should only use the edition mechanism when we actually need to. That's the thing that matters. Okay. So, Fredgo asked a question about why we don't have function overloading: is this a historical situation or some design background? It's not exactly related to my talk, other than that in theory this is a big feature that would make sense for an edition, I guess. One simple answer for why Rust doesn't have function overloading is that no one has ever written an RFC to suggest adding it. Some people have written pre-RFCs and some people have done some of the work, but at its core, most features Rust doesn't have don't exist because people didn't do the design work. Now, this is a little controversial with this specific feature, because like I said, some people have gotten up there, and I think there's actually an open RFC right now, so maybe what I want to say is that there's no accepted RFC. But I think function overloading specifically is interesting because Rust does have function overloading today in terms of traits, sort of. It's not the same as writing a function with several different signatures and having the compiler figure out which one applies, but you can get that effect through traits, sort of, kind of. And that is, I think, enough for most people that the idea of regular overloading feels like a bit too much. I think this is also hard because function overloading, optional arguments and named arguments are three features that are technically separate, but as a language designer you kind of want to consider them holistically, and that's a really big design space. There are also people interested in the other two features as well. In my opinion, and I'm not on the lang team, so again this is my personal opinion: these are a large set of big features that I don't actually think buy you that much, personally, so I don't really want to see them added to Rust. But that doesn't mean it won't happen; like I said, I'm not on the relevant team, so my opinion is just contributing as a community member like anyone else's. But I think it's a huge design space, I think most of the benefit is already there, and so I personally don't think it pulls its weight. One thing I will say on the named-arguments front, though, is that with Rust Analyzer displaying the names of arguments inline in the editor, I have found seeing them more useful than I would have thought. It's made me feel a little better about named arguments than I would have otherwise, but I think Rust Analyzer doing that gives you all the benefit without changing the language. Putting it in the language seems like a lot of work and doesn't really get you a ton of benefit, for me personally. I think the core challenge is that it's really hard to put all three of those features together and design them all at once; it's a huge space and it's very difficult. So I don't blame anyone that cares about this feature and has put in some of the work, and I don't blame anyone that hasn't made it happen, because it's a really big topic and a really big area. It's really hard. Okay, so FloorB says they're in the "only make editions happen when there's a need" camp: a big part of that is that the social and technical aspects are conflated; would it make sense to separate them more?
The edition is a social event; the edition in Cargo.toml is the technical part. I do think that this is a coherent point of view, but I just personally disagree. Part of that is because you need the social side to advertise the technical side; they're inherently intertwined. And I know as programmers we really like to separate things out and we want everything to be in its own little box. But specifically with big, major changes to the language, ones that are breaking and that you're going to have to learn about, there has to be some sort of social component of letting people know those things exist. You could argue that we should celebrate what we've done on a different cadence than the release of the actual breaking changes, but that just means we have two major points where we need a bunch of communication, and I think it all just works better by leaning into the fact that they're actually not separate. I do know that there are some people who feel very strongly that I'm wrong on this, and that's also totally fine. Just for me personally, I don't think it's really possible to unwind them, and it's actually a good thing that they're together, because you need to be able to communicate those things and make it work out. So, I don't know. Okay. Jan one Garner asks: should the six-week-cycle changes also be listed in the 2021 announcement, to be closer to other language releases? So, what we did specifically last time: the blog post whose title I showed included the six-week changes and the edition changes all together. You could argue that maybe they should be two separate posts. What we did last time was we blogged before the edition actually happened, to talk about what was going to happen, and then the release post was more about describing the changes in detail, so that's a way to split it up a little bit. To some degree, I think that when you start to separate things out too much, you get into the exact problem we were talking about in the last question: I think they need to be near each other, because you want to communicate them at the same time. So I do think including them all in one thing, or at least within, say, a month of each other, makes sense. But yeah, I hope that answers your question. Jack asks, to be clear: would it be another three years till it's available, or just available in the next rustc? Basically, I think that we should be a little bit flexible about the exact time when we stabilize the edition. I don't think we should commit to a specific release, because I think that's what introduces a lot of the problems; it should just be some time in that year. This is part of why the lang team has to have its stuff planned by October of this year: because we want to be able to release it sometime next year. If we have the stuff done in 2020, then there's a whole year's worth of time for the schedule to happen, rather than trying to say, okay, we're going to start thinking about this at the beginning of the year and ship it by the end of the year. Being ready in advance helps alleviate that schedule pain. Excuse me. Okay, also, oops. Oh, talking too long. Also, wow, okay, guess I just totally lost my voice. Sorry. All right.
So also, I think that having the time before things are released matters, because putting something with a breaking change into an edition, as I said, impacts people a lot, so we should do it very carefully. I think one of the problems of 2018, too, was that we had these grand plans and we threw them all in together at the last minute. In some sense we should be more deliberate about what goes into the edition, and therefore we should have to plan it earlier in advance, because that means we're only willing to do it for things that really, really matter. So I also think things should be planned ahead of time because of that. So yeah, it should land in Rust at some point during that edition year, but I don't think the specifics matter that much. Another comment on that question was (sorry, audio, there we go): what does it mean if something misses the train? Say we had a GC, which is not a thing that will happen, as you say, but it missed the train: would it be another three years? Yeah, it would have to be another three years, but what missing the train would mean is that only the stuff that needs to be backwards-incompatible would miss the release. So imagine we did not make async a keyword in 2018: it would have had to become a keyword in 2021, and that would mean async/await couldn't have shipped yet, because the keyword wouldn't have been reserved. The feature implementation missing the train is totally fine. That's what happened with async, right? The breaking change made it in, but the implementation didn't happen until later. Great, I think that's totally fine. And in fact I think that's the model that should happen: for the breaking stuff that's not ready yet, we don't worry about making sure the implementation lands exactly when the edition ships. Some of this, I think, is because it also takes time for the ecosystem to catch up. For example, with async: even if async/await had landed on the day of Rust 2018, the ecosystem would not have been ready yet. For most people, it's already been almost a year since async/await happened, time flies, and the ecosystem is really only now finally getting into a production-ready state. Async/await wasn't really a 2018 feature; it was really a 2021 feature. So I think a lot of the edition work should really be looking back over the last three years rather than looking forward to the new stuff. In some sense, the stuff that we reserve in this edition gets used by folks and only really becomes part of the Rust world in the future. There's some weird interplay there, and that's also why it's kind of hard to do these bigger, macro-level changes and think about these things. So I hope that answers that question. Okay. Angela Feverett asks: have there been advances towards having a unified async runtime, instead of having conflicting runtimes like Tokio and Actix? So, yes and no. I think there are two things tied up here. One is that I don't think it's possible to have a unified runtime for everything, because Rust is used in too many places. For example, I'm a big fan of both of those projects, but I also do embedded Rust work now, and there's some stuff there that maybe would be good as asynchronous code. I'm not going to be running Tokio on my embedded device.
So I would need an embedded executor that makes very different tradeoffs than Tokio. Tokio is built for network services specifically, so it has made several tradeoffs that make lots of sense for those things, but it wouldn't make sense for my microcontroller. By the same token, if I was building a web service, I wouldn't want to use my microcontroller's async/await implementation, because it makes the wrong tradeoffs for me. So I think that, for what Rust is trying to accomplish, it's not possible to have a single unified runtime. That said, it should be possible to make libraries runtime-agnostic, so it doesn't matter which runtime I'm using, and that's kind of the open thing that's still being worked on. I don't think this is the kind of thing that happens from a big top-down perspective; I think it just takes time in the ecosystem for folks to hash out what they need and what's going on, and we compare experiences and consolidate where possible. There hasn't been any big news on that front lately, but all the people involved are still working on it, and I think everyone cares about making things agnostic, at least conceptually, because nobody wants things to be locked into one specific area. It's just really tricky. A lot of libraries are agnostic today, but not as many as there could be, and there's still more work to do there for sure. All right, another question, from League League 11: is there any intent to sync the edition cycle with major distros' and/or Windows' major release cycles, because that might be useful for, you know, CentOS, Debian, etc.? Not really, because the problem is that all of those things have different cycles and you can't really line up with all of them; you're going to leave some people out no matter what. So really we kind of just have to pick our own schedule and be done with it, and that's sort of unfortunate. I definitely agree with what you're getting at, that would be really nice, but it's just not really possible, basically. And yeah, I am doing great now, thank you; it took a while to get out of burnout. So, best question time. There's only 15 minutes left and I've almost lost my voice, as I said, so there are still questions I've not answered; I apologize for not getting to them, but we pretty much don't have a ton of time. So I'm going to now pick somebody who asked a great question to raffle off the book, and we're going to leave it at that. Let me think here. Let's go with FloorB's question. I think that it was big, it was on topic, and it was against what I'm saying, which is always fun, asked in a nice way, and I think that matters. So FloorB on Twitch, you're getting picked as my best question, although I do appreciate all of them, and I believe that means you get a copy of Rust in Action, if I remember the details right. So that's cool. The last couple of questions I didn't really get into, sorry about that, but if you want to email me, I'm happy to talk over email or whatever else. So thank you so much for listening, everybody. My voice is dying, so I gotta go.
|
In 2018, Rust adopted an "edition" system. This lets Rust evolve in ways that feel like breaking changes but are opt-in only, and that do not disturb the open-source ecosystem. Given that Rust 2018 happened three years after the initial 2015 release of Rust, this has everyone wondering: is 2021 the year we have our next edition? In this talk, Steve lays out his own feelings on this question, as well as talks about the history of the edition system, how it works, and what it might look like in 2021.
|
10.5446/52206 (DOI)
|
Hello, good morning, good evening, good afternoon. I'm speaking to you from before dawn in New Zealand and I am absolutely delighted to be part of Rusty Days. I think it's fantastic that the Rust Warsaw team has been able to take what has been quite a negative pandemic and turn it into a global event, which is perfect. I should begin. So, just to reiterate, and I'm sure this will come through on the stream: I'm more than happy to take questions via any channel, via Twitch or YouTube or I think there's a third channel also. Just ask questions online and I will try to answer them as we go; the team is actually sitting behind me monitoring all of those streams, so that's great. Okay, where should we go? We'll start with moving something. Why is my screen not... there we are. Introduction. Who am I? I tweet about Rust at timClicks. I spend a whole bunch of time wasting time about Rust on Reddit. I do live coding on Twitch. I make videos on YouTube. I write books; I've actually written this thing called Rust in Action. One of the reasons why I do that is that I've kind of taken it upon myself to shorten everyone's learning journey by 100 hours. Now, if you have just decided to come to this talk because you want to know how to apply unsafe to your own project, I've decided to just give you the answer straight away, so you don't have to watch a whole hour of talking. Firstly, at the top of your crate, add an annotation, which is deny unsafe_code. This will, as we'll see in some of the projects later, prevent the compiler from allowing you to use unsafe unless you have been very, very explicit and opted back in. The other one is that before or within an unsafe block, you need to explain why it is that this code is safe. The compiler is no longer working for you, and so you need to do the work of the compiler for any future Rust programmer, including yourself, who might come along and wonder why on earth this is safe. And so one thing which I would recommend when you're doing code review is to ask the author: do you understand this? Can you explain to me why it is safe? And if they cannot, then either the code needs to change or the comment needs to change. Our objective as Rust programmers, or one of the objectives, let's say, or the objective right now, is safety. And let's all remember while we're going through this process that other people make mistakes, right? We never make mistakes, but other people do. So how do we prevent their mistakes from infecting our code? We need to create a system of software engineering that makes it extremely hard for stressed, overworked, and maybe distracted individuals to do the wrong thing. We need to create that system as team leads, as engineering managers, as junior developers; we need to participate in a way that achieves that objective. So we want to learn about how other projects are managing risk. But first I would really like to take the time to talk about lemons and limes. Now, if you speak English as a second language, this may sound very strange, but this is the story about how the British Navy, Britain in particular, understood in the 18th century how to cure scurvy, and by the 20th century had completely forgotten it. In fact, the scientific advice at the start of the 20th century was so bad that it caused scurvy on the Antarctic expeditions, quite famously, and led to some horrendous tragedy. And so there were many reasons why this occurred.
And one of the main reasons, well, a contributing factor, was that the English word "lime" included lemons at the time the cure was found, which was: just drink lemon juice, sprinkle some lemon juice into water and drink that. And so I want to reiterate that if your code comments cannot be understood by your audience, then they need to change. It isn't the words themselves that are important; it's the meaning behind them. People who are reading your code need to understand why the code that you have written is safe. You need to do the work of the compiler for them. Just a warning: disallowing unsafe is actually insufficient to guarantee that nothing will go wrong. Unfortunately Rust still has some... well, not problems, just something to be aware of. It is actually possible to write code that is guaranteed to crash your program using only safe code. Now, this is a ridiculous code example; no one is ever going to wrap a vector of type T with another container like this. But maybe you're doing something and you don't realize that you've created a situation where two of your fields, say inside a struct, actually form an invariant together. This position variable here relies on an intimate relationship between storage and position that the compiler cannot guarantee. And so, by mistake, let's say I've written some code that breaks the link between the two fields. In this case, I can set position to something that is unreachable, and then on the next call, if someone calls get, this will break and it will crash the entire program. Now, obviously there are ways to get around this: I could have replaced this index notation with the get method, and that would return an Option. But this is completely safe code and it is a 100% guaranteed crash. Just before we get to the projects, I also thought I should explain a little bit around my methodology, if there was any methodology, and a little bit about the rationale for why I did this. I was really disappointed with the Rust community's response to the Actix-web unsafe debate, basically driving... if you've been around the Rust community a little bit, you'll be familiar with this, but if you're very new: one very famous example of the use of unsafe code was this project called Actix-web, where the developer happily used unsafe blocks, and then when people said, look, you can't do this, there's no requirement for it, eventually that person was driven out of the Rust community because of this kind of cultural difference. That made me think: what is the right way to do this? Because I think it was unfair to drive that person away. The two main aims are to understand what it is that professional companies, people that are paid lots of money to write very good software and do very hard things with Rust, actually do, and I also wanted to do some research to justify doing more research; I've got several ideas about how to do that. So, what do they do? Just to pause slightly: I'm doing qualitative research, not quantitative. What that means is I haven't hacked the compiler to do an analysis; I've used a lot of interpretation here. It's not an analysis of every Rust crate; it's looking very closely at ten.
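Roughly the shape of the crash-by-broken-invariant example described a moment ago; the names here are illustrative, not the actual slide code:

    // `position` must stay below `storage.len()`, but the compiler has no
    // way to know that the two fields are related.
    pub struct Cursor<T> {
        storage: Vec<T>,
        position: usize,
    }

    impl<T> Cursor<T> {
        pub fn new(storage: Vec<T>) -> Self {
            Cursor { storage, position: 0 }
        }

        // Safe code can break the invariant without any warning...
        pub fn seek(&mut self, position: usize) {
            self.position = position;
        }

        // ...and then this completely safe method panics, because indexing
        // checks bounds at runtime and aborts when they are violated.
        pub fn get(&self) -> &T {
            &self.storage[self.position]
        }
    }

    fn main() {
        let mut cursor = Cursor::new(vec![1, 2, 3]);
        cursor.seek(1_000_000); // breaks the link between the two fields
        cursor.get();           // panics: index out of bounds
    }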
Back to the methodology: the projects were basically a sample that I thought was interesting. I tried to create a sample of open source from the GNOME project all the way through to Amazon and Microsoft, these big, big companies as well. I was looking primarily at their documentation for new contributors, and then I would go in to look at their code, searching for unsafe inside their repositories. Now, I intentionally did not communicate with any of these projects that I was doing this; none of these projects are aware that they have been part of this research. Let's have a look. Now, Servo. Servo is Rust's foundational project; it was kind of the reason Rust was created, to build a parallel web browser. It's fascinating looking through the GitHub, because a lot of their documentation around how to write code was written in about 2013, which is quite old from a Rust point of view. One thing I found fascinating is the annotation they include on their crates, which is deny unsafe_code, and at the start of any function that uses unsafe within it, or any module that requires unsafe, contributors are required to allow it and opt in. So this, I think, is a really nice strategy for increasing the psychological barrier to including unsafe. Now, just to provide a demonstration (this is again another contrived example) of what happens inside Servo: you can imagine that they have modules, and the module itself has this annotation that denies unsafe. The deny attribute allows programmers to later annotate internal items with allow, so you're opting out of unsafe by default, but you can opt back in if you really need it. The reason why I think this is quite an interesting strategy is that it's harder to do mentally, and there's no way it would slip through code review. I think unsafe on its own is unlikely to pass code review unnoticed, but this ugly annotation syntax, there's no way that would get through. I was also curious as to what cargo-geiger does internally. cargo-geiger is a Cargo extension which inspects your own code and all of the code of your dependencies for usages of unsafe. But how do they handle it themselves? This, for me, was really, really interesting. They've gone further than deny unsafe: they've actually said forbid. Now, the forbid attribute does not allow you to annotate internal functions with allow; it just tells the compiler that unsafe is completely illegal. Future programmers in the project will only be able to include unsafe blocks if somehow the project decides to remove this annotation from the root of the crate. So effectively the way this looks in code is: we add forbid, and then the only way to compile a dangerous function is by commenting it out. We cannot opt in with allow; the compiler will refuse to compile the code. Again, looking into Rust's ecosystem, well, some of its long-standing utilities, I wanted to get a sense of whether the cultural norms around unsafe usage have changed. exa is a replacement for the ls command, which is a Unix utility, and it's one of the Rust community's oldest command-line utilities in public use. exa talks to the file system, and it does that via system calls. It doesn't need many.
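A minimal sketch of what those two crate-level policies look like in practice; this is illustrative, not Servo's or cargo-geiger's actual code:

    // Servo-style: unsafe code is denied crate-wide...
    #![deny(unsafe_code)]

    mod low_level {
        // ...and any function that genuinely needs it has to opt back in,
        // loudly, where reviewers will see it.
        #[allow(unsafe_code)]
        pub fn zeroed_u32() -> u32 {
            // SAFETY: all-zero bytes are a valid bit pattern for u32.
            unsafe { std::mem::zeroed() }
        }
    }

    // cargo-geiger-style: with `#![forbid(unsafe_code)]` at the crate root
    // instead, the `#[allow(unsafe_code)]` opt-out above would not compile
    // at all; the only way in is to remove the crate-level attribute.

    fn main() {
        println!("{}", low_level::zeroed_u32());
    }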
It does not use much unsafe at all, but for extended attributes it requires the listxattr family of system calls on Linux and macOS. So the strategy that they have developed is to wrap only the minimum of what they need: basically, for every single C function that they wish to call (and I'll explain this in a moment), this is the Rust code, and all they are doing is wrapping the C function. And so the strategy there is to put unsafe around the smallest element possible. The idea is, I assume, to make it very, very understandable what the purpose of the unsafe is. In this case, the reason why we need unsafe is because the Rust compiler cannot reason about what happens inside the operating system, and so it just requires us to trust that the operating system is going to be well behaved. Now, going back to that comment about needing to understand why this is safe: if you haven't used pointer syntax, this is probably confusing. First of all, we have a whole bunch of types that are not really used in everyday Rust code. If you are using libc, if you are using any FFI, you have probably seen them before, and if you have programmed in C, this probably makes some sense. We take a path, a pointer to what in Rust syntax would effectively be a Vec of u8. We have got a reference to a C string; a C string is like a Vec of u8 with a null byte at the end. And then we are creating a null pointer, which by C convention is used kind of how a Rust programmer would use an Option. Then a zero; I'm not sure what that does, I think that is a size. And we have an integer being used as flags; the way that works is that in C, by convention, each bit represents an on/off switch, and so that is what that is going to be used for. So bear in mind, it is important to think about whether or not your team is familiar with this kind of code. If it is, maybe it doesn't need comments. But if you have team members, or might have new team members, that are less familiar with this type of syntax, be verbose. So that is where that came from. Another example that I think is quite interesting is BLAKE3. BLAKE3 is this new cryptographic hash function that is supposed to be really, really fast and also very good. Oh, I think I have just received a... no, I haven't received a comment. Oh, by the way, if you're watching this live, do ask questions; I'm more than happy to receive them as we go through. (You have stopped using MD5, right? Just asking. It's no longer best practice.) So why is unsafe needed in this project? Well, BLAKE3 wants to make use of very high-performance functionality within the CPU, and that requires access to intrinsics: vector instructions that can operate on more than one element at a time. So they have used the minimal wrapping strategy as well. Now, this type is kind of like a vector, well, actually closer to an array, that is 256 bits wide and holds integers. And we call this kind of crazy thing in the middle: this is the function that's provided... sorry, I'm making no sense. This is the function that is provided by the compiler when you opt in to intrinsics. And from the rest of the Rust code, we only see add. A lot of the complexity is hidden from us, and I think this is a really, really interesting strategy.
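The kind of thin wrapper being described looks roughly like this; a sketch in the spirit of BLAKE3's SIMD code rather than a copy of it:

    #[cfg(target_arch = "x86_64")]
    use std::arch::x86_64::{__m256i, _mm256_add_epi32};

    // Callers must only invoke this on CPUs with AVX2, for example after
    // checking is_x86_feature_detected!("avx2"); that is what makes it unsafe.
    #[cfg(target_arch = "x86_64")]
    #[target_feature(enable = "avx2")]
    #[inline]
    unsafe fn add(a: __m256i, b: __m256i) -> __m256i {
        // Eight 32-bit additions in a single instruction; the rest of the
        // code only ever sees a function called `add`.
        _mm256_add_epi32(a, b)
    }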
Now, one thing that the authors of the crate have decided to spend extra attention on is that when we use pointers in unsafe blocks, that's especially dangerous and we need to be especially careful, and so they've made the extra effort to annotate those sections with code comments. Next I get to a project from Amazon, Firecracker, which is the foundation for the AWS Lambda and AWS Fargate services. So, first we'll start with a question: why do they need unsafe? Well, they're interacting with the hypervisor; they're basically building a virtual machine manager. They also use a lot of code from Chrome OS inside their own project, and so they've got a lot of generated code. And so their strategy, and I don't mean to blame them, is that it's automatically generated code that includes unsafe, and so there's just this comment there saying "this is automatically generated". That's their approach for a lot of the uses: just a comment, which doesn't tell me why it is safe. I wanted to call out their contributing markdown file; sorry, it hasn't rendered very nicely on the screen. The point I want to make here is that they have a big document which explains how to contribute, but it doesn't address unsafe at all. It talks about code comments and pull requests and unit tests, and that's all great, but this is essentially an operating-systems-level project and they don't mention unsafe anywhere in their guidelines. Windows... sorry, Microsoft, is developing a language projection, otherwise known as a Rust interface for the Windows Runtime, which I think is ridiculously exciting. It's amazing to see that Rust's usage across operating systems is first class, so thanks to Microsoft. Obviously they speak to Windows APIs, whether that's kernel32.dll or whichever interface they use. They need to trust the operating system, and so they have to use unsafe. So their strategy, again, is these minimal wrappers, done slightly differently. This is a slightly confusing method to read. What we're dealing with here is an array of type T that the Windows Runtime understands, and what we're trying to do in this impl block is create methods that allow you to create such objects. Now, there are several other methods as well, but I want to call out with_len, because that is where we have unsafe. So we're creating a new object here; this is kind of like Vec's with_capacity, but here we're creating an array with a length guaranteed to be, let's say, 1024 or something. We want to make sure first there's an assertion, which is good practice. And then this CoTaskMemAlloc is the call that we need to use, which is fine. But personally, and I mean, if you are a Windows programmer you probably understand what this means innately, it feels to me, as someone looking at the code fresh, that this unsafe block is doing quite a lot of work inside it. What we're doing is calling this part of the Windows API with our len, that's the number of elements in our array, and then we multiply that by the size in memory of our type, so you can kind of see, that's one function call. Then this returns something, and we coerce it to a pointer to T. So maybe if you are familiar with systems programming, this comes very naturally to you.
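For comparison, here is a generic sketch, using plain std rather than the Windows Runtime bindings, of the kind of annotated allocation with up-front length-times-size checking that is easier to review at a glance; the helper name is invented:

    use std::alloc::{alloc, handle_alloc_error, Layout};

    /// Hypothetical helper: allocates an uninitialised buffer of `len` u32s.
    /// The caller owns the pointer and must free it with `std::alloc::dealloc`
    /// using the same layout.
    fn allocate_u32_buffer(len: usize) -> *mut u32 {
        assert!(len > 0, "zero-sized allocations are not supported");
        // Layout::array rejects lengths whose byte size would overflow usize,
        // which is exactly the `len * size_of::<T>()` arithmetic to be careful of.
        let layout = Layout::array::<u32>(len).expect("buffer too large");
        // SAFETY: `layout` has a non-zero size because `len > 0` was asserted above.
        let ptr = unsafe { alloc(layout) };
        if ptr.is_null() {
            handle_alloc_error(layout);
        }
        ptr as *mut u32
    }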
Coming back to that block: I think that if something like this were to appear inside one of my projects, I would have expected some explanation of what is happening. But again, maybe if you are developing operating systems this stuff is so natural that you don't need it; I'm not sure. I do know as well that they provide annotations describing why the operation is safe. And so we're actually writing to the pointer, starting at zero, I assume. This is the data as a pointer, which we just created there; I assume we start at zero and we write len. I'm wondering, and I don't know enough about this, but interestingly we had len multiplied by the size earlier, and here we're using len; I wonder whether or not that's a bug. I'm not sure; I'd be interested to hear from anyone. Can I enlarge the code? Yes, yes, I can. Right, their README. What I really like is that the Microsoft team says that they want to create this runtime in 100% pure safe Rust, but they make the call-out that sometimes they have to go and talk to APIs that are implemented in C++. They make safety a first-class citizen in their readme, which I think is a really positive sign. Now, a project from GNOME, a project within GNOME, is rewriting librsvg, a C library, in Rust. This is an SVG renderer, and the question becomes: why do they need unsafe? Well, this crate, librsvg, talks to GLib. GLib is kind of the core of GNOME in some sense, and it is another C library. So again you can see this pattern of building out new functionality in Rust while needing to rely on pre-existing code that is written in unsafe languages. And also the Rust code itself exposes the same C API as the library that preceded it. One thing that I think is really positive in this project is that the entire culture is focused on questioning whether or not unsafe is a valid thing to do. This is a code comment from one of their code reviews inside a merge request, which is GitLab's version of a pull request, and the project lead there makes this really good point, which is: why are you using an unsafe construct? Because I think that we could use a safer option instead. Now, I found this after looking through their commits, and I think it's really positive to see that people have this idea in their heads that when we can avoid unsafe, we should do so. Because the compiler does not get tired; the compiler does not get distracted. The compiler can have bugs, I'm sure there are compiler bugs, but safe Rust is good Rust. We've seen the minimal wrapper strategy before, so I'll just pass over that one. I also wanted to look at the Rust standard library. They have provided explicit advice for code reviews: they require comments around each use of unsafe, and they have some tooling, a lint in their continuous integration builds, that checks that there is a comment. Obviously the lint probably can't read the code comments to check that they relate to the code block, but at least they're there, and there are humans checking these things as well. So I like the last sentence there: unsafe code actually needs to be okay. Don't put unsafe code in there that is actually unsafe; that is not a good thing to put inside the standard library.
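The convention being enforced looks something like this; a made-up helper, not code taken from the standard library itself:

    fn head(data: &[u8]) -> u8 {
        assert!(!data.is_empty(), "head() requires a non-empty slice");
        // SAFETY: the assert above guarantees that index 0 is in bounds, so
        // `get_unchecked(0)` cannot read past the end of the slice.
        unsafe { *data.get_unchecked(0) }
    }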
Another thing they call out inside their guidelines is that it's okay to ask for help. We see a mention here of the Unsafe Code Guidelines working group, and I'll explain what that is properly very soon, but there are experts inside the Rust project who know a lot about unsafe Rust, and so if you are unsure, it's okay to ask them for help. In fact, they make the point that everyone loves debating whether or not there is an unsoundness problem, so don't be worried; you're actually making someone's day by asking them to bring their reasoning and expertise to bear. Now, if you are a tiny project, this obviously does not apply as equally to you; you don't have a large team of collaborators and colleagues to call upon. My advice there would be to be cautious, and to build yourself up from things that you know rather than things that you don't; don't try to use constructs that you don't understand, is what I'm saying. One thing I think is really, really good is that the public documentation explains why things may panic, and if you read the documentation of std::ptr::read, you'll see that there are invariants described: the source that you're reading from must be valid for reads, and, by the way, the value must be initialised before you actually try to read from the pointer. Without those, the call is unsafe. And even if T has size zero, the pointer must be non-null, which is interesting. So I think that if you have an unsafe method or unsafe function, including a Safety section in its documentation is a very sound strategy. A further project, which is less well known, is this thing called toolshed. Toolshed is a memory allocator; it uses an arena strategy, or rather provides an arena for you, which basically means that to the operating system it looks like you've just asked for one large chunk of memory, and inside your program you can divide that up internally however you want. It's typically faster, but can be slightly less space-efficient; you might get some wasted space. Now, why do they need unsafe? Well, if you're dealing with blocks of memory, you probably need to deal with pointers, and that means you need unsafe. And so the strategy has been to push every usage of unsafe into one specific module, which happens to be arena.rs. Even though the API is completely safe, all of the unsafety is isolated within one module, and so people know that if they're touching that module, it's dangerous. Again, we're trying to create a situation where, as a team, we are building safe software, and we're mentally switched on when we go and interact with that module. And here is another project, also far less well known, which I think is really interesting: a new graph database being developed out of Ireland. Inside it, the storage engine is written in Rust, but the reasoning engine is written in Prolog. They need unsafe because they're interfacing with this Prolog implementation. And how have they done that? They have gone further than isolating at the module boundary, in some sense: they have actually created a third crate, which provides the wrapper and interface completely outside of their core storage crate. So the storage engine written in Rust, which deals with keeping data and persisting it to disk, has no knowledge of the Prolog implementation; it's completely independent.
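The Safety-section pattern just mentioned for std::ptr::read looks roughly like this when applied to your own unsafe function; the function here is hypothetical:

    /// Reads the value that `src` points to, without taking ownership of it.
    ///
    /// # Safety
    ///
    /// Callers must guarantee that:
    /// * `src` is non-null and properly aligned for `T`;
    /// * `src` points to a fully initialised value of type `T`;
    /// * nothing else mutates `*src` for the duration of the call.
    pub unsafe fn peek<T: Copy>(src: *const T) -> T {
        // SAFETY: the caller upholds the contract documented above, which
        // matches the contract std::ptr::read documents for itself.
        unsafe { std::ptr::read(src) }
    }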
Back to that graph database: if there was a problem inside that wrapper crate, it wouldn't infect the storage engine. They also have a very strong code-commenting practice within the team. It's a bit difficult to see on the slide, so allow me to explain what's happening here. Inside the code, when they store unsigned integers to disk, they compress them using a variable-length encoding scheme. There's a public encode method that allocates memory: we create a vector of the required size, and then we call an internal method, encode_unchecked. Even this wrapper isn't really unsafe itself; we're just calling an unsafe method, and there's already a safety comment saying: we know that we have created our Vec with the required length, and so therefore it is safe. And inside the unchecked method, encode_unchecked, there's even more commenting. This is the internal method, and basically everything that happens inside it is carefully indexed; possibly we don't need to say that about incrementing an index. But what I think this project gets right is that they assume people are looking at this code with very, very bleary, tired eyes. They even specify why they're using an index integer, which I've never seen before. It makes it very, very easy to understand what the code is doing, and very hard to get it wrong, in my opinion. Okay, we're coming up to one of the final projects that I want to touch on, and this is Fuchsia from Google. The Fuchsia kernel is not actually written in Rust; Rust is used in various system components. The kernel, I believe, is written in C++, though I might be wrong about that. Their team has code documentation that makes it very clear that if you're adding unsafe to the code, you need to ensure that it's safe; it's your responsibility. And it's essential that you identify any assumptions that are required by every unsafe block. You need to ensure that those assumptions are actually met, not just that you've identified them, but that they have actually been met, and that over time it remains possible for those assumptions to continue to be met. Now, this is a really interesting bullet point to add, because it means that the programmer today is responsible for thinking about how people might use this code in the future, and this becomes even more explicit soon. Oh, right. Sorry, I skipped ahead there. Oh well, we'll carry on. So one of the things that is very clear by now is that projects that do this well, that use unsafe properly, add comments to their unsafe blocks to explain why they are safe. And if you're inside the Fuchsia project, you are also required to add a comment explaining what assumptions are being made. Like previously, where we saw an assumption that the length being provided to a pointer read or write was sufficient; that seems relatively clear. And this documentation is available in public for every contributor to the project. Oh, sorry about the rendering of this slide. Where possible... so, this is a very good explanation of the minimal-wrappers strategy: where possible, package any unsafety into a single function or module.
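A sketch of the safe-wrapper-plus-unchecked-inner pattern described above; a plain varint encoder with invented names, not the project's actual code:

    /// Encodes `value` with seven payload bits per output byte.
    pub fn encode_vec(value: u64) -> Vec<u8> {
        let len = encoded_len(value);
        let mut buf = vec![0u8; len];
        // SAFETY: `buf` was just allocated with exactly `encoded_len(value)`
        // bytes, which is the number of bytes `encode_unchecked` writes.
        unsafe { encode_unchecked(value, &mut buf) };
        buf
    }

    fn encoded_len(mut value: u64) -> usize {
        let mut n = 1;
        while value >= 0x80 {
            value >>= 7;
            n += 1;
        }
        n
    }

    /// Writes the encoding of `value` into `buf`.
    ///
    /// # Safety
    /// `buf.len()` must be at least `encoded_len(value)`.
    unsafe fn encode_unchecked(mut value: u64, buf: &mut [u8]) {
        let mut i = 0; // write position within `buf`
        while value >= 0x80 {
            // SAFETY: the caller guarantees buf.len() >= encoded_len(value),
            // and `i` never exceeds encoded_len(value) - 1 inside this loop.
            unsafe { *buf.get_unchecked_mut(i) = (value as u8) | 0x80 };
            value >>= 7;
            i += 1;
        }
        // SAFETY: as above, the final byte is still within bounds.
        unsafe { *buf.get_unchecked_mut(i) = value as u8 };
    }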
And then, the guideline continues, document what needs to be the case before it is safe, how things can fail, and what happens if everything succeeds. This seems like a very, very sound strategy to me. The Fuchsia project has the most robust guidelines, in my opinion, for writing unsafe blocks of code safely, partially for this reason. I apologize for how this is rendered on the screen. There are a couple of sections here. There are three particular types that the documentation calls out as being particularly dangerous: *const, *mut, and UnsafeCell. These are pointer types, and they are specifically called out as things that need to be very heavily documented. The comment at the bottom is talking about memory aliasing: in Rust you can have shared, read-only references, that's the ampersand syntax, and you can have multiple of those, multiple readers, or you can have a single writer. And you need to explain, if you have used one of these unsafe types, that you have upheld the guarantee that is normally provided by the Rust compiler. Obviously. So now, these resources: you can't click on the links, but you can definitely look them up, and I will make sure that links are provided for anyone who would like them; I'm just thinking about the best way to get them out. I'd like to call out the Fuchsia OS team; Brian Anderson; and Ralf Jung, who has provided two fantastic articles, and in particular is the person who created that example of having two types that are each safe in and of themselves, but when they relate to each other and you break that connection, you can create unsoundness inside safe Rust. There's a really nice guide of Rust patterns within the rust-unofficial repository that talks about condensing, or containing, or isolating unsafety within small modules. And lastly, the Unsafe Code Guidelines working group's reference, which I want to see if I can bring up now. So, where are we? Inside the rust-lang organization is an unsafe-code-guidelines repository, and you may notice that this author, Ralf Jung, is the same person who wrote those two fantastic articles about unsafety in Rust. Now, this is a very good description of the things that you need to be aware of if you are writing unsafe code. It isn't quite as mature as I expected it to be, but I think it is developing, and if this is an area that you're interested in, I believe you should participate. The main output of the group is a reference document, and I'm sure patches are welcome from anyone in the community. And so that is, I think, the talk, actually. I'm more than happy to answer any questions that people have. But just a moment again to say thank you very much to the organizers of the conference; I am very privileged to be part of it, even from the other side of the world. I am more than happy to stay online and answer questions that come through. I'll try to post a link in the YouTube comments to where I have the slides and so forth. And yeah, ask away, ask away in any of the channels where you're watching the stream, ask in the comments, and those comments are being monitored.
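Before the questions, one last sketch of what documenting the assumptions behind a raw-pointer field can look like, in the spirit of those guidelines; this is an invented type, not Fuchsia code:

    /// A non-owning view into a buffer handed to us by foreign code.
    ///
    /// Invariants every method relies on:
    /// * `data` is non-null, aligned, and valid for reads of `len` bytes for
    ///   as long as this value exists;
    /// * nothing else writes through `data` during that time, so handing out
    ///   `&[u8]` views does not violate Rust's aliasing rules.
    pub struct ForeignBytes {
        data: *const u8,
        len: usize,
    }

    impl ForeignBytes {
        /// # Safety
        /// The caller must guarantee the invariants listed on the struct.
        pub unsafe fn new(data: *const u8, len: usize) -> Self {
            ForeignBytes { data, len }
        }

        pub fn as_slice(&self) -> &[u8] {
            // SAFETY: guaranteed by the invariants documented on the struct.
            unsafe { std::slice::from_raw_parts(self.data, self.len) }
        }
    }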
I'm just sending the organizers a link to the slides that I have, as well as the research that I've done on all the projects, because there's a little bit more material there that you might be interested in. So, I've received our first question, which I think is a really fascinating one. It's from YouTube, and I believe it's, my English pronunciation is going to be terrible at this name, I'm sorry, C.C. Karras: what would be your advice to new Rust programmers who are too happy to use unsafe? So, to rephrase: you have someone who is new to Rust, maybe they have a lot of experience or they've just graduated from university, and they look at unsafe and say, oh, using unsafe just means I can apply the same idioms that I am used to in C++ or in some other language. My advice would be to ask them to step back and say: well, why are you learning Rust? You can use pointers manually in other languages, and in fact in a way that is much more ergonomic; Rust makes it very fussy, it's not nice to use pointers in Rust, and I think that's done intentionally, it's intentionally clumsy in some sense. And if you have come to Rust because you want to write in a safe programming language without a garbage collector, then use the compiler to your advantage. There is a second question from YouTube, and again I apologize for my English pronunciation, especially the stress on the wrong part of the word, from Michel Lesovsky: great lecture, thank you; which strategy is the best in your opinion? I am not an expert in this; I would defer to the unsafe code guidelines working group. But if you were going to ask me for my advice, it would be to add a comment to each use of unsafe. That will force you to mentally check; it will slow you down and cause you to pause and double-check that what you are doing is sensible. That would be my one takeaway. From Twitch: you have covered calling unsafe code inside functions; what about creating functions which are themselves safe? Do I have any thoughts on that? I do, and that is... I could potentially try to scroll all the way back. So where was I? I was over here; I think it was exa. It has created a Rust function, a safe function, around an unsafe C function. That wasn't quite what I wanted... no, that is not right either. My strategy for this is to write minimal wrappers and create functions that are very, very easy to understand. Another question from Twitch: thanks for the talk; you mentioned some lints we can use to assist developers with maintaining code well and safely; do you have any changes to the language, or any existing lints, that you would like to see introduced to make this easier and more reliable? I would say to that: use Clippy. So that is, we can find it, rust-clippy, there we go. Clippy is a community repository of good practices, hundreds and hundreds of these things, and I love that the example code in the lint descriptions is intentionally wrong; do not use it in your own projects. Now, I expect that the code-comment linting that is applied by, I think it was one of the projects anyway, I think it was the standard library, I assume they have upstreamed that lint into Clippy. That is where I would go, and if they haven't, well, there's a job for this weekend. Now, oh, this is great, again from Twitch, MROWQA: why do you laugh so much? I love it.
I laugh because I'm nervous, partially. I laugh because I try to remember that programming is primarily a creative endeavour, and it's okay to learn, it's okay to make mistakes, it's okay to grow. And that means that we need to learn in a way that is supportive. I don't want to start dictating exactly how people should code, and I don't want people to feel as if they are inadequate in any way. One of the reasons that I took the time to learn Rust, and one of the reasons why I teach Rust, is that it empowers everyone to write strong, safe software. Even me; I am a mediocre programmer in many senses. I developed in Python for a long, long time, probably over a decade, and spent a long time in data science. Every piece of documentation that I read about writing C extensions to make Python go faster said, almost in the first paragraph, "only experts should do this", or "this is dangerous", or some sort of language like that, and I was always intimidated. Rust was the first systems programming language community that made me feel welcome, and I want other people to feel welcome as well. It doesn't matter where you come from; it doesn't matter whether you are from the San Francisco Bay Area or whether you are from South Asia. You should feel like a participant, you have a place at this table, and I want everyone to feel welcome. Okay, a question from YouTube, from Andre Bogus: if my unsafe code is unsound but works, for example it might be faster, should I change it for a slower version that is guaranteed to be sound? We have examples of this in the standard library... or, not so much. It probably is the case that if you have something that is fast but dangerous, it works for known good cases: if you set everything up correctly, it will go fine, but if things are set up badly, it might blow up and explode on you. For anything that takes user input, or anything you are going to install on someone else's computer, I do not recommend that you expose your users to security vulnerabilities caused by buffer overruns. But if you are the kind of person that likes to play dangerously, here is my advice for you. Create two methods. The first one, the one that you should use primarily, is safe code, pure safe Rust. Secondly, have another method that has the same signature but with _unchecked or _unsafe on the end, and describe what you have done to ensure that it is safe when you call it. If you know that certain edge cases will cause it to explode, call the safe method whenever you might be in a situation where they could occur. But if you know your data, if you have initialised everything, or if you are doing scientific computing, for example, and you know all of your data intimately, exactly how it is laid out in memory and so forth, you can probably guarantee that the edge cases will never occur. I am not smart enough for that, so I would urge you to stick with safe. But if you want to, then that is over to you. Okay, another question from YouTube: have you ever had situations where your unsafe code has broken the safety of Rust, and if so, can you give an example of how that came about? No, I have not. I haven't broken the Rust compiler.
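The safe-method-plus-_unchecked-method pairing suggested above looks roughly like this; the type and field names are invented for illustration:

    pub struct Samples {
        readings: Vec<f64>,
    }

    impl Samples {
        /// Safe by default: bounds-checked, returns None on a bad index.
        pub fn value_at(&self, i: usize) -> Option<f64> {
            self.readings.get(i).copied()
        }

        /// Fast path for callers who can prove `i` is in range.
        ///
        /// # Safety
        /// `i` must be less than `self.readings.len()`.
        pub unsafe fn value_at_unchecked(&self, i: usize) -> f64 {
            // SAFETY: the caller upholds the bound documented above.
            unsafe { *self.readings.get_unchecked(i) }
        }
    }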
To finish answering that: I haven't broken it myself, but I know that some other people have. For example, I believe there was a regression in 1.45 relating to const propagation, a strange edge case to do with references. I haven't experienced that myself. Okay. From YouTube: is it possible to have too many comments in your project? That slide with TerminusDB had quite a lot, in my opinion. I agree; it's very, very difficult to know where the balance is, or whether the balance even exists. It depends on the maturity of the project. I can imagine that if you know there are very competent developers, people who are very familiar with the domain, you don't need many comments. If, when some people see unsafe, their reaction is "hey, this is my play area, I love this stuff, I love pointers, it's all fine to me", then maybe they don't need comments. But if you're the kind of project that uses unsafe in less than 1% of the code base, which is probably most projects, and whenever someone sees unsafe they tense up slightly because they're worried they're going to crash their own system, or potentially introduce security vulnerabilities for their users, then add more comments. I don't know if every single line needs to be commented, but I thought it was interesting to find that in production code; at the very least it provides more robustness. Okay, from Twitch, SolarRuffle (is that somehow reflecting back on me? the username is SolarRuffle): I really enjoyed your talk, thanks, but I have one question. What would be some good starting points for developers who are new to lower-level programming languages who want to learn how to safely write unsafe Rust? Do you know of any resources that teach these skills from a Rust perspective? SolarRuffle, do I have a book for you. I feel very... I don't like doing this because it's an ad, and I apologize if this offends anybody, but I do think that my book is designed to do exactly that, and so I think it's appropriate here. The Rust in Action book teaches you both Rust and systems programming at the same time. It will introduce you to unsafe; it doesn't go through all of these project comparisons, but it does make you familiar with some of the reasons why you might use unsafe. Hopefully that's not too much of an ad, but I would recommend taking a look, checking the reviews, and deciding whether or not this seems like something that might be sensible for you. On YouTube, Andre Bogus says: by the way, ask me if you want a lint in Clippy. I think I do want a lint in Clippy; I think the community would love more robustness around unsafe code. And I spent a little bit of time, probably too much time when I should have been preparing for the talk, inspecting Clippy. I originally wanted to run linters over every public crate on crates.io and try to find out... I wanted to go and inspect the uses of unsafe to see what people are doing. Are they manipulating pointers? Are they doing weird things like transmutation, interpreting the bit patterns of an array as integers and so forth? But no. Oh, we've actually got quite a few... oh, two more comments. So, to answer that question: yes, add the lint. I'm sure that llogiq, the maintainer of that project, will be happy to accept it. From Twitch, Lewis Code: does Rust have a way of marking unsafe code and working smoothly with it, like Haskell's IO monad? Maybe an unsafe trait marker?
That is the unsafe trait... sorry, that's the unsafe keyword itself, I would say. There are people who are type theorists and who understand Rust's type system very innately, and I'm just going to admit some ignorance there. You could possibly have marker traits. You could create a marker trait which implies unsafe, I believe, and I'm just thinking off the top of my head here, I've never done it. I think that there are unsafe traits, and so as soon as you implement one of those, you mark your own type as unsafe. But yeah, I'm going to express a little bit of ignorance there and just leave it to the type theorists in the back channel. Ask on users.rust-lang.org and find out. Because again, participating in the Rust community is exactly why Rust is great. The technical reasons are fine and perfect, and you can see that other language communities are adopting a lot of them: D is investigating lifetimes, and C++ has changed its practices to use smart pointers rather than raw pointers, for example. So the technical parts of the Rust community, I think, will be adopted by other languages. What we have that's special is the participation, the community aspect. And so if there's something that's unfamiliar to you, I would strongly encourage you to ask. Now, Michael Ward on YouTube asks a question: why do you feel bad about promoting your book? I'm halfway through and it's great. Okay. I feel bad about promoting my book because people have to pay money for my book. I've been contributing open source code for like 15 years or something like this, and I just find it very difficult to ask for money from people, especially from people who are learning and who don't know what is good and what is not. You know, about 80% of people that read my book think it's excellent, about 10% to 15% think it's very good, and about 5% want to throw it in the fire. Now, I don't want to recommend that you should buy it if you're going to throw it in the fire. And so, yeah, I don't know. It's just a personal thing. I should get better at asking for money. I'm going to give like an extra 10 to 15 seconds for me to make a decision on the most interesting question. The most interesting question actually receives a prize: I think Manning, my publisher, has provided some codes to the organizers. So if you think of anything, ask a question. I'll try and include it, and I'll try and pick a winner. If you are the winner, please stay on whichever channel you have asked your question on, such that the organizers can contact you to send you the code. And if you are interested in buying the book, go to Rustinaction.com, there's a 40% discount code on that page. Now, I actually really liked the first question: what would be my advice to Rust programmers who are too happy to use unsafe? I think that this is very interesting from a psychological point of view, because it talks to the culture of people that are new to Rust. There are multiple mental shifts that happen when you learn the Rust programming language. One of them is this idea that you need to trust the compiler. The compiler is not your enemy. The compiler is actually your ally. The Rust compiler is on your team.
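Going back to the unsafe-trait question from a moment ago, the mechanics look roughly like this; the trait and its contract here are invented for illustration:

/// An unsafe trait: implementing it is a promise the compiler cannot check.
/// The (invented) contract: "this type is plain old data and any byte pattern
/// is a valid value".
unsafe trait PlainOldData {}

// The implementer takes responsibility for upholding the contract.
unsafe impl PlainOldData for u32 {}
unsafe impl PlainOldData for [u8; 16] {}

/// Calling this is safe; the `unsafe impl` above is what carries the burden.
fn zeroed<T: PlainOldData>() -> T {
    // SAFETY: the PlainOldData contract says all byte patterns are valid,
    // so an all-zero value is valid too.
    unsafe { std::mem::zeroed() }
}

fn main() {
    let x: u32 = zeroed();
    assert_eq!(x, 0);
}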
I definitely think that that one, for me, is the most interesting or the most compelling question. I think it's the one that, if I were to have a beer with you all after the conference, because we're all going to the pub afterwards, right, that's the question I would spend a lot of time trying to flesh out and explain and understand. That is the winning question for me. I would just like to thank the organizers based in Poland for organizing this worldwide event. I think it was fantastic, and it's been fantastic presenting to you all. So thank you so much. But first, I've just got one more announcement, and that is that there is a hackathon. If I go to RustyDays.org and I look for the hackathon link... sorry, my internet has decided that it wants to go very slowly. Here we are. The topic for this year's hackathon is emergent phenomena. Or, perhaps if you prefer: can you amaze us with simple rules? Let's create an amazing result with a simple rule set. If you've ever heard about cellular automata, fractals or similar constructs, this is what we're kind of talking about. So allow me to do one more plug. If you would like to learn about writing a fractal in Rust, go to my YouTube channel. I have a video which describes how to generate the Mandelbrot set in Rust, and that could possibly be a good starting point. I've got some other generative art tutorials in there as well; in particular, one is about creating some generative art in Rust, and we kind of work through a tutorial. So I would encourage you again to look at my YouTube channel. That is youtube.com slash timClicks. And great! I encourage you to participate in the hackathon. It's going to be fantastic. Have a lovely weekend and a lovely evening. And I'll see you at the pub, because this is the last talk of the conference. I'll see everyone later. Thank you.
|
Is it safe to use unsafe? Learn why some projects need unsafe code and how projects manage its risks. This talk will briefly discuss what the unsafe keyword enables and what its risks are. The bulk of time will be spent discussing how projects manage those risks. It finishes by providing recommendations based on that analysis. Projects surveyed include: * Servo (Mozilla) * Fuchsia OS (Google) * fast_rsync (Dropbox) * winrt-rs (Microsoft) * Firecracker (AWS) * Linkerd2
|
10.5446/52207 (DOI)
|
All right. Welcome, everybody. Welcome to Rusty Days. Let's get started. I'm your speaker today. I am a Firefox telemetry engineer working at Mozilla. I'm also a Rust community member, and sometimes I go scuba diving. You'll find me online, and you'll find the slides online already; I'll link them later on Twitter and in the chat as well. When I'm not doing any of the mentioned things, I also organize Rust conferences. I organize the European conference RustFest. Just as Rusty Days, it should have happened as an in-person conference this year. It should have been in the Netherlands, but unfortunately, that didn't happen. It's still happening this year, though. It's going to be online and it's going to be in November, so stay tuned for more information on that as well. I started looking at Rust in 2014. That's six years ago. I like to make the joke that back then, we had barely just gotten version numbers on Rust releases. We've really come a long way since then. Today, I'm here to talk about how my team at Mozilla, the telemetry team, built a cross-platform library called Glean. We deployed that to at least four different platforms and integrated it with six programming languages, and we still feel like we can maintain all of this. Let's start off with some background around this project. We will get to some actual Rust code later. If you're watching this on the recording afterwards, you could skip ahead, but if you're on the live stream or actually interested in all of this, please bear with me. I'm making sure this is interesting for all of you. Let's talk about Firefox telemetry. What is that? Firefox does collect data about how it's used. We collect usage and performance metrics for the product that is Firefox desktop. We then bundle up these metrics into what we call pings. That's essentially just a bundle of all these metrics that we can send out as one big thing. When doing this, we follow what we call the lean data practices. Let me get into a little bit more detail so you understand what's happening. This shows the graph of one of the metrics we collect. This metric is the time spent running the JavaScript garbage collector. The engineers behind the JavaScript engine want to know how the garbage collector actually behaves. Most of our developers on Firefox run some high-end laptop, a MacBook, or a ThinkPad, or something else. On their laptop, obviously, most of the time it actually runs fast. But they can't test all the hardware that's out there. They want to see the data that's coming in from actual users and how it behaves. By collecting the time the garbage collector takes to run on any arbitrary website a user might visit, they can make decisions about whether they are satisfied with the performance or not. There might be outliers in either direction, but as you can see in this graph, it is more or less a normal distribution, and I guess for them, this is what they're aiming for. This is just a single metric, but it's already useful to describe what we're doing. This is only about a technical data point inside the browser. This is not user data, but this is usage data. We don't want to collect any data about the user specifically. We only want to collect data that tells us how the browser is actually behaving. To make sure we stay true to this, we follow what we call the lean data practices. The lean data practices essentially split into three principles. The first of all is staying lean. We only want to collect the data that actually answers questions.
So whenever we do want to collect data, we always start with a question we'd like to get answered. We also make decisions about how and for how long we collect this data. Most of the time we only need it to actually answer a question, and when we have answered this question, we probably don't need that data anymore. The second principle is building in security. As I mentioned before, we really don't want to track or collect any data about our users. We only want to see how the browser is behaving. So we don't even collect any data that would identify any users in any detail. And last but not least, we want to engage our users. Just as the Firefox code base itself, the data collection code and data about how we do all of this is freely available. Users can look at what the browser does. And on top of that, users can also look at what their browser specifically is collecting. And, most important, we will always make it easy and possible for the user to actually opt out of this and not send us this data if they are not feeling comfortable doing so. We always follow these principles, and we actually have people that ensure that every piece of data we collect follows these principles. If you're interested more in the thinking and the ideas behind all of that, you should check out a talk by my colleague chutten titled Collecting Data Responsibly and at Scale. It's from last year's StarCon 2019 and it's a really good resource describing how we do this. So to summarize a bit, and to describe what telemetry is in a very internet-compatible, boring way: we're actually just storing some integers and then sending them JSON-encoded to a server. That's about all we do. Now, when I joined Mozilla in 2018, there were two developments happening. The data teams at Mozilla decided that the current system wasn't good enough anymore and they needed to build a new system to support data collection for Mozilla products in the future. We knew about more products than just the desktop browser, and we needed a system that can scale to the size of a browser population of millions of users. We also gathered feedback from the developers that need to use this telemetry collection and also the analysts that need to finally look at the data. The outcome of all of this is Glean, which is the new development going all the way from the SDK, that's the code that's actually landing in the products, over to the pipeline, which ingests the data, puts it in the database and maybe transforms it to some extent, and then also the tooling to analyze and look at this data. This is all summed up in the Glean project. If you're interested in a lot more details there, check out the Introducing Glean blog post by Georg. The second thing that was also happening is that our mobile teams decided to build out a new version of the Firefox for Android browser. They wanted to build this new version of the browser on modern principles in a modular approach, using the existing modern Firefox code base to be precise. They also knew that they wanted to have data about how this browser behaves on actual end-user phones. The landscape of Android is even wider than the hardware landscape for mainstream operating systems on the desktop, so it's even more out of scope to test it on all those platforms. But if we can get actual performance and usage data of the browser in the wild, then we can make decisions about what works and what doesn't.
Now, let's take a look into what the current telemetry API as used in Firefox desktop looks like. This is a simple function call. All it does is increment a counter that's identified by some name, and it increments that essentially by one. Now, the first thing you see is the function, scalarAdd, which is called on some global telemetry object. Scalar is our naming for just a single value that can be changed. Second, you have a string that's used as the identifier for what we call the metric. That's the underlying data point that we actually want to collect. In this case, it's browser engagement, max concurrent tab count. In this case, the name, to some extent, already describes what this data should be. And then we increment it by one. Now, you might wonder, where does this data actually come from? There is actually one single source of truth in the Firefox code base where this data is put in. That's scalars.yaml. Scalars.yaml is a definition file that holds the identifier and a lot more metadata about this metric. It's organized into categories and names, where in this case browser engagement is actually the category and the name is max concurrent tab count. It has more data in there. For example, it lists the numbers of the bugs that implemented or changed this metric. It has a description, so that everyone coming by can look up what this metric actually does, and at best they don't need to know all the surrounding code where this data is actually collected to get an understanding of what they should be seeing. Then we have this expires field, which actually tells us that this data won't be collected beyond Firefox version 81. As said before, sometimes, or most often, we don't actually want to collect the data forever. Instead, we stop collecting this data at a certain version, and by then whoever is responsible for this needs to make a decision about the data and how it answers the question. And last but not least, there's also some form of owner in this definition file. We always have at least someone assigned that's responsible for this metric. So if anything goes wrong there, or someone needs more information around this data, or needs to find some analysis that happened, they can look up who owns this metric and can talk to that person directly. And sometimes that's not a single person; sometimes that's a full team. So how would a developer go from this block of definition in the YAML file to this API call? There's a couple of things that, over the years, we figured are actually not nice for the developers anymore. First of all, we're passing in some opaque string that's used as an identifier. But suddenly there are underscores and dots, and it's not really clear what the pattern behind that is. In the definition file, browser engagement was actually written with a dot, but here it's an underscore, but then there's a dot between the category and the name. And all of this is very opaque to the developer that needs to use this. Also, if they mistype this name, there's no indication that something doesn't work. They need to actually run this code and check that telemetry is recorded by doing the action they're recording. And also, what if this is not a counter? What function do they need to call? scalarAdd only makes sense for counters. So just from this idea, we can get to a point where we can design a nicer API already. So this is the API that we're aiming for. Let's split this up a little bit. We have browser engagement dot max concurrent tab count dot add with a one.
Now, browser engagement, that's a category, and that's an actual object available in the source code. And the counter metric is now identified by an actual field on this object. And as we know this is a counter, we also know what methods a counter supports. In our case, that's add, because we can only increment this counter. And incrementing works with an amount, so we can pass it a positive integer. And because we have this definition file, we have all of this information about the metric readily available before compile time. So we can do stuff with it to expose all of this information to the developer. If we get to this API, what works automatically is tab completion for browserEngagement and for maxConcurrentTabCount. What also works is tab completion for the right method that you can call on this. And you automatically get a type check for what needs to be passed to this method. So this is what we're aiming for. So let's sum up the telemetry requirements for this new system. First of all, we want to keep this declarative definition of the metrics. This scalars.yaml file that we had should still exist in some form. We might even need to extend it a little bit. We also knew that we first aimed for Android, so we needed to have this available in Kotlin. But the other thing we knew is that if we built the system, it needs to work on multiple platforms at some point, because we also want to get that into our other products. That could be iOS, that's our desktop browser shipped on multiple operating systems, and potentially other applications as well. And to not redo all the work, all of this should be bundled in a single core implementation that can be used cross-platform. And last but not least, we want an ergonomic API. As I said before, we're targeting different platforms, which also means we're targeting different programming languages in which those products are implemented. And we want essentially the same nice API, but with the right feeling for the language we're implementing it for. Now, Glean is the resulting implementation of all of this. Glean is the project that we built with a Rust core library that extends to multiple platforms, which we now all support. So let's dive into the Glean SDK stack. This is a very rough overview of what the Glean SDK looks like. On the top, you have the different app implementations. You have an Android app, you have an iOS app, and potentially other ones that I didn't even draw yet on this screen. Each of these apps uses the Glean SDK as a dependency and calls into it. The top layer of the Glean SDK is the language implementation. That's either Glean Kotlin, which provides us with Kotlin bindings and also Java bindings, or Glean Swift, which gives us a Swift API on top of all of this, and then we have a couple more. Just below that, we have the Glean FFI. That's the connecting layer between the high-level dynamic language for the app and the lower-level actual implementation of Glean. On the side, we have this little tool called Glean parser, which is a little connection to the definition file, as seen before, that generates code so the Android and iOS developers can actually benefit from having this information laid out as objects and available in their editors. So let's look at the lowest layer first, Glean Core. Glean Core is a plain old Rust crate. It contains some structs, and those structs hold some state about how Glean works. It holds the database. It knows how to write and read from the database.
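The real Glean SDK generates this strongly typed metrics API for Kotlin and Swift from the YAML definitions; the shape of the idea can be sketched in Rust as well. The names below mirror the talk's example and are purely illustrative, not generated Glean code:

// Illustrative only: a hand-written stand-in for what a code generator could
// produce from the metric definition, so that the metric name, its type, and
// its allowed operations are all checked at compile time.

use std::sync::atomic::{AtomicU64, Ordering};

pub struct CounterMetric {
    identifier: &'static str,
    value: AtomicU64, // stand-in for the real storage backend
}

impl CounterMetric {
    pub fn add(&self, amount: u64) {
        self.value.fetch_add(amount, Ordering::SeqCst);
        let _ = self.identifier; // a real implementation would persist under this key
    }
}

pub mod browser_engagement {
    use super::CounterMetric;
    use std::sync::atomic::AtomicU64;

    pub static MAX_CONCURRENT_TAB_COUNT: CounterMetric = CounterMetric {
        identifier: "browser.engagement.max_concurrent_tab_count",
        value: AtomicU64::new(0),
    };
}

fn main() {
    // Tab completion, the right method, and the right argument type all come for free.
    browser_engagement::MAX_CONCURRENT_TAB_COUNT.add(1);
}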
It keeps track of some internal metrics, so we not only measure the application, but we can also measure Glean itself and provide some metrics that are used in essentially every implementation as well. Now, the advantage of this is that this is really just a Rust crate. All the Rust tooling works. We can test this by just running cargo test, and it works. It works on every normal operating system, be that macOS, Windows, or Linux. And that makes development of this part really, really comfortable for us. We can rely on all the nice things Rust provides for us. We can rely on its guarantees. We can use the ecosystem and use other crates available. We also implement all the metric types we support. This is essentially a very simplified version of the counter we talked about before. The counter has an add function with an amount, and all it needs to do is look into the database and increment whatever is in there. We have some other metrics that are a bit more complex, but most of the logic for them is still implemented in Rust, which allows us to test them individually. Now, just above this Glean Core implementation is the Glean FFI. This, again, is the connection layer between the Kotlin implementation and the lower core layer. FFI stands for foreign function interface. Rust, from the beginning of becoming a thing, was able to interface with other programming languages through this foreign function interface, FFI. Simplified, the FFI is essentially a deterministic and simple naming of symbols, such as functions, as you have them, plus the C-compatible ABI, the application binary interface, which essentially describes how things are laid out in memory, how functions need to get called, and where to pass parameters into those functions. Rust is able to cross the FFI in both directions. The first is calling C functions. That's when you want to integrate an existing external C library and use that to do your work. The way you do this is you copy and convert the declarations of all the function calls that you know from this library into their Rust equivalents. You then put them in this extern block and also need to swap around the types so that you're not using the C types but their equivalents in Rust. And then you can just call this as if it is a normal Rust function. During the build, this all hopefully gets linked together properly and you get a working library out of this. Of course, this is all very, very unsafe in terms of how Rust defines unsafe. Because you're crossing over into C land, you can't rely on all the nice things that Rust provides you, such as ownership and borrowing, lifetimes, or even the layout of certain data in memory. Now, the other way works just as well. You can get called from C as well. For this, you need a little bit more annotation to make it all work. What you see on the slide here is a Rust implementation of a function we're going to be able to call from C. The very first thing you see is the no_mangle attribute. This attribute tells the compiler to use the function name as-is and place it into the final library. If you don't have this, the compiler will actually change the name to encode a little bit more information into the name of the function. But to be callable from C, you need the plain name. The next thing that you see is this extern C annotation. This tells the compiler what ABI it should expect for this function.
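A minimal sketch of both directions just described, using only the standard library; the declared abs function comes from the C standard library, which is already linked on typical platforms:

use std::os::raw::c_int;

// Direction 1: calling into C. This declaration mirrors a function from the
// C standard library.
extern "C" {
    fn abs(input: c_int) -> c_int;
}

// Direction 2: being called from C. `no_mangle` keeps the symbol name as-is
// and `extern "C"` selects the C calling convention, so a C caller can
// declare `int32_t double_input(int32_t);` and call it.
#[no_mangle]
pub extern "C" fn double_input(input: i32) -> i32 {
    input.wrapping_mul(2)
}

fn main() {
    // SAFETY: `abs` is a plain C function with no preconditions beyond a valid i32.
    let x = unsafe { abs(-3) };
    assert_eq!(x, 3);
    assert_eq!(double_input(21), 42);
}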
And in this case, the ABI describes how the parameters and arguments are passed into the function and how you return data from this function, whether that's through the stack or certain registers. Now, the C ABI is much less expressive in what it can do compared to what Rust is able to do, so you will always have cases where you need to convert into something that the C side can actually understand. And you can see in this little example one of the cases where that comes up, and that's strings. In C land, strings are always encoded with a null byte at the end, whereas in Rust land, strings always carry their length. So at the FFI you need to convert back and forth between those. Luckily, Rust has tooling for that. You can just use CString, which is essentially a wrapper around a Rust string that appends the null byte and gives you a pointer to it. Next up, to make this all work, you actually also need the declaration of your function in C, so C knows what to call and the compiler knows how to do this. Usually you would write this somewhere in code or provide a header file, but there's a tool that does this for you. cbindgen can create the C header automatically if you expose a public C API. So again, on the left side, you have the Rust code, which just implements a Rust function that is exposed as a C API function. And if you throw this through cbindgen, you get the output on the right, and that's the declaration in C syntax. This is really, really a nice tool, especially once you grow beyond just a single function, so you don't need to type it out all the time, and you definitely don't make mistakes when typing it out. One more thing that we use in Glean is the crate called ffi-support. ffi-support is a small library that helps us simplify implementing all this FFI stuff. It's not written by us, but by another team at Mozilla, the application services team. They do essentially similar things as we do, writing Rust libraries and shipping them to mobile platforms. They came up with ffi-support, and we are very happy users of and contributors to this crate as well. One thing ffi-support gives us is the IntoFfi trait. The IntoFfi trait is a mechanism to express how to convert Rust types into FFI-compatible types. With a few other little things in the code, this allows us to basically write Rust code and then have the conversion to a C-compatible type done automatically whenever we pass this through our FFI functions. I said before there's CString that does this for strings, and ffi-support essentially reuses that for strings, but it implements it for other types as well. Another thing ffi-support brings us is FfiStr. I mentioned strings a couple of times now as a null-terminated list of bytes. What FfiStr gives us is essentially a safe wrapper around this. When we get a string passed in, we have a Rust type we can work with. We can turn this into an actual Rust string or read data out of it. FfiStr also adds a lifetime to what we get in, so we can rely on the compiler to tell us if we misuse this data. We cannot, for example, store this data anywhere. If we want to do this, we need to allocate it into an actual Rust string and store that away. The C side could just remove this data after this function returns, so we can't rely on it. The third thing we get from ffi-support is the ConcurrentHandleMap. This is essentially a locked map that gives us handles to the data we insert.
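A sketch of the general pattern behind such a handle map, using only standard-library types; this illustrates the idea, not the actual ffi-support implementation:

use std::collections::HashMap;
use std::sync::Mutex;

/// Hands out opaque u64 handles instead of raw pointers. Everything is behind
/// a single lock, so concurrent callers from any thread are serialized.
struct HandleMap<T> {
    next: u64,
    items: HashMap<u64, T>,
}

impl<T> HandleMap<T> {
    fn new() -> Mutex<Self> {
        Mutex::new(HandleMap { next: 1, items: HashMap::new() })
    }
}

fn insert<T>(map: &Mutex<HandleMap<T>>, value: T) -> u64 {
    let mut guard = map.lock().unwrap();
    let handle = guard.next;
    guard.next += 1;
    guard.items.insert(handle, value);
    handle
}

fn with<T, R>(map: &Mutex<HandleMap<T>>, handle: u64, f: impl FnOnce(&mut T) -> R) -> Option<R> {
    let mut guard = map.lock().unwrap();
    guard.items.get_mut(&handle).map(f)
}

fn main() {
    let metrics = HandleMap::<String>::new();
    let handle = insert(&metrics, String::from("counter"));
    // The u64 handle is what would cross the FFI; the object never leaves Rust.
    let len = with(&metrics, handle, |s| s.len());
    assert_eq!(len, Some(7));
}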
These handles can be expressed as simple integers, and simple integers we can just pass back over the FFI. When we get back such a handle, we can get back to the object that is saved under this handle, and we can also ensure that the handle we got actually maps to the type that we expect. If we were to use pointers to any of the data, the other side could easily screw up the pointer handling or pass in the wrong pointers, and we would never know until we actually see a crash. By using a ConcurrentHandleMap here, we get another benefit. This is all behind a lock, so even if we get called from a multi-threaded application, we are automatically thread safe, simply because our data is behind a lock and we never allow two invocations to run on that data at the same time. Now that we talked a lot about the FFI, let's talk about what the FFI compiles to. Let's talk about compile targets. Most people will probably write, compile, and run their Rust code on their own machine. You have your laptop, you type your Rust code, you type cargo run, it builds and runs the code. But sometimes you probably also want to compile for that other machine over there. And as it turns out, you're programming on Windows, but that machine over there is Linux, because it's your server. So how do you do this? That part is called cross-compiling. Now, luckily, the Rust compiler is a cross-compiler by default. It knows how to compile for other targets. If you're using rustup, you can just type rustup target list and you get a list of all the targets that Rust actually knows. These are over 80 targets that rustup can already deliver to you. Now, why are some installed and some not? Well, there are still some pre-compiled libraries, like the standard library, provided for all these targets, so you just need to download them first and then you can build your code. Now, what is a target, you might ask? Well, it's essentially the combination of the architecture, the operating system and the ABI you want. Sometimes there's no operating system, so it's just unknown. And most of the time the ABI actually specifies what libc you're compiling against, which then makes certain assumptions about the ABI. The target is expressed as a triple. Now, the triple has three to five components as you see here, depending on who you ask, and I'm just ignoring the off-by-one-or-more error here. Now, this is the list of the ten targets the Glean SDK actually compiles for. We have four targets for Android alone, for both ARM and x86, and one of them is actually the x86 emulator or simulator. Then you have your default targets for the mainstream operating systems, Linux, macOS and Windows. And we also have two iOS targets, where the first one is the ARM target, that's the actual devices, and then you have the x86 target, which is the simulator running on the Mac. Now, there's one little detail that I basically skipped over so far: Rust is a cross-compiler, yes, and just by downloading those targets it can compile for the target you're specifying, but it will most likely not produce a workable library or binary for you to use. And the reason is that Rust has no idea about the linker and the additional libraries that any of these targets need. So you most likely will need some more setup to get this all working. We've boiled this down for our part, and it's actually not too complicated anymore, but it's certainly a thing that you need to think about.
We compile for these targets on CI, so every release build is definitely built for all these targets. It gets a bit hairy to get it all working, but once you get it there, you can mostly rely on it. Next up, we're going to look at the Glean Kotlin implementation. I explained the core crate, and I explained how the FFI provides a layer on top of that. Now we're looking at the implementation that an application is going to use, and we are specifically going to look at the Kotlin implementation. I'm going to make some comments about the others afterwards. Now, how do you make Kotlin talk with a C API, which is provided by the Glean FFI? If you look that up, the first thing you find is JNI. JNI, and here I quote, is the Java Native Interface. It defines a way for the bytecode that Android compiles from managed code to interact with native code. Managed code here means Kotlin or Java, and native code here means anything that exposes a C API; that could be C or, as in our case, Rust. Now, JNI is a bit special. There's a crate out there called jni, which provides you a lot of the things around this to make this all work, but let's look at a Hello World example for JNI. On the left, you see the Rust code that's necessary to make JNI work. Again, you have this no_mangle attribute to tell the compiler not to mix up the name. Then you have an extern system declaration. That's essentially extern C with a little bit of other details on some platforms. And then you have the function name. And the function name here is Java underscore HelloWorld underscore hello. And that's already kind of awkward. You need to encode actual information into this name so the JNI side can find it. On the right side, you see the Java code that is able to call the native code here. You have some class HelloWorld. You define that there is a static method called hello with string input and output, and then you also need to load the library. If you see both sides now, you see that the Java class HelloWorld is actually what's encoded into the function name on the Rust side. Now, I don't want to write code like this, and I certainly didn't. But if you're interested in this, definitely check out Otavio's talk from last year's Rust Latam about how they wrote a project that interoperates with Android, iOS, and WebAssembly. Now, as I said, I don't want to write code like this, and I certainly didn't. So what are the ways to do this? To repeat, we wanted to connect Kotlin with the underlying FFI library that's written in Rust. And the next project you're stumbling across if you look this up is JNA. JNA is Java Native Access. And I quote again here: JNA provides Java programs easy access to native shared libraries without writing anything but Java code; no JNI or native code is required. Again, Java here you can replace with Kotlin; the same applies. We can talk to some native code that's C or Rust or whatever looks like C and call this. So let's look at how the Hello World for this looks. And this is the Hello World code. That's essentially the same code I've shown before. We have the no_mangle attribute, we have extern C. We define our function with its normal name and then we pass around what's essentially C types. On the Kotlin side, we need to load this library. And it looks like a little bit more code, but it's not really. You just need to tell it the name of the library that you're loading. Then you need to list out all the declarations of the functions that are available in this library.
And later, as seen in the last line, you can call this as if it was pure Kotlin code. Under the hood, JNA takes care of passing things back and forth over the FFI. Now, to fit this all together, there's one more thing we need. We need to integrate this into the build system. Android applications use Gradle as their build system most of the time. Now, luckily for me and my team, we didn't need to write anything to make this all happen. Others figured this out before us. As I said, the application services team was a bit quicker in building their project, so they built the Rust Android Gradle plugin that essentially gives us what you see here on the slide. We can add this little cargo block into our build files. We give it the name of what we're trying to build and where to find the code, and we also tell it the targets it needs to compile for. Now, whenever we run a full Gradle build of our library, this plugin under the hood invokes cargo, finds the right toolchains, compilers and linkers for the Android platforms (it does that through the Android SDK and NDK as you've installed them), compiles the code, and copies the generated library into some place where your Kotlin code will actually be able to load it. So that was the Kotlin implementation. Let's briefly look into other Glean implementations. The very first one, and that's on the slide, is Swift. Swift was comparably easy to the Kotlin implementation because Swift actually speaks C. So what we do is we use cbindgen to generate a C header. Then we put that all into the build system, and that's actually the most complicated part of it. And then we are able to call the Rust side from Swift. There's a bit of translation sometimes needed between different data types, but all in all, it's not that much magic after all. The next implementation we had was Python. Python again is pretty similar conceptually to the Swift implementation. We're using cffi, a Python library that can also load the C header. At runtime, it then loads the dynamic library and provides us access to the methods that we defined in the C header. There's a little bit more involved in converting between Python types and C types, simply because Python has a very different layout of types than C expects, but with some tooling and utilities that's not actually too much work, except needing to write all of this. Now, the newest addition in the family of language implementations of the Glean SDK is C#. Implementation-wise, this is pretty similar to Kotlin. We list out all the declarations of all the FFI functions converted to C#, replace some of the C types with their equivalents in C#, then we load the generated library at runtime and can call methods on it. Note, there are more implementations actually coming up that we're working on. And one of the first ones is C++. We're bringing Glean back into Firefox desktop, and Firefox itself is still mostly written in C++, so that's what we need to provide an API for. Firefox is also written in JavaScript to a large extent, so we're also going to have a JavaScript API soon. Because of how Firefox on desktop works, there's a slight shift in how we need to design this, and a lot of the parts are already in Firefox to make this all happen. So if you're using Firefox Nightly, you will soon actually use Glean as one of the telemetry systems inside Firefox. And last but not least, we're actually working on a Rust API.
Now you might wonder: wait, you're writing Rust, why don't you have a Rust API? The reason is that Glean Core is written in a way that the Glean FFI can expose all its functionality, but we didn't bother yet to provide a nice API on top of all of this to make it usable for Rust users. We simply didn't have anyone that needed that. But now we have the first requests to actually provide a Rust API, not only in Firefox desktop, but also outside. So we're going to work on a Rust API that's usable simply as a Rust crate. Now, with the implementations out of the way, I'd like to jump over to some of the challenges we were facing when developing a cross-platform library. Not everything there went smoothly and there were a lot of things we had to figure out. The very first thing, and that came up a lot already, is data types. The data types Rust knows, the data types that are expressed in C, and the data types that are available in all the other languages we're targeting are very different. So we always need to convert between those. Now, the first ones we need to look at are numbers. These are actually pretty simple. The number types in Rust all have their bit width defined, and there are equivalents in both Kotlin and Swift and in the other languages we implement as well. A little bit of care needs to be taken for isize and usize, which are platform-defined size types. They do have their equivalents in Kotlin, Swift and the other languages, but you need to make sure that you're actually using those when passing data over the FFI in any of these types. One little nit there as well: Kotlin and Java don't really have unsigned integer types. So what we default to is mostly using signed types whenever we can, so we don't get a mismatch between the sides. There's experimental support for unsigned integers in Kotlin, so you could use that and rely on that as well. The next type we need to look at is a bool. A bool is essentially one bit of information. It's either true, expressed as a one, or it's false, expressed as a zero. That's one bit. Now, the size of a bool in Rust is actually 8 bits, one byte. The reason behind that is that we can't really address smaller than one byte. So that's not too bad. Where it goes wrong is when we try to interface with Kotlin. Because the way Kotlin sees bools is, well, 32 bits. That's four bytes. That's not only a waste of memory space that we're using up for a simple one bit of information; this is also a complete mismatch between the two sides. So we can't pass bools over the FFI. What we do is we always convert them to a u8 or byte on either side and then just compare whether we got a one or a zero to get out a bool again. The next thing is strings. We use strings heavily. If you use strings as parameters to your FFI functions, you're in luck. JNA, and also the implementations on other platforms, is essentially able to take its version of a string and convert that into a C string. So if you pass a Kotlin string into the FFI, under the hood JNA allocates a new block of memory for that string, copies over the string UTF-8 encoded, if you tell it so, adds the null byte and then passes over a pointer to this allocated data. After the function call, it will actually also make sure this data is deleted again. So what you get on the Rust side is this FfiStr that we can then deal with. And yes, that always includes this double allocation.
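A standard-library-only sketch of the two conversions just described, a bool passed as a byte and a borrowed C string parameter; the function names are made up for illustration and are not the real Glean FFI functions:

use std::ffi::CStr;
use std::os::raw::c_char;

/// Bools cross the boundary as a single byte: 0 is false, anything else is true.
#[no_mangle]
pub extern "C" fn glean_example_set_enabled(flag: u8) {
    let enabled = flag != 0;
    println!("enabled: {}", enabled);
}

/// Strings arrive as a pointer to NUL-terminated, UTF-8 encoded bytes that the
/// caller owns; we only borrow them for the duration of the call.
///
/// # Safety
/// `name` must be a valid, NUL-terminated pointer that stays alive for this call.
#[no_mangle]
pub unsafe extern "C" fn glean_example_record(name: *const c_char) {
    let name = CStr::from_ptr(name).to_string_lossy();
    println!("recording metric: {}", name);
}

fn main() {
    glean_example_set_enabled(1);
    let name = std::ffi::CString::new("browser.engagement.max_concurrent_tab_count").unwrap();
    // SAFETY: `name` is a valid NUL-terminated string that outlives the call.
    unsafe { glean_example_record(name.as_ptr()) };
}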
And that's a bit unfortunate, but for us that's not really a performance bottleneck at all. The next thing is getting strings out, returning them from your FFI. That's a bit more complicated overall. The way we do this is we allocate a Rust string inside the Rust layer. We then turn that into a CString, which adds the null byte, and then we return a pointer to this data and also need to make sure we're not deallocating that data. On the Kotlin side, we get this pointer and then need to get the data out. Some utility functions again make this really easy for us. First of all, on the bottom, you see getRustString. This just reads out the null-terminated string from the pointer, reads it as UTF-8 encoded data, and turns that into a Kotlin string. On the top is the function that we actually use in the Kotlin bindings, getAndConsumeRustString. The little detail you need to know here is that you also need to ensure that the allocation on the Rust side is freed again. So when we have copied out the data, we tell the FFI to just deallocate the just-allocated string again. Now again, we have this double allocation where Rust allocates, then Kotlin allocates and copies over the data, and then tells Rust to deallocate again. So there's a bit of overhead there. We use strings as return values very sparingly, so it's not a big deal for us. The next thing we use is enums. Plain enums are just a list of the variants they have. Essentially, each variant has an integer representation, and for now we're actually just using this integer representation. We convert it to the integer and turn it back into the enum on the Rust side. There's one little detail that makes this a bit of a hassle, and that is that we need to ensure that both the Kotlin side and the Rust side agree exactly about the order of variants in the enum and also the values of the enum. Luckily, if you don't tell them otherwise, they will both start counting at zero, so you can pretty much rely on that. We need to do this translation all manually, but this is certainly a part which we could automate eventually. Now, Rust actually has another enum type. Well, it's not another type, but it's an extension of enums: enums can carry data. With this, you're allowed to have different variants carry different data, and the whole thing is still a single type. On the left, you see a somewhat weird-looking enum in Rust, because this actually includes some C-compatible types. This is the enum we use on the FFI layer. And there's one little annotation on top of that: repr(u8). This is specified in Rust, and what it does is turn this enum, when converted into a C type, into essentially a tagged union, and it uses a tag that's represented as a u8. That's one byte. This tag then essentially just encodes which variant the union has, that is, one of the three variants in this example. On the right side, you see what this tagged union would look like in C. This is what cbindgen actually generates from this representation of the enum. First, you get the different tags. Then you see that the tag is actually encoded as the u8, which is the 8-bit integer. Then, for each variant that has data, we have its own struct. In this case, we have the upload body, which contains the document ID and the body that you also see in the Rust code. But as the very first field, it contains this tag. Last in this sample of code, you see the union.
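In the spirit of the enum being described here (the real Glean type differs in its exact fields), a repr(u8) enum carrying C-compatible data might look like this on the Rust side:

use std::os::raw::c_char;

// Illustrative only: a data-carrying enum with a C-compatible representation.
// `repr(u8)` makes the discriminant a single byte, so a tool like cbindgen can
// turn this into a tagged union on the C side.
#[repr(u8)]
pub enum FfiUploadTask {
    // Variant with data: the pointers would be NUL-terminated C strings.
    Upload {
        document_id: *mut c_char,
        body: *mut c_char,
    },
    // Variants without data only need the tag byte.
    Wait,
    Done,
}

fn main() {
    let task = FfiUploadTask::Done;
    // The discriminant is what the other side reads first to decide which
    // variant, and therefore which fields, are valid.
    match task {
        FfiUploadTask::Upload { .. } => println!("upload"),
        FfiUploadTask::Wait => println!("wait"),
        FfiUploadTask::Done => println!("done"),
    }
}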
Now, cbindgen is actually smart enough to see that neither Wait nor Done have additional data, so all the information you need to carry around for them is the tag. They don't need their own structure. Upload, on the other hand, does need the structure. Making this a union means that the union simply has the size of the biggest of its variants, which in this case is the struct. But because the tag is always the first field in this whole thing, you can always read out the tag and get valid data back. And if it points to Upload, you are allowed to read the upload body. Now, this is the C version. Luckily, we can translate that into Kotlin quite easily. JNA, the library we are using, has implementations of both Structure and Union, so with a little bit of boilerplate that we type out, we essentially replicate this tagged union in Kotlin. There's one little nit again. This is mostly a manual process, translating the Rust code on the left to the Kotlin code on the right, and that is a little bit annoying. And if you get this wrong, you might accidentally read uninitialized memory, and that's not good. So you need to make doubly sure that both sides agree on what the data looks like. This is also a part where I'd like to see this just being automated. I'm pretty sure it would be possible. If you actually want to pass over other data and return it from your FFI, one thing we do is simply encode it as JSON. On the Rust side, we already heavily rely on serde to do some serialization for us, so adding JSON serialization on the FFI was pretty simple. We serialize our data into JSON, turn that JSON into a C string, and return that C string. On the Kotlin side, we read it out as a Kotlin string, then parse the JSON, and can then look at the JSON data. Again, this works for us because we only use this for our test functions. So, for one, we are fine with the performance it provides us, and we're also fine with the little bit of overhead it puts on us as developers, where we need to ensure the JSON representation is seen the same on both sides, both in Rust and Kotlin. And the last thing is: you could just drop all this and actually use protobuf. That's what the application services team is using for much richer data they need to return. The advantage is they can define their data layout once and then have the protobuf compiler generate both Rust code and Kotlin code. Then their FFI simply serializes the data into protobuf, returns that over the FFI, and the Kotlin side deserializes it using the generated code and has an actual Kotlin object to work with. This not only gives you a safer translation where you don't need to do this all manually, it also is faster than parsing JSON.
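A hedged sketch of that JSON route using serde and serde_json; the data structure and the function name are invented, and the real Glean test functions differ:

// Assumes serde = { version = "1", features = ["derive"] } and serde_json = "1".
use serde::Serialize;
use std::ffi::CString;
use std::os::raw::c_char;

#[derive(Serialize)]
struct TestData {
    category: String,
    values: Vec<u32>,
}

/// Serializes the data to JSON and returns it as an owned C string; the other
/// side parses the JSON and later asks Rust to free the allocation.
#[no_mangle]
pub extern "C" fn example_test_data_as_json() -> *mut c_char {
    let data = TestData {
        category: String::from("browser.engagement"),
        values: vec![1, 2, 3],
    };
    let json = serde_json::to_string(&data).expect("serialization cannot fail here");
    CString::new(json).expect("JSON has no NUL bytes").into_raw()
}

fn main() {
    let ptr = example_test_data_as_json();
    // SAFETY: the pointer was just produced by `into_raw` above; reclaiming it
    // hands ownership back to Rust so the allocation is dropped normally.
    let json = unsafe { CString::from_raw(ptr) };
    println!("{}", json.to_string_lossy());
}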
Sometimes it just decides that it thinks some of this code is not needed, throw it away because that's an optimization, but then your final application doesn't run anymore. There are a few things that you can do to prevent this. First of all, JNA actually tells you to include certain rules that ensure when R8 runs, it does not touch the JNA parts. Additionally, you probably want to tell it to not touch your own FFI code either. That's what the last rule does for us. There still might be bugs and we stumbled up on a big one. We found a workaround for that and we're hoping this bug actually gets fixed. This is definitely a part that cost me a lot of time invested into trying to fix this. One more thing I'd like to talk about is extra libraries that you need to include in your build. We're very lucky that we need exactly one external library and that external library is compiled as part of CargoBuild. That's statically compiled into our library through a build.rs file. That luckily just works. The problem is build.rs is just pure Rust code. Anyone could do anything there. It's used to compile external libraries if needed, but everyone does it differently. Some library wrappers just look for the library in your system. That mostly doesn't work if you're cross-compiling. They might try to download this library and then compile it and then link it, but there's still the difference whether they dynamically link it or they statically link it. My recommendation is if you actually need to do this, ensure that you're building and linking your C dependencies statically so they all end up in your final library. Otherwise, you also need to ship all these C dependencies compiled. The second tip is consider pre-compiling the dependencies you have and ship them to your developers. Make them available somehow, especially if you have large dependencies. Not everyone has the full setup to also compile these. Sometimes it can be a real hassle, but your CI system probably already knows how to compile for all these different platforms that your developers are also using. Just pre-compile them, make them available to the developers, and integrate that somehow into your build. One thing you might now ask, what about the platform? So far, we've only seen it talk one direction. That's because it really only goes one direction. We're always calling from Kotlin into Rust. We return data on this function call, but there's never a case where from the Rust side, we call back into Kotlin to invoke any platform behavior. I know that it can work and the JN I create that I presented earlier is able to provide you the functionality to call Java or Kotlin code. So you could use that. But we don't do this, so I haven't looked much further into this. On the other hand, we do rely on Kotlin to do things for us and pass us that data. So some things that Kotlin actually does for us are getting the data storage path because this differs on how it works on Android where there's iOS and certainly is completely different on desktop again. It also gets us some information about the system and the application, like version numbers and operating system versions and so on. We also rely on Kotlin to do the HTTP or network communication for us. We essentially tell the Kotlin side, here's data, now use HTTP to upload it to this server. And the last thing we rely on for Kotlin is time. 
The time sources the Kotlin APIs use and the time sources that the Rust APIs use are slightly different, and we haven't yet invested the time to simply move that over to Rust and use the right time sources there. So before I wrap this all up, I want to look a little bit ahead. For us, the future is certainly Glean. We deployed Glean on the new Firefox for Android browser and it's in active use there. It's in a lot of different products at Mozilla already, and we're currently working on getting it into more to replace the old telemetry systems we have there. One thing I am looking forward to, because it has been on my to-do list ever since we started, is some form of reducing the overhead we need to invest to write all this boilerplate to support all these platforms. Luckily, I'm not alone in this, and other people just do stuff faster than I do, and there is now a project called UniFFI that can generate this boilerplate. Essentially, you can use IDL files, that's interface definition language files, where you describe your interface, what it should look like, and UniFFI will be able to take this, generate the Rust part so you can implement it, generate the FFI part so it can get called, and also generate the Kotlin part so you actually have the integration into the platform. It's not there yet, but this is something I definitely want to help develop in the next few months. Then, I talked about it a couple of times: we're bringing Glean back into Firefox desktop. We call this Project FOG, Firefox on Glean. We're currently implementing and designing the APIs we want to see there, like C++ and JavaScript, and also the Rust API will be used there. That's very exciting, and we hope to get this into Nightly and actually working in the next few months as well. One thing more that I wanted to achieve with this talk is, for one, putting out this knowledge: we're doing this, building cross-platform libraries targeting mobile, but I also want to reach out and see who else is doing this. The documentation around all of this is a little bit sparse. I'd like to see this getting talked about more, more tools being developed for it, more documentation written around all of this. As one last thing, because I find this highly interesting and just to plant a seed in your head: some of these platforms do have ways to do async programming, and now Rust has async programming. Is there a way we can combine those two things and do async Rust programming but run it on the platform side? Thanks to my team at Mozilla, Alessio, Bea, Travis, Mike, and also Georg, who are working with me on this project, and also the application services team that paved the way in a lot of places for us and who are still in contact with us to further develop these ideas. You find the slides online. You find all of the Glean code base in the mozilla/glean repository on GitHub, as well as our docs, and you can read about Glean and how we develop it on our blog. You find me on Twitter, and now I have time for questions. Thank you. All right, I'm getting some questions in. The first is from Dawey on Twitch: how little Kotlin or Java must one write so as to maximize the use of Rust in mobile development? So that's a bit hard for me to describe precisely, because one thing that I left out is that our first implementation was actually in Kotlin, and we slowly migrated parts of it to the Rust side and to using the Rust implementation. So we already had a lot of Kotlin that we just migrated.
I'd say you also need to look at how much you actually depend on using Rust. For us, it was clear that we need to share this implementation and that we can put a lot of logic into this Rust implementation, whereas the Kotlin parts are pretty much stateless. And I think that's where it shines really well. If you know that you need to do a lot of interaction between those two sides, it might not be optimal anymore. So how little or how much you need to write: hard to say. Next, also from Twitch, from Raritly: is Kotlin Native multiplatform at all useful in interacting with FFI? I don't know. I have not looked at Kotlin Native. So Kotlin Native is a way of compiling Kotlin to actual native code and not to JVM bytecode. I have not looked at this, so I don't know how it interacts with the FFI. What I do know is that it would not have been a possibility to just use Kotlin and get that back into the Firefox desktop codebase, because that would have been yet another language that we would need to support, and that would not have flown with the Firefox development team. Another question from Twitch, Dowee again: to what extent have you needed to put deliberate effort, for mobile FFI development, into keeping Rust objects alive for longer than Rust would typically keep them alive? So there are two parts where we need to do this. Glean is essentially a global singleton, so we simply put that in a global static. That's behind a lock and kept alive, so it essentially lives forever. That would have been the case if we hadn't done any FFI either; it's just that we need this global singleton to get to the API we want. The other things where we need to ensure they live are those metric types that we create on the Kotlin side through the FFI, but there we essentially rely on this ConcurrentHandleMap from ffi-support. That's also just a global static where it puts in the data, which ensures it lives until we delete it again. The place where we need to do some effort, and again that's wrapped in nice types for us, is returning strings. If you return strings, you need to make sure that you allocate the data, then forget about it, but return the pointer so Kotlin can read it. That's the one place where you need to do this. If you're not using a ConcurrentHandleMap, you would need to do this for all your other objects. A follow-up question from Dowey on Twitch again: would it be more performant to use bytes rather than a string for exchanging JSON-serialized data? Yes, it would, because we wouldn't need to do the UTF-8 check. Then again, JSON is UTF-8 data, so at some point this would still come up. We could use one less check there, but again, we don't care about the performance there much. It's only returned from test functions, so it will never land in the application. Arian on Twitch asks: where does Rust shine more than any other language in the industry? Oh, that puts me in a corner. I need to be very careful here. Where it really shines for us is simply all the safety it brings us, the fact that we can simply rely on our code being safe to use multi-threaded by default. We don't need to take specific care of that. It just works, and the compiler will stop us if it doesn't. I think that's one of the major places where it shines, especially in this context. We can't fully control the Kotlin or Swift or Python side, and that might be multi-threaded. We need to ensure that the internals in our Rust crate are thread safe. The other thing is the tooling.
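The global-static-behind-a-lock answer can be sketched with only the standard library; this illustrates the pattern, not Glean's actual internals, and it needs a Rust version where Mutex::new is const (1.63 or later):

use std::sync::Mutex;

struct Glean {
    upload_enabled: bool,
}

// The singleton lives in a global static behind a lock, so it stays alive for
// the whole process and concurrent FFI calls are serialized.
static GLEAN: Mutex<Option<Glean>> = Mutex::new(None);

fn initialize(upload_enabled: bool) {
    let mut guard = GLEAN.lock().unwrap();
    *guard = Some(Glean { upload_enabled });
}

fn with_glean<R>(f: impl FnOnce(&mut Glean) -> R) -> Option<R> {
    let mut guard = GLEAN.lock().unwrap();
    guard.as_mut().map(f)
}

fn main() {
    initialize(true);
    let enabled = with_glean(|g| g.upload_enabled);
    assert_eq!(enabled, Some(true));
}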
The other thing is the tooling. Cargo is just absolutely amazing, and for us it was very easy to integrate that, well, not very easy, it was reasonably easy to integrate that into the other build systems as we needed it. And the other tooling around Cargo is just excellent as well. This is where I want to see UniFFI going. UniFFI should be one of those tools that you can just use and it just works. That's where Rust is really excellent already. A comment on YouTube from Jeff: do you think it would be possible for Glean to be extended or built upon to work with distributed tracing? Seems like having a similar portable interface like this would be great. I haven't yet looked at tracing too closely. I think you're asking about the tracing crate, I assume, a fairly new, popular crate in the Rust ecosystem. What I can say, though, is that our telemetry is slightly different from what tracing currently provides. We're not so much tracing across code blocks, but we're more gathering single data points that get collected. In my examples in this talk, it was mostly counters, but what's much more used is distributions of time, distributions of memory. Sometimes it's lists of strings or similar things. I don't think tracing has APIs similar to that. Implementation-wise, I haven't looked at it at all, so I can't speak about that. One more question from Ariane on Twitch. Python for simplicity and data science, C++ for faster computation and low-level stuff, and JavaScript for web, so where do you think Rust is an epic winner? Actually, in all of these, as shown, we are using Rust and interacting with all these languages. I still write C++, as my main job requires that; I still write JavaScript, because Firefox is also written in JavaScript, and I do write JavaScript for the web sometimes as well. And I interact with Python a lot, not only because half our build system now uses that, but also, as you said, it's used in data science a lot. And I think Rust positions itself very well to be the language behind the scenes. It can be used to implement all these things that need to be fast in the background, and the other languages can provide the nice user API. Well, C++ is the one case where they're basically comparable speed-wise, but the other languages not necessarily. So I think this being the language behind the scenes, essentially being the systems programming language behind all these things, that's where Rust fits right in. So that's about the questions I have at the moment. I give you all a couple more minutes to ask me anything. All right, there's one more question from ScriptedFate. That's more of a comment, but I'm still going to read it because it's essentially one of the answers to the earlier questions. There's been some work on OpenTracing, and yeah, for context, ScriptedFate is one of my colleagues. It's not unreasonable to think of an evolution in that direction, but for now, Glean is focused on higher-level metrics, not traces. Yeah, so this essentially sums it up. Currently, we're focusing a lot on the higher-level metrics, so I use counters because that's the easiest, but we try to come up with the metrics in a way that the metric itself has semantic meaning for what it wants to collect. As I said before, we have these distributions, but our distributions are specific to either be timing distributions, which means they measure a time span, or memory distributions, which means they measure some form of memory; that could be allocation, that could be disk space use, or memory used for the JavaScript engine, or things like that.
We're also getting new types to measure rates, which is useful where we need to compare certain things that happen on a website, for example whether something is available versus whether it is actually used, and we have a whole process to build out these types. Another question from Twitch, from Dowey, and that's a pretty good one. Why do we use Kotlin for the network communication rather than Rust? The reason is, well, there are multiple reasons actually. So in Firefox for Android, the Android browser we're building, we're actually using Gecko, that's the same engine that's used in Firefox desktop, and that engine brings its own HTTP stack. That's where the whole implementation of the network communication is done. And on Android, we actually want to use this implementation of the network stack. So we need to pass it back through Kotlin. And then, fun fact, in Firefox for Android the data actually gets passed back into Gecko, which is the C++ implementation that then does the network communication for us. We could implement this in Rust as well, but again, on Firefox for Android we wouldn't. We could do this on other platforms, but we still want to leverage simply the existing network stacks that are there. Android has a network stack, and that's deeply integrated with the system, which also allows you to schedule uploads; the same on Swift. You can schedule uploads, and the system actually does it for you without you needing to control everything. That actually also allows for upload in the background. And it's the same on other platforms. We can't necessarily rely on a new network stack implementation in Rust because the system might use something different to begin with. Flucky on Twitch asks, do you see Glean having use cases in other products? Are there any others who are using it or looking into using it? Is there a plan to make it friendlier for others and drive outside adoption? To all of this I can answer yes, I guess. Glean was developed at Mozilla with the specific needs that we have, but we developed it in a way that the implementation can essentially talk with any pipeline that adheres to the same things. If you keep an eye on our blog, there will soon be a blog post about how you can use Glean and not use the Mozilla pipeline to get to the data. So that's actually one of the things we want to do. We want to turn Glean into something that others could use as well, to profit from the same things we are already putting into Glean. Another question from Jeff on YouTube. Do you have a favorite Rust crate for working with errors? I do not. I hear thiserror is pretty good. I actually ripped out all usage of failure in our code base and part of our dependency tree, because the overhead introduced was bigger than just implementing the enums ourselves and then implementing the Error trait. The reason we don't really depend on any error crate for this is that the APIs we provide actually don't report errors. We went to great lengths to ensure that under correct use, and most of it is guaranteed by Rust, you can't run into errors. The errors you can run into are all handled internally. If we can't write to the database for whatever reason, we still don't want to crash any application that we are integrating with. If you can't write to the database, then so be it, the application still runs. If however we try to write into a metric type and some of the types don't match, or some integer overflows or something like that, we actually use our own system to report these errors so we see them in the data. If anything is wrong with the data we are trying to record, we record an error in our own data and then we see it. We are never exposing any errors from the Rust code over the FFI, so we don't need such a crate at the moment.
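As a rough illustration of what "implementing the enums ourselves and then implementing the Error trait" can look like, here is a minimal sketch; the type and variant names are hypothetical, not Glean's actual error type.

```rust
use std::error::Error;
use std::fmt;

// A hand-rolled error enum: no failure crate, no macros, just an enum plus
// Display and std::error::Error implementations.
#[derive(Debug)]
pub enum MetricError {
    DatabaseUnavailable,
    InvalidLabel(String),
}

impl fmt::Display for MetricError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            MetricError::DatabaseUnavailable => write!(f, "the metrics database is unavailable"),
            MetricError::InvalidLabel(label) => write!(f, "invalid label: {}", label),
        }
    }
}

impl Error for MetricError {}

fn main() {
    // Errors like these stay internal: they get recorded, not propagated over the FFI.
    let err = MetricError::InvalidLabel("spaces are not allowed".into());
    eprintln!("recording error internally: {}", err);
}
```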
Someone on Twitch asks, what do you not like about Rust? I've been doing Rust for so long, there are certainly things that are not perfect. All in all, it's still more bearable than other languages that I'm working with. One thing: for all the greatness that Cargo is, it has cost me quite some headaches because of some things it does. Essentially, the way features are handled still has a lot of problems. That didn't really cause problems directly for Glean, but integrating Glean into a project as large as Firefox causes problems because of the way Cargo unifies features. Luckily for me, this is actually fixed and hopefully getting stabilized with the next Cargo release. So one of the pet peeves I have with Cargo is going to get fixed. Xsacaraj on Twitch asks, do you also plan to build to Wasm for JS? So, to WebAssembly, to use it in JavaScript. This idea floats around in our team quite often, to which my standard answer is: why? When you look at Glean, it works for applications that are running for a long time, that can send data in the background and collect metrics over time. That's the current design of Glean, and that works for browser-type applications quite well, be that on mobile or desktop. The JavaScript API inside Firefox has a lot of direct integration with the C++ and Rust parts of Firefox, so we can't use WebAssembly there because we still need to call into the native code anyway. For websites, I still can see some form of Glean being used, but I don't see the current Glean design working out for websites, because on a website, even in a single-page application, you don't have this long-running thing. At potentially any time this tab, your website, could just reload. You don't have any real persistent storage, so you can't store data offline forever, but you need constant communication with the server. The way we bundle up metrics into pings also doesn't lend itself very well to a website. On a website, if you need some statistics or metrics about your data, you probably want to send them out quickly, so you send out small packets and you send them off before the user leaves the website again. So I don't see the need to compile to WebAssembly. It could just be compiled to WebAssembly, I assume, and it could run in some of the WebAssembly interpreters, because the core crate is just Rust and a little bit of C. All right, so thank you for all the questions. There were some really good ones in there, and now I'm actually tasked with selecting one of these as a good one. Let's see. I think one of the interesting questions here was from Dowey on Twitch, about what was the effort to ensure Rust objects live longer than Rust usually would keep them. I answered that we don't need to do much for that, we rely on our other tooling to do that, but there are ways in Rust to do this on your own by just allocating stuff and then forgetting about it and returning pointers. So that was definitely one of the good questions. Thanks again, everyone, for watching. It was a lot of fun. Thanks.
|
At Mozilla, Firefox is not the only product we ship. Many others — including a variety of smartphone applications, and certainly not just web browsers — are built by various teams across the organization. These applications are composed of a multitude of libraries which, when possible, are reused across platforms. In the past year we used Rust to rebuild one of these libraries: the library powering the telemetry in our mobile applications is now integrated into Android and iOS applications and will soon be powering our Desktop platforms as well. This talk will showcase how this small team managed to create a cross-platform Rust library, and ship it to a bunch of platforms all at once.
|
10.5446/52209 (DOI)
|
Amazing. Good evening, everybody. Thanks for joining for the last day of Rusty Days. We're going to chat for the next 30 minutes or so about observability. In particular, we're going to discuss if the Rust ecosystem at this point in time provides enough tooling to write observable APIs. And we're going to go through the journey of writing one and see how that came along. My name is Luca Palmieri. I work as a lead engineer at TrueLayer. We're going to spend some words about that in a second. In the Rust ecosystem, I contribute to the Rust London user group, where I curate the code dojo. I've been a contributor and maintainer of various crates in the open source ecosystem, Linfa, Wiremock, and some others. And I'm currently writing Zero to Production, which is a book on Rust backend development, which I publish chapter by chapter on my blog, which you can see linked down there. So let's get to the meat of what we're going to discuss tonight. This is a little bit of our agenda. So we're going to see what Donate Direct is. Donate Direct is an application that is going to drive our whole journey. We're going to see what it entailed to bring that application to production. And then we're going to zoom in on three types of telemetry data, which are often collected to observe the behavior of applications in production environments: metrics, logging, and distributed traces. If you don't know what they are or you haven't had any experience working with them before, that's not a problem. We're going to give all the details and I'll walk you through why they're useful and how we collect them. So let's start from the very basics. What is Donate Direct? Before that, let's say two words on what TrueLayer is, which is going to frame the conversation. Now, TrueLayer is a company which operates in the financial technology space. In particular, we provide APIs for people to consume. We mainly provide two types of APIs: one for accessing banking data on behalf of the user, and then one to initiate a bank transfer, once again on behalf of the user. So you pay using your own bank account, without credit cards, without intermediaries of other types. During the COVID pandemic, as many people did, we kind of tried to think what we could do in any way to help relieve pressure or contribute to what was happening. So myself, with a group of other colleagues, put together an application called Donate Direct, which lets you use our payment initiation technology to donate money to charities. So as you can see in the GIF on the left, the flow is very simple. So you select a charity from a list, you specify how much you want to donate, then you fill in some tax stuff, and you get redirected to the flow of your bank. And the money goes through your bank account to the charity without any fee. So TrueLayer did this completely free of charge, matching some donations. Now, as it happens when you do side projects of different kinds, things that are a little bit outside the main product line, you have a chance to experiment with technologies which would be considered a little bit too edgy to be used in the core product. And as you might imagine, considering that this is a Rust talk at a Rust conference, Donate Direct's backend API is fully in Rust. Now, it's not our first round with Rust, but it was our first Rust API in production here at TrueLayer. It was the first time we were actually shipping code that was responding interactively to users coming from the wild web.
So I want to see lots of emojis when I rewatch the stream at this specific slide. Now, as I said, we experimented with it before. So we were doing build tooling, we were doing CLIs, we were doing some weird Kubernetes controllers for non-critical stuff and so on and so forth. But once you actually put an API in front of a user, then the bar for that API needs to be raised significantly, which brings us to our journey to production. Now, to use the words of someone as wise as I am myself, one does not simply walk into production, for a variety of reasons. And reason number one is that, generally speaking, production environments are very complex. So if we look at this diagram, this depicts Monzo's production environment. So each of the blue dots is a microservice in Monzo's cluster. And each of the lines connecting two dots are microservices talking to each other over the network. Now, TrueLayer is not Monzo, so it doesn't have 1600 microservices interacting in production. But you might imagine that our production environment is equally complex in many subtle ways. And what you generally try to plan for in a production environment is not even really the happy case, so is stuff actually working, but you try to predict or to mitigate the ways stuff can fail. So what happens if one of those blue dots, for example, in the Monzo cluster goes down? What happens if one of those blue dots starts responding more slowly than it generally does or is supposed to do, or it doesn't elastically react to surges in traffic? All these kinds of behaviors in a very connected graph like that can cause cascading failures. And it becomes very, very difficult, once something like that is happening, to troubleshoot why and fix it, if possible. Now, what is each of those blue dots actually? In TrueLayer's case, we run a Kubernetes cluster. So all our production applications are deployed on top of Kubernetes, which means those blue dots are Kubernetes deployments. A Kubernetes deployment is just a service definition which is going to orchestrate a bunch of copies of the application. Each of those copies is called a pod, and the pod may be composed of one or more Docker containers. The pods are identical to one another, and so they can be dynamically scaled to match traffic increases, and they can also be on different machines in order to give us redundancy if one of those machines ends up going down for whatever reason. Now, what does it mean to release something to production? At TrueLayer, especially when you look at things from an operational perspective, you want to have a certain set of guarantees about what each of those applications provides from an operational point of view. This means you want to be sure that a set of best practices is being followed consistently. All those best practices are collected in a huge checklist called the pre-production checklist. Now, if you are an on-call engineer, the pre-production checklist is in many ways a very nice thing, in the sense that it gives you a baseline level of quality, especially on the observability side, as we're going to see, and you can be sure that those metrics and those logs are going to be there. Now, if you're a developer who's trying to deploy a new application, the pre-production checklist can be a significant hurdle, because there are a lot of things that you need to do in order to actually see your application out there.
And so, keeping along with the Lord of the Rings metaphor, they might look a little bit scary at this point in time, like something out of the first movie. So, what's the dilemma? On the left, the application developer, or in general, like, you're building something that looks really cool, you want to ship it. And when you are at the beginning of your startup journey, so when you are a scrappy group, doing a scrappy app that is only built by a bunch of people, we implicitly know that what you're doing is particularly risky, because a new product is a new company, you just need to iterate fast, it's fine to just ship it. You put your cowboy hat on, and you just deploy it to production. Now, as you mature along your journey, you start to get bigger and bigger customers. And those customers will have enterprise expectations, so they want your service to be up. You will have SLAs with them, and in general, just your reputation will demand of you higher levels of reliability. Now, if you are at TrueLayer and you work in financial technology, that is even truer, so to speak, than for, say, a random consumer app: you don't expect your payments to stop working, they should always be working. And if they don't, that can cause some serious disruption. So software has to be treated as mission critical as much as possible. And to be reliable, there's a lot of best practices and wisdom that you need to attach to your application. So metrics, tracing, logs, horizontal pod autoscaling, alerts to know when something goes wrong, network policies to prevent escalations, liveness and readiness probes so that Kubernetes is going to restart something when needed, and so on and so forth. And the list can get very long. And that is troublesome. Because in the end, and that's my personal motto, I would say, convenience beats correctness. What this means is that if doing the right thing is in any way, shape or form more complicated than doing the wrong thing, then someone at a certain point in time will find a reason not to do the right thing. There's a deadline next Monday and they really need to ship this application, or adding it is too complex and actually they don't need all that stuff, like this is a small, small thing that is going to run in the cluster, it's not going to get big. Then it gets big, then it fails, and then you have problems. So you want people to be able to fall into the so-called pit of success. They should naturally converge to doing the right thing because doing the right thing is the easiest thing to do. Now, I'm not going to cover all the possible things that we require applications to do, that would be long and potentially quite boring. We're just going to focus on telemetry data. So, sticking to the topic of the talk, are we observable yet? What kind of telemetry data? So we said logs that we ship into Elasticsearch, metrics which are scraped by Prometheus from our applications, and traces that we push into Jaeger for distributed tracing. So we're going to go one by one, look at what they are, why they're useful and how you collect them in a Rust application. So let's start from metrics. Why do you want to collect metrics? Well, generally speaking, you want to collect metrics because you want to be able to produce plots that look exactly like this. So you want to be able to see, well, what's the latency of this application in the last 30 minutes, and potentially break it down by percentile.
So the 50th percentile, the 70th percentile, the 98th and the 99th, depending on the type of application and your performance profile. Or you might want to know what's the response breakdown. So how many 200s, how many 500s, how many 400s and so on and so forth. Metrics, generally speaking, are there to give us an aggregate picture of the system state. So they're there to answer Boolean questions, very often, about how the system is doing. Is our error rate above or below 10%? Is the error rate for requests that come to this specific API on this endpoint above or below a certain threshold? Are we breaking our SLAs on latency? And metrics are supposed to be as real time as possible, so to tell you what the system state is now, in this very, very moment. What do metrics look like? So how do you actually get those plots that we just saw? Metrics generally look somewhat like this. So you have a metric name, which in this case is http_request_duration_seconds_bucket. So we're talking about the duration of HTTP requests, and we're looking at a histogram, so we're looking at buckets of requests at different thresholds of latency. On this metric, we have a set of labels that we can use to slice the metric's value. So we have the endpoint, so the path being hit; the HTTP method, so GET, POST, PUT, PATCH, whatever; the status code we returned, so in this case 404; and then you have the bucket that we're looking at, so five milliseconds, 10 milliseconds, 25 and so on and so forth, and then the number of requests falling inside that bucket. Now this was a super fast 404, so all one thousand six hundred and one fell beneath the five milliseconds; generally it's going to be a little bit more varied. This is basically a time series, a time series with a variety of values you can slice and dice. These time series are produced by the application and then are aggregated by Prometheus, at least in our specific setup. So Prometheus hits the /metrics endpoint on all the copies of an application, it could be another endpoint but that's generally the default, and aggregates all these metrics in an indexed system that allows you to perform queries against them. One way to perform queries is to do alerts. So in Alertmanager, you define a variety of queries which evaluate to a Boolean. So, as we said before, is the error rate, so the number of 500s, above or below 10% for 15 minutes? If yes, then fire PagerDuty to get an on-call engineer to look at the system because something is wrong. Otherwise you can use Grafana if you just want to do some pretty visualization. So if we go back to the slide we saw before, which is this one, this is Grafana. So we're looking at just Prometheus queries visualized. This is very, very useful for an on-call engineer or an operations team to actually understand what is going on. Now, how do you actually get metrics? So how do you get your API to produce metrics? Donate Direct was developed using actix-web, for a variety of reasons; I wrote a blog post about that a couple of weeks ago which people can read. It's very, very easy. So there's a package on crates.io called actix-web-prom, so actix-web Prometheus. You just plug the middleware inside the application with that .wrap(prometheus.clone()) line. The middleware takes some very, very basic configuration parameters, so a prefix for the metrics and the endpoint you want to use, and then with that setup you're just going to expose /metrics.
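Here is a minimal sketch of that setup with actix-web and actix-web-prom. The constructor API has changed across actix-web-prom releases (older versions exposed a PrometheusMetrics constructor directly), so treat the exact calls as indicative rather than exact; the namespace, route and port are just example values.

```rust
use actix_web::{web, App, HttpResponse, HttpServer};
use actix_web_prom::PrometheusMetricsBuilder;

async fn health() -> HttpResponse {
    HttpResponse::Ok().finish()
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Middleware that records request counts and duration histograms,
    // labelled by endpoint, method and status, and serves them on GET /metrics.
    let prometheus = PrometheusMetricsBuilder::new("api")
        .endpoint("/metrics")
        .build()
        .expect("failed to build the Prometheus middleware");

    HttpServer::new(move || {
        App::new()
            .wrap(prometheus.clone())
            .route("/health", web::get().to(health))
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await
}
```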
Now you might want to customize it for your specific application, because you might need to collect metrics which are non-standard, you might have specific naming conventions and so on and so forth. That's where actix-web-prom being a single-file type of crate helps: you can go there, use it as some kind of a blueprint, and adapt it to do what you need to do. So metrics: useful, very easy to collect, just plug and play using actix. Logging. As we saw, metrics are about what is happening in the system in the aggregate at this very, very moment. So low latency, fairly aggregated type of data. Logs are instead useful to answer the question, what is happening to this specific request, such as what happened to users who tried to do a payment from, let's say, HSBC to Barclays in the UK between 5pm and 6pm on the 27th of July. There's no way, unless I'm very lucky and the labels on the metrics are exactly the ones I need, and generally they aren't, because labels are supposed to be low-cardinality on all of the metrics, there's no way I can generally answer this type of question. I absolutely cannot answer it at the single-request type of granularity, because those are all aggregated in metrics. Logs instead can provide us that level of drill down that can allow us to slice and dice to get that precise level of information. That is key to actually debug what is going on in a distributed system, especially when things go wrong in a way which you haven't actually accounted for, the so-called unknown unknowns, or emergent behavior in distributed systems. Let's look at what it looks like to log in Rust. The classic approach, the standard Rust approach to logging, is to use the log crate. The log crate is built using a facade pattern. The log crate provides you a set of macros, debug, trace, info, warn, and error, to actually instrument your application. This is an example taken straight from the log crate documentation, or close to it, I think it was. You enter into the shave-the-yak function, which takes a yak, a mutable reference to a yak. You emit a trace level statement, you announce to the world we are commencing the yak shaving. It's trace level, so it's at a very, very low logging level; in most cases, it's going to be filtered out. Then you loop and try to acquire a razor. If you get a razor, info level log statement, razor located, with the Display implementation of the razor. You shave the yak, you break from the loop, and you exit the function. If instead you fail to find the razor, then you emit a warning saying I was unable to locate the razor, and you're going to retry. Now, facade means that you have no idea what is actually going to consume these log statements. You just instrument your code, and then, generally at the entry point of your binary, you are going to install a logger implementation, an actual implementation that takes this log data and then does something with it. Something is generally shipping it some place. If you use the simplest possible logger, which is generally env_logger, you're going to see something like this. You log to the console, standard out. In this specific execution which I made, you get unable to locate a razor three times, so we're looping three times, and then you actually locate the razor. You have the log message, the name of the module, and then you have a timestamp and the log level.
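For reference, this is the yak-shaving example roughly as it appears in the log crate documentation, padded with just enough stub types to compile on its own; Yak and find_a_razor are placeholders, and env_logger stands in as the simplest logger implementation.

```rust
use log::{info, trace, warn};

struct Yak;
impl Yak {
    fn shave(&mut self, _razor: &str) {}
}

// Stub that always succeeds; in the original example this lookup could fail.
fn find_a_razor() -> Result<&'static str, &'static str> {
    Ok("razor-1")
}

pub fn shave_the_yak(yak: &mut Yak) {
    trace!("Commencing yak shaving");
    loop {
        match find_a_razor() {
            Ok(razor) => {
                info!("Razor located: {}", razor);
                yak.shave(razor);
                break;
            }
            Err(err) => {
                warn!("Unable to locate a razor: {}, retrying", err);
            }
        }
    }
}

fn main() {
    // The macros above only talk to the log facade; env_logger is the concrete
    // implementation installed at the binary's entry point.
    env_logger::init();
    shave_the_yak(&mut Yak);
}
```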
In a backend system, especially in a distributed backend system, you have applications running on multiple machines. These applications are generally some kind of server, either a web server or a queue consumer or something like that. They're executing many, many requests concurrently. You want to be able, at a certain point, generally later, so when you're not really there, to interrogate the logs to say, what happened to request XYZ, which was about this type of user, as we discussed before. The only way you can do that with plain logging is using text search. Plain text is not easy to search. First of all, it's expensive. It cannot be indexed and requires a lot of knowledge about how the logs are structured. If you want to do anything that is non-trivial, so anything which is not telling me if this substring is in the log, you end up writing regexes. Writing regexes means that you are coupled to the implementation of the logging inside the application, which makes it very, very complicated for operators and support people to actually go and use these logs. All the pressure of operating the software ends up on the shoulders of the developers, which we want them to be there, but we don't want them to be the only ones who can answer questions about the system. A much better way is to have structured logs. Structured logs in the sense that to each log line, we associate a context. That context needs to be searchable, which in very informal terms means that the context is in some machine-readable format that somebody can parse and index, allowing people to filter on it and perform queries. Let's have a look at how we could do structured logging. Similar example, not fully identical. This time, the debug macro is coming from the slog crate, slog standing for structured logging. This one as well is a well-established crate, it's been there for quite some time. It allows you to specify the log message, so very similarly to what we were doing before, and then allows you to specify, using the o! macro, some key-value pairs to be attached to your logs. Now, slog has been for a very long time the only way to do structured logging in Rust. Recently, if I'm not mistaken, the log crate has added a feature to add key-value pairs to log statements. Once again, as far as I've seen, at least up to a month ago, almost none of the logger implementations actually support key-value-pair logging. So you're once again down to slog for doing structured logging. What are we trying to do here? What we're trying to do here is what we generally want to do in distributed applications. I want to know when something is beginning, I want to do some stuff, which might be composed of some subroutines, so this subunit-of-work function, that might emit their own logs, so this event log, so cool. Then you want to know when that thing has ended. Then, given that we're shaving the yak on behalf of somebody else, so we're taking this user ID, I want the user ID to be associated to every log line, and I also want to capture how long the whole operation took, so I want to capture that elapsed_milliseconds at the bottom. Once again, we plug into it the most basic type of formatter, in this case a Bunyan formatter logging to standard out, and we get exactly this. You see all the log statements, you see all the Bunyan metadata, and everything is a JSON. That means that I can parse these as JSONs, and I can filter on user ID very, very fast, very, very easily. I can push all these things somewhere else, which is going to index them and search them. We're going to see that in a few seconds.
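Before that, here is a rough sketch of the slog-style structured logging just described, assuming the slog, slog-json and slog-async crates; the user id and key-value pairs are example values, not the actual Donate Direct code.

```rust
use slog::{debug, o, Drain};

fn main() {
    // A JSON drain writing structured records to standard out.
    let drain = slog_json::Json::default(std::io::stdout()).fuse();
    let drain = slog_async::Async::new(drain).build().fuse();

    // Context attached to the logger itself via o!: every statement made
    // through this logger carries user_id automatically.
    let log = slog::Logger::root(drain, o!("user_id" => 42));

    // Inline key-value pairs attached to a single statement.
    debug!(log, "commencing yak shaving"; "attempt" => 1);
}
```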
Now, let's go back to the code. You may agree with me that this is very verbose. It's very, very noisy. You have a lot of log statements which are interleaved with the application code. You don't even see the application code here, but this function is really looking a little bit hairy. This is because, generally speaking, for most use cases, at least the ones I encountered in the wild, having orphan log events is generally the wrong abstraction. You reason about tasks. Tasks have a start time, they do something, and then they end. What you really want to use as your primary building block when you're doing some kind of instrumentation for structured logging is a span. A span represents exactly a unit of work done in the system. Let's look at the same function using spans. We are moving away from slog. We're leaving slog behind for the time being, and we're moving on to the tracing crate. The tracing crate is part of the Tokio project. I think it's not an overstatement to say it's one of the most impactful crates, for what we do on a daily basis, that has been released in the past year or so. It provides an extremely high-quality implementation, and we're going to see how to use it in a second. So, a span. We enter into the function and we create a span. The debug level, so we set the level as if we were doing logging. We tell what's the name of the span, yak_shave, and we associate with the span the user ID. Now, the tracing crate uses a guard pattern. So when you call the .enter() method, then you're going to enter inside the span. Everything that happens between the .enter() method invocation and the dropping point of the _enter guard is going to happen in the context of the same span, which means there's no need for us to add once again the user ID to the debug statement. There's also no need for us to do anything weird about the subunit of work. The subunit of work can ignore the fact that it's part of the yak_shave function. It can just go on to do its thing, and it will be able to emit log statements. And if those were log statements that attach context, then we can also capture the context from the parent function. And all of this effort happens pretty much transparently. What this means is that if we really want to shrink it, so if we really want to go to the essence of it, we can also remove those two lines of boilerplate, the span-equals line and the enter call. There is just a tracing::instrument proc macro, which is basically going to desugar to exactly the same thing and leaves us with this function. Now, what's that? That's like one, two, three, four, five lines, considering there's a closing bracket, so four or five, depending on how you count it. If you go and compare that to our slog version of this, you can clearly see how the diagnostic instrumentation is now much less intrusive. It's, as we were saying before, much more convenient. It's much easier for developers to slap #[tracing::instrument] on top of a function, and so it allows them to build very, very domain-oriented trace spans and do that consistently, because that does not involve writing a lot of code, it does not involve polluting their function code, and it's generally transparent to the application.
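A minimal sketch of both forms described above, the explicit span with a guard and the #[instrument] attribute; the span name, field names and user id are example values.

```rust
use tracing::{debug_span, info, instrument};

// Explicit version: create a span and hold the entered guard.
fn shave_yak_manual(user_id: u64) {
    let span = debug_span!("yak_shave", user_id = user_id);
    let _enter = span.enter();
    // Events emitted here are inside the span, so they carry user_id
    // without repeating it on every statement.
    info!("razor located");
    info!(elapsed_milliseconds = 12, "yak shaved");
}

// Sugar: the proc macro creates an equivalent span from the function name
// and records the function arguments as fields.
#[instrument(level = "debug")]
fn shave_yak(user_id: u64) {
    info!("razor located");
}

fn main() {
    // A plain stdout subscriber, just to make the example observable.
    tracing_subscriber::fmt::init();
    shave_yak_manual(42);
    shave_yak(42);
}
```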
Now, tracing, just like log and just like slog, is a facade pattern. So what you do is instrument your application using those macros, and then you have subscribers. Subscribers are the ones that actually receive this tracing data and can do something with it. So tracing can be used for structured logging. I think at this point in time it is the best crate if you really want to do structured logging. So you can log all those spans to standard out, or to a file, or whatever you think is useful to you. At the same time, you are using spans, and spans are exactly the concept used by distributed tracing, as we'll see in a second. So with one type of instrumentation, tracing, you're able to get, at the same time, writing no extra code, both structured logging and distributed tracing. And this is extremely powerful and also extremely consistent, because you're going to get the same spans across the two types of telemetry data. So, telemetry data: how do we actually process logs and how do we actually process traces? For logs, we take tracing, then we have a subscriber that prints logs to standard out in Bunyan format, the tracing-bunyan-formatter, which I wrote for Donate Direct and is on crates.io if you want to use it. Then standard out is tailed by Vector. Vector is another Rust log collector that we use to get logs from standard out to AWS Kinesis, which then goes into Elasticsearch, which we then search using Kibana. So there's a bunch of hops, but the logs end up in Kibana. And Kibana is fairly good to search logs, and you don't need to be a developer to search logs in Kibana. So you go there, you have all the possible fields of your logs on the left, and you can filter on either existence, non-existence, or a specific value, or do regexes if you need to. You can build views and graphs. And in general, it's very, very friendly. And we use Kibana at all levels inside the company, so from the application developers to the product managers to the support engineers to the first level of support to customer success managers. That's what allows us to own, in a distributed fashion, the operation of a product. Distributed tracing is more or less the same thing, just from a different perspective. So when you talk about logs, it's generally about a single application. So you have this application that is there and is doing stuff and is emitting logs. Now, in a microservice architecture, as the one we have here at TrueLayer in many places at this point, to serve a single request which is hitting the edge of your cluster, that request generally flows through one, two, three, four, five, six different microservices, which cooperate to fulfill the job. Now, when a customer comes to you saying, I tried to do X and it didn't work, you need to understand where exactly the problem is. So you need to be able to trace that request across the different microservices. And it should be easy to do so. The way you do this, one of the possible ways, is by adhering to the Jaeger tracing format, or rather the OpenTracing format, which is now being evolved, being merged into the OpenTelemetry format. So on the tracing crate, you have a tracing-opentelemetry subscriber, which is maintained in the same repository. You can use that, and we do, to ship traces into Jaeger. Jaeger is once again backed by Elasticsearch, so it's more or less the same infrastructure, and it allows you to have this kind of view. So each of the units of work appears as a bar, you can track how long each of those takes, and you can see all the different services that a single request from the outside flows through. That is very, very powerful to understand when something went wrong. So you're able to correlate that request across everything that is happening inside the cluster.
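Installing the subscriber described above takes a few lines at the application entry point. This sketch follows the tracing-bunyan-formatter documentation rather than TrueLayer's actual code, and the application name is a placeholder; the JSON lines it prints to standard out are what a collector such as Vector would tail and ship onwards.

```rust
use tracing_bunyan_formatter::{BunyanFormattingLayer, JsonStorageLayer};
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::Registry;

fn main() {
    // One Bunyan-formatted JSON record per span/event, written to stdout.
    let formatting_layer = BunyanFormattingLayer::new("donate-direct".into(), std::io::stdout);
    let subscriber = Registry::default()
        .with(JsonStorageLayer)
        .with(formatting_layer);
    tracing::subscriber::set_global_default(subscriber)
        .expect("failed to install the tracing subscriber");

    tracing::info!("telemetry pipeline initialised");
}
```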
So, one final recap. As we said, production environments are extremely complex. And if you don't have any way to observe what is happening, and that generally means emitting some kind of telemetry data, then your production environment is a ticking bomb. It might be alive today, but it's going to go off at a certain point in the future, and you're not going to like it. In order to know what is going on, you need to add diagnostic instrumentation. But for that to be there consistently, it needs to be easy to add that instrumentation. And making it easy and convenient is your number one priority as an operator, and in general as an architect of a platform. Now, different types of telemetry data give us different types of information. So metrics are great to alert and monitor on system state, while logs, especially structured logging with attached context, are amazing to try to detect and triage failure modes that you might not have prevented when you designed the system. To get very high-quality structured logs, spans are generally the type of structure that you want to use. And no matter how good your logging is at a single-service level, you need to be able to trace a request across the different services. Whether you do that with distributed tracing or just a correlation ID that flows through, you need to have that somewhere. And overall, I guess the lesson learned is that we were able to get a Rust application into production in less than a couple of weeks with top-notch observability and telemetry data. And that generally means that the answer to the question in the title, which generally, if you're doing a talk with a question as a title, should be no, like Steve said on the first day, is in this case yes. So are we observable yet? Absolutely. Tracing has been a step-change improvement in the quality of the Rust ecosystem when it comes to telemetry, and you can definitely ship high-quality applications with very, very granular telemetry data. Now, Donate Direct was an experiment in using Rust in a live production application, and we liked it. So, in one way or another, probably the CTO was not fully sober when he said that, but we chose to bet on Rust to do some new core projects, in particular writing a core banking application, which in a nutshell means creating accounts programmatically and moving money in and out programmatically, once again. We're assembling a team, we already hired a bunch, and we're still looking for one more Rust backend engineer. So if what we do here sounds interesting to you, just reach out. That's the job opening there, that's my Twitter handle, there are many ways to get in touch. And with that, I think this is the end of the talk, and I'd be more than happy to take some questions. Okay, we have one. So SolaceWaffle from Twitch is asking, does this telemetry setup integrate well with distributed non-Rust applications? Well, it depends on what we mean by integrating well. In our specific use case, we do have some structures that we expect applications to follow in the type of telemetry data that they produce. So for example, we expect metrics exposed by APIs to have certain formats or certain naming conventions, and we expect our logs to follow the canonical log pattern, so generally emit one log line with a lot of attached metadata that we then use to do a variety of things.
So generally speaking, there needs to be a little bit of coordination because of course, if somebody goes with the dot net core, the full format, and I go with the rust, the full format, and you go with the Python, the full format is very unlikely that they're going to really match up really nicely. But you can use architectural decision records to just say, these are we do logs. And then everybody implements in such a way that they can integrate. So it needs a little bit of coordination. Okay. So another one from Twitch. But Chris is asking, the trace create looks very powerful. Are there any features that you wish you did? There don't have to be easy features. I'd just like to hear your thoughts on the design space more. Well, yeah, the tracing credit is extremely powerful. I did raise some issues for some of the things that kind of surprised me. So some of those are made that way into the tracing credit itself, but some bug fix on a core dump. That was nasty. But generally speaking, has been amazing. Things that I wish would be different. So at the moment, the tracing credit has a lot of focus on making telemetry fast or in general, reducing the overhead of doing certain types of operations. For example, one thing is traces. So the metadata you collect about the spawn is statically determined at the moment of spawn creation. And that is great because then everything is much faster and consumes as memory. But sometimes for the way certain applications that are architected, you would like to be able to add additional metadata dynamically, even if that means allocating or doing stuff that might not be what you want to do in an off loop, maybe for that application and its performance profile works fairly well. The thing that we found was a little bit of a slippery slope was the instrument macro, which is very, very convenient because it captures name of the function, but it captures by default all the arguments of the function. And that can somewhat be tricky if you're managing secrets. So if you're managing things that you don't want to log. And so it's very easy to write a function today with the instrument macro up there. And somebody else comes two weeks from now adds another argument, which is a GWT token. And then the GWT token ends up in Kibana. So it would be nice to have the possibility or different markers or whatever to have the NIO approach so that I need to explicitly allow certain fields to be logged, which for the type of application that we do, we make us sleep better. But generally speaking, getting is great. And I think it's going to get more and more useful as different subscribers implementation coming to play. So not much to say there. Okay, there's another question coming from YouTube. So Jeff Varchesky, I hope I pronounced that even remotely correctly. How do you configure tracing to send its data to the various backends? Are there docs? That is also support cloud distributed tracing backends like AWS X-ray. So there are docs, absolutely. So if you go on the tracing subscriber crate, there are very detailed docs on how to add different subscribers to the tracing pipeline. At the moment, of course, there are some type of tracing subscribers implemented. But I doubt there are tracing subscribers for all the possible things. If you specifically you want to shift tracing data to X-ray, I think the work that has been done in open telemetry for Rust means that you probably have an implementation of the standard. 
And you might have to write your own subscriber, but using the result of that work it should be not too complicated to actually ship it to X-Ray. But I haven't used it personally, so I don't know if it's out there already. Okay. Once again from SolaceWaffle: do you have an approach to avoid handling fields that contain personally identifiable information in telemetry data? Well, the approach at this point is trying not to put them there, which, as I said before when responding to the other question, can sometimes be tricky because of the way instrument works. So generally, we do have a detection system here at TrueLayer. What we do is we continuously scan the logs with semantic parsers that look for certain types of secrets that we know might possibly end up in logs, like JWT tokens, AWS credentials, and other types of secrets that we don't really want people to have. But at the application level, apart from switching instrument from being allow-all to deny-all, we don't necessarily have any specific approach. Okay. There's another question, once again on YouTube from Jeff. General Rust question: what is your preferred strategy for dealing with error handling? Okay, interesting. In most of the applications we're writing at the moment, we use a combination of thiserror and anyhow. We use thiserror for all the places where we need to handle the errors: it's very nice to get structured enums that you can match on and then do different things depending on the variant. And then when we just want to report errors, so we just want to have something that we log or return to people as a response, then we use anyhow, and we generally use them in conjunction. So you might have an error enum which is using thiserror to get the Error implementation, and then the different variants are actually wrapping an anyhow error. What we're starting to do recently, once again leveraging the tracing crate, is using tracing-error, so capturing span traces in our errors, so that when we get logs they're actually very detailed about what happened, and that allows us to debug faster. Okay. I guess that means it's all for today in terms of questions. I've been asked by the friends of Rusty Days to pick a winner for a Manning book promo code, I assume. In terms of best questions, that's going to be SolaceWaffle from Twitch. So I think you need to stay online for them to reach out to you. It seems there's one more question though. Once again by you, so that doesn't change the winner anyway. What was your thought process for deciding to build this project in Rust? Were there any attributes that made this project a good fit for the first production Rust application at your company? In terms of the application itself, nothing specifically: we're talking of basically a client, an API that we expose publicly which is going to power a UI. So it doesn't necessarily need to be the fastest, it doesn't necessarily need all the guarantees that Rust gives you. We could have done that in any language. But we were looking to use Rust for other types of projects, so for mission-critical projects, in particular to leverage Rust's very strong type system combined with its very predictable performance profile. But it's somewhat of a big leap to adopt a new language when writing a new mission-critical project, only to find out when you actually release it that you might have wasted a lot of time. So these side projects are very nice incremental steps to de-risk the technology. So for example, look at the observability situation and say, is this actually ready for what we need to do? And look at all the things that we need in an API, and can we actually write APIs with this, and so on and so forth. So it was very much a de-risking operation. And as we de-risked all of these aspects, then it became possible for us to say, okay, now we can confidently bet on it for these other new products that we want to do. And that's a huge project, and it fits Rust's profile for a variety of reasons. And now we know we're not risking too much. We're still taking some risk, but it's not as big of a risk as passing straight from a small side project to a mission-critical product.
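Going back to the error-handling answer above, here is a minimal sketch of the thiserror-plus-anyhow combination; the type and variant names are hypothetical, not TrueLayer's actual code. The idea is structured variants you can match on, plus a transparent catch-all that wraps an anyhow::Error for everything you only want to report.

```rust
use thiserror::Error;

#[derive(Debug, Error)]
pub enum PaymentError {
    // A variant callers are expected to match on and handle specifically.
    #[error("invalid amount: {0}")]
    InvalidAmount(i64),
    // Catch-all: Display and source() delegate to the wrapped anyhow error.
    #[error(transparent)]
    Unexpected(#[from] anyhow::Error),
}

fn validate(amount: i64) -> Result<(), PaymentError> {
    if amount <= 0 {
        return Err(PaymentError::InvalidAmount(amount));
    }
    Ok(())
}

fn main() {
    match validate(-5) {
        Err(PaymentError::InvalidAmount(a)) => eprintln!("rejecting amount {}", a),
        Err(PaymentError::Unexpected(e)) => eprintln!("unexpected failure: {:?}", e),
        Ok(()) => println!("payment accepted"),
    }
}
```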
Okay, it's good night time. So thanks a lot for tuning in for Rusty Days, and stay for the next talk from Tim McNamara on unsafe code. Have a good evening. Bye bye.
|
Is Rust ready for mainstream usage in backend development? There is a lot of buzz around web frameworks while many other (critical!) Day 2 concerns do not get nearly as much attention. We will discuss observability: do the tools currently available in the Rust ecosystem cover most of your telemetry needs? I will walk you through our journey here at TrueLayer when we built our first production backend system in Rust, Donate Direct. We will be touching on the state of Rust tooling for logging, metrics and distributed tracing.
|
10.5446/52124 (DOI)
|
Hi, my name is Robert Haas and this is my talk on avoiding, detecting and recovering from corruption for PGCon 2020. As many of you know, I am a long-time contributor to PostgreSQL and a PostgreSQL committer. I work at EnterpriseDB, where I've worked for about the last 10 years, and I'm currently the chief database scientist, and today I'm going to talk about avoiding, detecting and recovering from database corruption. So just to give you a brief overview of how the talk is going to go, I'm going to start by trying to give a definition of corruption. I'm going to talk a little bit about the three main causes of corruption. I'm going to talk about some best practices for avoiding corruption and for detecting corruption. Then I'm going to talk about some of the possible signs that you have database corruption, and finally I'm going to say a few words about how to recover from corruption. This is only a 45 minute talk and there's a lot that could be said about this topic, so I'm really not going to have time to go into as much detail as somebody might hope. I know also that there are probably a lot of people in this audience who have had a lot of experience with this and may know more about some parts of it than I do, or maybe even all parts of it. So I'm certainly happy to hear other people's experiences and suggestions. This is just what I've experienced in my own work, mostly at EnterpriseDB and a little bit from experiences that I had before joining EnterpriseDB. Because EnterpriseDB is a PostgreSQL support provider, people tend to come to us when they have serious problems, and database corruption is generally a serious problem. So the good thing about that is I actually reasonably often get a chance to see what's going on in cases where someone is having a problem with database corruption, and so even though any individual PostgreSQL user probably isn't that likely to have corruption, many of the people whose cases I get to hear about actually are people who have had a database corruption problem of one kind or another, which is what inspired me to write this talk. I think it's a little bit difficult to give a great definition of database corruption, but I think I did my best to come up with something. I think there are two main points that I want to emphasize here, and the first one is that database corruption really has to do with expectations. If we store the number four into the database and then later we try to retrieve the data from the database, and instead of getting back the number four we get back the number five or 72 or 19 million, then for some reason our expectation has not been met. What we thought should happen and what actually did happen were different, and there could be a variety of reasons why that happens. One is that our expectation might be mistaken. We may be misunderstanding how the database is intended to function. A second possibility is that the database has a bug, and a third possibility is that there is database corruption, and I think the way that we can distinguish between these scenarios is by saying that for it to be database corruption there has to have been a modification of the contents of the database. So that's why I defined it here by saying that the database contents are altered in such a way that queries behave in an unexpected manner relative to the SQL statements that have previously been executed.
If we insert some data and then we try to retrieve it back, and there's nothing fancy going on like a trigger that should have modified the data or concurrent activity which should have modified the data, then we really ought to get back the same data that we put into the database, and if we don't, that's corruption; or if we get an error, that's corruption; or if the system outright crashes when it tries to read the data, that's corruption. That's all assuming the problem is with the database contents rather than, you know, for example a bug in the database. Generally corruption can happen in three ways. First of all, you might have bad hardware. I think by far the most common example of this is a bad disk, because probably many of us have experienced the scenario where you put some data on the disk and then later when you go to try to access the data you don't get back what you stored, or you get back errors. That's pretty common. Occasionally you also see bad memory, where the memory that is used to store the data on a temporary basis on its way from your keyboard to the disk actually changes the data, which it's not supposed to do. It's supposed to remember the same data that you gave it, and so you end up with something on disk that is not what you expected to end up with on disk, and that is also database corruption. The second sort of way that corruption can happen is you can have bad software. All of the software that's involved in making PostgreSQL work the way it's supposed to is very complicated. You have not only PostgreSQL itself but you have a file system, an operating system kernel, a backup tool perhaps and maybe other things, and all of these are complicated pieces of software, and even a very simple piece of software probably has some bugs, and a more complicated piece of software is even more likely to have bugs. That's definitely a possible cause of database corruption and that, like bad disks, is something that happens pretty regularly. Finally there's user error, which I have found to be a very common cause of corrupted databases, and one of the most common causes of this is faulty backup and recovery procedures: people who develop ways of backing up and restoring PostgreSQL databases that are just not safe, that are not what is recommended by the documentation and that are not a good idea. In some ways this is probably partly the fault of our documentation, which probably is not as good as it could be in terms of telling you what you really need to do in order to end up with good backups and good restores, but there's also the problem that people don't necessarily read that documentation, they don't necessarily follow what that documentation says. In any case, very broadly speaking those are the possible causes of corruption: hardware, software and some kind of user error. How do we detect corruption and how do we avoid corruption? I've divided some best practices that I recommend into four areas: backup and restore, configuration, storage and finally administration, and I'll be going through each one of those areas and saying a little bit about it. First of all, backup and restore. This is sort of the short version of the same backup talk that everyone gives when they talk about backup. The most important thing about backups is that you have to take backups, and you have to take backups regularly.
If you do take a backup and then later you have database corruption, you can restore from your backup and get your data back and avoid having any kind of permanent data loss. In addition to that, the very fact that you can take a backup means that all of your data is still readable. Something is still there that can be read. It might not be the right thing, it might not be the content that you were hoping to have, but you have some content, you have something in your database, and that's a good thing to know. You should also make sure that you can restore your backups. If you have backups that you've taken but you can't actually restore from those backups, they're not very useful to you in the end. You should not only go through the act of restoring them but make sure that the restored backups actually look okay: that they contain the data that you expect them to contain, that the data is accessible, and that everything generally looks like it's okay. Again, if that should prove not to be the case, then your backup would not be very useful, and all of the effort that went into setting up your backup tool or your backup regime and all the storage space that you used for those backups would really be for nothing. In order to do all of this well, I think a really good idea is to use a good professionally written backup tool rather than a homegrown script. It's been my experience that many PostgreSQL users actually do backups by looking at the documentation, following it more or less well, and writing a script themselves that does the things that are listed in the documentation. That's not actually a very good idea in my experience, because those scripts tend to be very simple and they tend to contain mistakes. Sometimes they might be subtle mistakes, like not fsyncing files when you really should, so that a concurrent operating system crash can cause a problem, but sometimes they're really obvious mistakes. Many people seem to write homegrown backup scripts that contain no error checks whatsoever, or that do terribly unsafe things like removing the backup label file for no reason, and those kinds of mistakes will definitely end in database corruption. If you use a professionally written backup tool, obviously that's no guarantee that there won't be bugs, because as I said before pretty much all complex software contains bugs, and backup tools definitely fall into the category of complex software. But hopefully at least the person who wrote that software is more knowledgeable than you, or as knowledgeable as you at least, and hopefully also, because that software is an important part of their business, they are going to be motivated to find those bugs and fix them. They're going to presumably have multiple customers who are using that same backup tool, so it's more likely to be thoroughly debugged, and I think your chances are just better than if you write something yourself. Moving on to configuration, there are basically three settings in the PostgreSQL configuration file which you need to get right to avoid serious risks of data corruption, and two of them are pretty easy. Fsync and full_page_writes: both of them need to be turned on. There are use cases for turning each of them off, but they're pretty narrow, and I think it's actually really hard to get right and really easy to mess up. If you have fsync turned off and you have an operating system level crash, you are very likely to end up with a corrupted database. 
Basically the only time you won't end up with a corrupted database in that situation is if the system had been idle for a considerable period of time before the crash. Most people turn Fsync off temporarily during their initial data loading and then they don't actually turn it back on or maybe they modify PostgreSQL.com so they think it's turned back on but they actually didn't restart the database server and so it's not really turned back on. Similarly with full page writes, some people who are running a copy on write file system for example may think, well my copy on write file system is going to prevent torn pages. It will either write the entire page or none of the page so I don't need the protection that full page writes is documented to offer. In my experience that's generally not the case and one of the big reasons is that PostgreSQL uses an 8k block size whereas the Linux kernel page cache uses a 4k block size. No matter what your file system does and no matter what your disks do you are not going to get writes that are atomic and an increment larger than 4k and your PostgreSQL block size is going to be 8k unless you are using a very unusual configuration that we really haven't tested very much and that isn't widely used by anybody. So in practice you just need these things to be on. The wall sync method configuration parameter is also pretty important and it's a little harder to set properly because unlike the previous two examples the default isn't necessarily safe and there's a lot more choices than just on and off. On Mac OS X the default value is F sync but in order to make it safe you actually need to use the F sync write through method and on Windows I don't actually have much personal experience on Windows but what I have been told is that you need to use either F sync or F sync underscore write through or disable write caching on your drive. Generally a good practice here is to run the PG test F sync utility which is included with PostgreSQL, with every PostgreSQL distribution and see whether the method that you've chosen is much much faster than some other method. If it's a few percent faster maybe even ten percent faster that's probably fine if it's ten or a hundred or a thousand times faster than some other method. It's likely that the wall sync method that you've chosen doesn't really work on your platform and doesn't really guarantee that the data is durably on disk and then there is a risk of database corruption in the event of an operating system crash. This is a kind of a disappointing area like you would really hope that hardware and software would be designed in such a way that if you said I need this data to be durably on disk it would definitely end up durably on disk but in practice that's not really how things work so you really have to be careful about this and make sure that you've got a safe setting. Another thing that you can do apart from PostgreSQL.conf is run with checksums enabled. These are an optional feature and there is some small performance regression it's probably just a couple percent in many workloads but there are times when it can be ten percent or even more depending on your workload so if you're concerned about the performance impact you should test. If your primary concern is corruption detection then running with checksums enabled is a great idea. 
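To make the configuration points above concrete, here is a small SQL sketch. The values shown are the ones you normally want; wal_sync_method in particular is platform-dependent, so compare candidates with the pg_test_fsync command-line tool rather than copying a value from here.

  -- check the durability-related settings
  SHOW fsync;               -- should be 'on' outside of very narrow use cases
  SHOW full_page_writes;    -- should be 'on' as well
  SHOW wal_sync_method;     -- platform-dependent; verify with pg_test_fsync

  -- change them cluster-wide without editing postgresql.conf by hand
  ALTER SYSTEM SET fsync = on;
  ALTER SYSTEM SET full_page_writes = on;
  SELECT pg_reload_conf();  -- these settings take effect on configuration reload

  -- data checksums (discussed next) are a property of the whole cluster
  SHOW data_checksums;      -- 'on' if the cluster was created with checksums enabled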
It can't prevent your database from becoming corrupted but it does make it a lot more likely that you will notice that you have corruption because it means that every time we read a checksum page from the disk the checksum is going to be verified and if it doesn't verify you will get an error and if you are paying attention to your logs you can notice that error and be aware that something has gone wrong. So to use this option you can run initdb with the dash k flag when you first create your database cluster or starting in version 12 you can shut down the database and use pg checksums with the dash e flag to enable checksums and then start the database up again. There is not currently an option for enabling checksums while the database is running. With regard to storage the best advice that I can give is that you want your storage stack to be as simple as possible and I think this is something that's becoming increasingly challenging for many people and I'm not quite sure what to do about that but nevertheless simpler things seem to be more reliable and have fewer problems than more complicated things. For example if you use local storage rather than a network file system there is just a lot fewer components involved. You have one server involved instead of two and you have a drive controller involved and you have a drive and a file system but you don't have any network interface cards involved for example and you don't have a switch or a router that's routing packets between two machines and could potentially go wrong and there's generally less to configure as well so if you can use simple local storage my experience has been that that is a lot more likely to be reliable and to not cause you any problems. It may certainly be possible to set up things like NFS or iSketzy reliably but I've seen a lot of setups that seem to be unreliable and so I've become quite skeptical about those. I think one of the issues is that many people don't really consider very carefully the options that they use for things like NFS setups and there certainly seem to be things that can be done on the NFS side to improve your chances. For example NFS v4 I think it tends to be better than earlier versions of NFS. You want a hard mount for sure not a soft mount so you want to set the hard option and you want synchronous operation rather than asynchronous operation for exactly the same reasons that you want to pick a wall sync method that's actually reliable and really makes a guarantee that your data is on disk. PostgresQL really relies on the data being on disk when the system tells it that it's on disk so if it's not on disk yet PostgresQL really needs to know that and it really is very very important that the operating system doesn't tell it that the data has been written down to the physical disk until that's actually the case. In the case of NFS what that means is you need the sync option and from what I can tell it appears that you actually need to set that option in two places. You need to set it on the NFS client that is the server which is mounting the remote disk but it seems that you also need to set it in the export on the server. So the NFS server which is providing the file system as a remote file system also needs to have the sync option configured on that side in the Etsy exports file or some equivalent in order to make sure that both the client and the server are completely clear that this is supposed to be a totally synchronous mount. 
Obviously, there are performance consequences of that but the positive consequences that you might be less likely to end up with a corrupted database. I strongly recommend the use of RAID particularly RAID 10 but as with everything else it's important not to rely on it too much. The advantage of something like RAID 10 is that for every disk you have whatever its contents are supposed to be you have another disk which is expected to have the exact same contents and that means that if something goes wrong with one of your disks and it's either not accessible at all or the contents seem to have gotten scrambled you have another disk that you can go look at and maybe that one is okay and you can get your data back. So in some cases you can just rip the bad disk right out of the machine and continue using the other copy of your data and be right back in business or at least somewhat more back in business and that's a really nice place to be but of course you still need backups. It's entirely possible for both of your disks to either fail or for both of them to get corrupted in the same way. I recommend a lot of caution when choosing a file system. I think there is a lot to be said for using file systems which have a good reputation for reliability and which are very widely used. I believe that EXT4 and XFS are the only Linux file systems where I don't know of a case where somebody had a corruption problem that could be directly traced back to some behavior of the file system. I think every other file system that I've seen somebody use at least on Linux has had that problem. NFS definitely, I know I'm about to make somebody in the audience very upset but my experiences with ZFS have been pretty negative. I've seen multiple customers who had very serious problems that seemed like they could not be explained by anything other than horrible bugs in ZFS so I can't recommend that even though I know it has cool features. I think not losing your data is one of the coolest features that you can have and EXT4 and XFS both seem to do very well there. EXT3 was very poor in terms of reliability and EXT4 seems to have made a really large improvement and that's really good to see. I would recommend sticking with those. Obviously other people may have different experiences so I can just comment on what I've seen. The other thing that's really important in regard to storage is that you need to monitor both your PostgresQL logs and your operating system logs for storage related errors. After all, if you have storage related errors and you're not looking at the logs the problem is just going to get worse and worse and worse and eventually you're probably going to find out about it at a really bad time. It is really surprising how many people just kind of assume that all of this stuff is going to work and it's not going to have any problems and I guess the reason why we assume that is because we all know that computers in 2020 are pretty reliable and most of the time they do work but you really want to know if you've got a problem. If the operating system reports the problem to PostgresQL then PostgresQL is going to log an appropriate error message in the log file assuming that you have your logging set up properly and you want to notice that. If you see IO errors for example starting to show up in the PostgresQL logs that's something where you're going to want to take corrective action as quickly as possible. But there are also sometimes when nothing gets reported to PostgresQL. 
PostgresQL gets told by the operating system that the operation succeeded but actually there were signs of trouble and sometimes that's because from the point of view of the storage system nothing has actually failed yet and maybe for example the disk is relocating your data from a sector that seems to be going bad to some other part of the disk that's still good. So in a case like that there might not be an error that PostgresQL can report but there might still be something in your kernel log that tells you that that happened and that's the kind of thing that you probably want to know about. Finally in terms of best practices there's the whole topic of how you administer your database. The important thing here and I feel a little bit silly saying this is it's really important that nothing other than PostgresQL modifies your PostgresQL data files and I'm not talking about text configuration files. There are a few text configuration files in the PostgresQL data directory which of course it's perfectly fine to modify those files with a text editor. But the rest of the data files are not designed to be manually modified or removed and if you do modify them or remove them you may end up with database corruption. In fact it's quite likely. We see people for example look at PGXLog or PGCLog renamed in newer releases to PGWall and PGXHack and they say well there are a lot of files in here and some of them are old so I'm going to remove the old ones and that typically ends very badly for those customers or those users because PostgresQL intends that it should manage those files and it should decide to remove them and if you remove them before it thinks that they're ready to be removed then you end up with a lot of problems. I gave a version of this same talk at a PostgresQL conference in India back in February and many people came up to me afterward and asked questions and one of the most common questions was essentially hey when you said that we shouldn't modify the data files in any way did you really mean it because here's what I'm doing and all of those people then went on to describe things that were not very safe and I later wrote a blog post which I've got a link to it here on the slide and there's a few more comments on the blog post proposing other things that people think that maybe they should be doing in terms of manual modification of the database directory and generally that's a really bad idea. It's really really easy to corrupt your data that way. The next couple of points on this slide are actually just variants on the same theme. When you think about it, antivirus software is kind of a crazy idea. Antivirus software runs around your machine and looks for files that it thinks are dangerous and sometimes it removes them otherwise known as quarantining them and sometimes it modifies them otherwise known as removing a virus from the file from its perspective and that's pretty crazy right? You've got this software that's running around looking at every file on your machine and making modifications to the files according to some algorithm that it thinks is good without having any real idea of how those files were intended to be used. Needless to say this can mess PostgreSQL up. It is a good idea if you absolutely have to run antivirus software on your PostgreSQL server to at least exclude the PostgreSQL data directory but I recommend not running it on your database server at all because it's not really going to protect you against anything, it's just going to corrupt your database. 
Sometimes that kind of software doesn't really totally respect the option saying that the PostgreSQL data directory should be excluded and it just goes and modifies things anyway which is not good. Do not remove postmaster.pid. This file is to some extent a text file. I mean it contains four or five lines of text and sometimes people feel that it's okay to remove but generally what that lets you do is get two copies of the postmaster to run at the same time on the same data directory which is incredibly bad and will almost definitely corrupt your data even if you're only reading the data with the database there still may be file modifications going on at the physical level and you really cannot afford to have two copies of the database doing at the same time, that at the same time. Also consider performing plug testing. If you have a database server and it's not in active production use and you can just run your workload against it or a simulated workload that is similar to your actual workload and then rip the plug out, start it back up again, see if everything looks okay. That's a great idea. It's a great way to find out whether you have problems. It doesn't guarantee that you don't have problems but if you do it several times and everything is okay every time, that's at least some kind of an indicator that things aren't too bad. If you do it once and everything breaks well then you've probably got a fairly serious problem. If you do get corruption what does it look like? Well typically what it looks like is you get error messages in the log files. It isn't the case that every instance of database corruption causes errors but it's pretty common. Sometimes you might just get different content out of the database but remember that PostgreSQL data files are, they have a structure to them. There's a page format, there's a tuple format, there's hidden metadata that you as a user don't see but which the database system uses for its own purposes. A lot of times if you just corrupt some random portion of a database page you might get some kind of error when you attempt to access the data that's stored in that page. Errors are pretty common result of database corruption. Unfortunately the range of errors that is possible here is very wide and there's no simple rule to determine whether a particular error that you might see is the result of database corruption or whether it's the result of something else like a bug or even the result of something that your application did. But generally what you want to be on the lookout for is errors that seem to be complaining about things that are internal to the database system rather than user facing things. Here are a few examples that I've seen working here at EnterpriseDB. The first one says that it could not access the status of some transaction and the reason that it is given why it couldn't do that is because it couldn't open some file, pg underscore xact slash zero zero zero three because there's no such file or directory. Now we don't really need to know what that file is or what it does. In fact the fact that we don't necessarily know that is a good sign that this message may be indicative of corruption because it's not complaining about something that we did. If you try to insert a row into a database which would violate a foreign key constraint and you get an error message saying hey that row would violate the foreign key constraint, there's a clear relationship between the SQL statement that you attempted to execute and the error that you got. 
And the problem is not with the database itself in that case; the problem is with the SQL statement and the way that it conflicts with the foreign key constraints that are in place. But what we have here is a message that's complaining about something that has no direct relationship with anything we did. We may know that we execute read-write transactions against the database, but we don't know what numbers the system uses to identify them internally. So, you know, transaction three million eight hundred and eighty one thousand five hundred and twenty two, that's not a number that we chose, that's a number that the system chose, and we didn't create that file, pg underscore xact slash zero zero zero three. The system decided to create that file, and it decided that that file needed to be accessed, and now it's telling us that that file is not there, so something has gone badly wrong. And all of the examples on this slide are really of that same nature. The second one is complaining about not being able to read a block that it expected to be there. Again, a block is something that's internal to the system. It's not a row, it's a block. So the fact that it isn't found must be because the system made a mistake, or someone changed things in a way that the system wasn't expecting. Failing to re-find a parent key in an index: well, we create indexes, but we don't know what parent keys are as a user. I mean, if you happen to be a developer as you're watching this video you may know exactly what this means, but a user might not, and there's no reason they should. So the fact that they're getting an error about it likely means that the database is corrupted. "Cache lookup failed" is an extremely common kind of message that you see when your system catalogs get corrupted, and so forth and so on. All of these are messages complaining about things internal to the database that the user shouldn't need to know about, but the system is now complaining about them because somehow they've gotten messed up. Sometimes you don't get an error. Those cases can be really hard to troubleshoot. There are cases where corruption causes a database operation to go into an infinite loop. For example, an index that is intended to put all of the data in order might contain a circularity, and some operation might just go around and around in a loop forever. Or you might have a crash or something like that that's caused by corruption, and those cases are pretty hard to troubleshoot, but I think they're a lot less common than these cases that just produce errors. So what happens if our database does become corrupted, and we know it's corrupted because we saw a funny error message or some other symptom of corruption, and now we want to recover our data? Well, the first thing that we really ought to do is try not to recover the data from the damaged database. Try to get it back in some other way. So for example, if we've got a master and a standby and the standby becomes corrupted, just rebuild the standby. Don't worry about the fact that it became corrupted. Don't try to fix the corruption. Just throw it out and start over. Similarly, if you've got a problem with a master and a standby and the problem is only on the master, maybe fail over to the standby, and then later you can rebuild the master at a separate time. Or maybe, even if the problem is on both the master and the standby, perhaps you have a backup that you can recover from rather than proceeding with your damaged database. 
Or another thing that's actually quite common is someone may be using PostgreSQL for reporting purposes but actually replicating the data into PostgreSQL from some other source and in that case it may be possible to just throw away the whole database cluster and rebuild the data from that external source. And that's a great option because it avoids having to repair a damaged database which is a pretty difficult and somewhat risky thing to do and you can't really be certain how well you're going to succeed in getting your data back. So you want to avoid it whenever you can. However that's not always possible. Sometimes you really have no realistic option other than trying to recover the data from the corrupted database. It may be that the corruption happened a long time ago and you didn't detect it. So by the time you do detect it, it's present on your master, it's present on your stand bias and it's present in all of your backups and if it's not coming from some other source then all of your copies are corrupted and they're all corrupted in the same way and there's really no help for it but to try to do the best that you can with the corrupted database in that situation. So what should you do if that happens? Well I'm going to tell you what I recommend, some techniques with which I've had some success but I do want to offer a disclaimer that this is something that should be done with extreme caution. You should consider hiring an expert to do it for you rather than trying to do it yourself and if you do try to do it yourself, please keep in mind that the advice I'm about to give you is based on my experiences and I hope it will be useful to you but it is not guaranteed in any way whatsoever. So if you try to do this you do it at your own risk and it's entirely possible that you may lose or further corrupt your data and it isn't also entirely possible that this advice that I'm about to give you is entirely wrong or at least wrong for your particular situation and is actually a terrible idea and I disclaim responsibility for all of that. Actually what I recommend if you try to do this is a two part approach and step one is to make sure that you have a complete backup of the database and what I mean here is a physical copy of all of your files or as many of your files as you can access. The important thing to understand here is that when you try to recover data you are going to be doing things that may make the problem worse and so if you have made a copy of all of the files before you do any of that you will at least be able to get back to the state that you were in when you first decided to try to recover your data in this way and that's good that means you can try again or at least not end up any worse off than you were. Having a copy is a really really important step. Once you've done that the second step is to try to use PGDump to back up the contents of the database and then restore those contents into a new database created by a new initDB and the reason why you want to proceed in this way is that if you just hack on the existing corrupted database until it seems to run again you may get it running and it may seem like things are okay but you may have lurking problems that you don't find out about until much later. 
So that's kind of a scary prospect and by dumping and restoring everything into a new database you may have already had some corruption that occurs but at least you know that any hidden metadata or internal structures to the database have been totally re-initialized and you're not going to have trouble with that kind of thing down the line at least not as a result of the corruption that you've already experienced. Naturally there could be future corruption events as well especially if you don't find out why the corruption that you already had happened in the first place but I think it's moving in a good direction. Sometimes this process goes smoothly this dump and restore process goes smoothly but pretty often it doesn't and there are a couple of different ways that it can go wrong. One way that it can go wrong is you can find that actually the database just doesn't start. Your corrupted database you try to start it up and you cannot start it and this can happen for a variety of reasons for example it may be that all of your wall files were lost or corrupted and you can't get them back from anywhere and without them you can't start the database. So what do you do? Well there is a tool called PG Reset Wall or PG Reset X log that can often allow a corrupted database to start and the important word in that sentence is start. It's really important to understand that PG Reset Wall does not fix the corruption in your database. In fact in some sense it makes it worse. It's just trying to hit it with a hammer hard enough that the database will start up and if you have the kind of problem that PG Reset Wall can fix it usually will fix it. It's a pretty reliable tool at what it does. It usually will make a database with this kind of problem start. There are exceptions certainly there are cases where it's not going to work but if you're missing wall files or you're missing PG Control this tool has a pretty good chance of making the database start up again and that gives you then the ability to run PGDump perhaps. If you have a database that's so badly corrupted that there's no hope of starting it whatsoever then rather than trying to proceed in the way that I mentioned before where you try to get it up and run PGDump another option is to use the PG file dump tool. This used to be maintained by Tom Lane when he was at Red Hat and is now maintained by some other folks and newer versions of it have an option that can take a raw Postgres QL data file and extract your data from it with a little help from you to tell it things about data types. So if you have a database where you're missing maybe you're missing a lot of files there's no hope of getting the thing to start up then that tool may be helpful to you. If you get the database to start the next problem that you might have is that running PGDump might fail and that's because PGDump has sanity checks inside of it and your database is corrupted so it's probably in some way insane and so it's possible that you're not going to be able to dump it. One way that you can get around this problem is just drop the stuff that you can't dump. If this is an index it really isn't costing you anything because you don't need the index in order to dump out the contents of the table or if it's a temporary table that was left behind by a system crash you could just drop it. Sometimes you can drop other objects that you don't really need maybe you have a stored procedure but you know what it did so you can just drop it and recreate it in some way by some other means. 
Sometimes you may even decide that the contents of a table are not that important and you're just going to blow the whole table away in the hopes of dumping the rest of the database. So all of these are techniques that you can try to use to make PGDump succeed. There's other things that you can do too for example manual system catalog modifications to try to fix problems that you may find there. Sometimes you can even try to just dump part of a table. If a dump of the whole table fails maybe you can use a where clause perhaps based on the hidden CTID column to extract just some of the blocks or some of the rows in the table rather than dumping out the whole thing. You can't do that part with PGDump but you can write your own select queries. Sometimes catalog corruption which is a very common cause of this problem can be effectively diagnosed by using a tool that EnterpriseDB publishes called PGCAPTEC. I designed this tool and I was involved in the development of this tool along with several of my colleagues. I find it pretty useful, your mileage may vary but it goes through and does a whole bunch of sanity checks on your system catalogs and tells you about the problems that it finds. If you fix those problems then you will probably be able to run PGDump if the problem was catalog corruption. So I think that's pretty useful. So let's suppose that you get the database to start and you manage to use PGDump or a raw select query or a copy command or something like that to get your data out. At this point you may think that you're in pretty good shape because all you need to do is restore your data but it's actually still possible to have failures on that side too. Basically the reason why this happens is because at this point you know that you don't have any problems with any hidden database stuff, you've converted everything to text format so there's nothing of the database internals that can get in your way at this point but your data can still be logically inconsistent. It might be that whatever kind of database corruption you had resulted in a foreign key violation or a unique constraint violation or something like that which the original database failed to detect as a result of the corruption but when you try to restore it into a clean database it does detect it and you get some kind of an error. And in this case a human being needs to decide what to do according to the relevant business logic. If you have a duplicate record for example it might be right to drop one of them or merge the two of them or something like that but there's no general formula a human being needs to figure it out. If you get through all that you're probably in pretty good shape. You may not have recovered all of your data but you probably have as much of it as could easily be extracted from your original database and because you dumped and restored into a new database you've gotten rid of any internal problems that the old database had hopefully and that's usually a pretty good start. And of course you always want to go back as well and try to figure out why the corruption happened in the first place otherwise you may just keep having the same problem but at least it gets you back on your feet for the moment. That's all the slides I have and that's all the time we have so thank you very much if you're listening to this talk if you listen to the whole thing that's really great and I appreciate it a lot and hopefully we'll have an opportunity to do some online questions and I'm looking forward to that. Thanks a lot. 
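As a rough sketch of the ctid-based partial extraction mentioned in the talk: the table name and block ranges below are invented, range comparisons on ctid assume a reasonably recent PostgreSQL release, and this should only ever be run against a copy of the damaged cluster.

  -- salvage readable rows block range by block range;
  -- a ctid literal '(block,offset)' addresses a physical tuple location
  CREATE TABLE orders_salvaged AS
  SELECT * FROM orders
  WHERE ctid >= '(0,1)'::tid AND ctid < '(1000,1)'::tid;

  -- if a range errors out, narrow it down, skip the bad blocks,
  -- and resume after them, for example:
  INSERT INTO orders_salvaged
  SELECT * FROM orders
  WHERE ctid >= '(1001,1)'::tid AND ctid < '(2000,1)'::tid;

The salvaged table can then be dumped normally and loaded into a freshly initialized cluster, as described above.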
And we're live with Robert to give some questions and your answers. Go ahead Robert. Okay, so just looking through the chat here the first question that I see is from Amit who asks, hi I saw that the size of the PG control file is zero on a Postgres server. What might have caused this by the way? No disk was full. The short answer is I don't know. I think it's really hard to troubleshoot a problem like this based on just the amount of information that somebody can provide in a chat. I really have no idea. I mean it could be a system crash after the file was created but before the file was f-synced maybe f-sync was turned off and a crash happened maybe somebody manually zeroed out the file. I mean I get this kind of question a lot and I always kind of wonder what people are expecting me to be able to say as an answer because in a sense it involves looking into the past. How did it get that way? And that's often a pretty difficult question to answer so I really don't have a specific answer for you but you just have to sort of try to investigate what happened on the system before the point when you observed that problem. The next question I see here is is there a way to tell how many rows of a table are corrupted and will be zeroed out if you use zero damaged pages equals on and vacuum re-index the table? I think the short answer is no. There's really no easy way that I know of to tell how many rows of a table are corrupted. I have had very bad experiences with zero damaged pages for precisely the reason that you allude to in this question which is that if you turn zero damaged pages equals on and you start doing stuff you don't know how many pages it's going to zero and it might be a lot and the time or two that I've tried to use this it was a lot and nothing good happened after that. So I don't recommend that as a corruption recovery strategy. I actually kind of think we should consider just taking that option out completely because I think it's just way too dangerous to have something like that that just erases your data in a fairly uncontrolled fashion but opinions may vary on that topic. By not using homegrown scripts for backups I use PG dump within a script but I don't think you mean not to script PG dump. What homegrown scripts do you mean? Yeah, I'm not really talking about PG dump here. That question was from DBL by the way. I don't really mean PG dump. I think people do script PG dump and that's probably mostly okay. Maybe you want to try to make sure that your dump gets f-synced or that you have some check that it's completely written but generally I don't think that's a terrible idea. The place where people get into trouble is when they're backing up the database using the hot backup methods where they're calling PG start backup and then they're copying the data files and then they're calling PG stop backup and I've just seen so many horrible ways of doing that wrong. It's really terrible. I also see another question from Amit here who asks how can I recover PG control? I mean there's no magic answer to that right? If you lose a file, any file, there's no magic wand that you can wave that will get you back the file exactly as it was because there's no undelete functionality available. There are a couple of things you could do. 
You could go to a backup and you could try to get PG control from your backup but of course if things have happened since then the contents from the backup might not be very sane compared to the current state of the system and that's a very serious danger for that file. Also, another thing you can do is you could run PG reset wall but as I said in the main body of the talk, that will create a new PG control file and it has a good chance of making the database start but it in no way means that the database is not corrupted anymore. The database is definitely corrupted and part of the thing that I hope people will take away from this talk is that you can't really undo it once something has become corrupted. There's no way to go backwards from a corrupted state to an uncorrupted state and just have whatever got damaged not be damaged anymore because that would basically require some sort of magic that we don't have. It's very important to make sure that you don't get into this situation in the first place and that if you do get into this situation you have the kinds of things that I described in the talk to help you recover like backups, stand-byes and so on. The next question here is from Azim who asks why is PG Cat Check not in core or contrib and did you consider contributing it? We did consider contributing it at the time. I don't remember exactly why we didn't submit that to Post-CrossQL. I suppose we still could. If I get a bunch of feedback from people who would be likely to comment in a hacker's discussion and they all say, hey, yes, please submit that, I'll certainly talk to EDB management and see if they're up for that and they might well be up for that. I suppose one of the nice things about having it in a separate place is that we don't have to argue about whether it's a good idea or not. Let me see if there's anything else here. What was your bad experience with ZFS or ZFS on Linux with OS native ZFS? I don't want to share specific details of customer experiences because if the customers found out that I did that they might not like it very much. But I think in one case in general terms what I can say is that the system just like nothing was working right on that system and I don't really know why. That case is a little ambiguous. I don't think that was ZFS on Linux. I think it was on Solaris. I think the other case I ran across was also on Solaris. In that second case, but I'm not sure, it might have been Linux. In that second case, it was definitely the case that the corruption was from ZFS because it involved changing certain ZFS options and depending on whether you changed those ZFS options or not, you got corruption or not and the kind of corruption that you got, again, without disclosing any details from the specific customer was something that I personally believe there's literally no way that was a PostgreSQL behavior. It was way insainter than that. That's all the questions I see in the IRC channel. So thanks a lot and I hope you got something useful out of the talk. Thank you.
|
PostgreSQL databases can become corrupted for a variety of reasons, including hardware failure, software failure, and user error. In this talk, I’ll talk about some of my experiences with database corruption. In particular, I’ll mention some of the things which seem to be common causes of database corruption, such as procedural errors taking or restoring backups; some of the ways that database corruption most often manifests when it does occur, such as errors indicating inconsistencies between a table and its indexes or a table and its toast table; and a little bit about techniques that I have seen used to repair databases or recover from corruption, including some experiences with pg_resetxlog. This talk will be based mostly on my experiences working with EnterpriseDB customers; I hope that it will be useful to hackers from the point of view of thinking about possible improvements to PostgreSQL, and to end users from the point of view of helping them avoid, diagnose, and cope with corruption.
|
10.5446/52125 (DOI)
|
Hello everyone and welcome to our talk. Today we'll be talking about building an automatic advisor and performance tuning tools in PostgreSQL. First let us introduce ourselves briefly. Hi, I'm Tatsuro Yamada and I work for NTT Comware as a database engineer. I'm a contributor to PostgreSQL. What I contributed recently is progress reporting features, such as for the CLUSTER and ANALYZE commands. Also, I'm a committer of oracle_fdw and an organizing member of PGConf Asia. My name is Julien Rouhaud and I'm a software developer at VMware. I try to contribute as much as I can to PostgreSQL and its ecosystem in general. I'm the author of the HypoPG extension and the co-author of the PoWA tool and some of the extensions used by this tool. Let's start with the agenda. During this talk, we will be covering the following four sections. First, we'll try to define what a query plan is and give some tips on detecting inefficient plans. Then we'll be explaining the usual reasons causing an inefficient plan to be chosen by the PostgreSQL planner. Thirdly, and this section is the highlight of this talk, we'll be introducing nice tools that we created to get more efficient plans, with a short demo for each of the tools. And finally, we'll conclude this talk with a summary of its content. I think you all know what a query plan is, but just in case, let's talk shortly about SQL and query plans. SQL is a declarative language, which means that you only need to specify what is the wanted result set, but not how to obtain it. The detail of how to compute the result set is the RDBMS's job, done in the query optimizer or query planner, which will choose the best methods to do so. For example, if a user wants to retrieve data from a table, the query does not need to specify a scan method such as index scan or sequential scan. The set of planned steps to retrieve the wanted data is called a query plan. Some people refer to it as a query execution plan, execution plan, or plan. The query plan is represented by a tree structure and is composed of multiple plan nodes, such as scans, joins, and aggregations. It is selected among various other possible plans based on its cost, calculated during plan generation. So, is the query plan chosen by the planner always the best one? Unfortunately, no. For a variety of reasons, an inefficient query plan can sometimes be chosen. It can be very problematic, as it can result in a very long query execution time. If you want to check a query plan, you can use the EXPLAIN command. Let's look at an example of an inefficient plan. This is a typical example of an inefficient query plan. You can see here a join being performed using a nested loop; the inner part of the loop, the Materialize node over the sequential scan, will be executed as many times as there are rows in the outer part of the loop. You can see that there are many sequential scans in the inner nodes of the join, the sequential scan over t1. In other words, the inner part is executed 25,000 times. Here, the dataset is small enough so PostgreSQL caches the dataset of t1 to avoid doing 25,000 sequential scans, but it's still a very inefficient way to perform the join. A good solution here to improve the execution time would be to create an index. 
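As a rough sketch of what this looks like in practice (the table definitions, column names, and row counts here are invented for illustration):

  EXPLAIN (ANALYZE, BUFFERS)
  SELECT *
  FROM t2
  JOIN t1 ON t1.id = t2.t1_id;   -- shows a Nested Loop repeatedly scanning t1

  -- an index on the join column lets the planner pick an index-based plan instead
  CREATE INDEX ON t1 (id);
  ANALYZE t1;

  EXPLAIN (ANALYZE, BUFFERS)
  SELECT *
  FROM t2
  JOIN t1 ON t1.id = t2.t1_id;   -- the inner side can now be an Index Scan on t1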
We just saw an example where the inefficiency was caused by the lack of an index, but what else could bring an inefficient query plan? We will talk about that in the next slide. Now let's focus a little bit on the reasons why inefficient plans can be chosen. There are many reasons why it can happen, but we will focus here on the three major reasons why an inefficient plan can be selected. The first one is an indexing issue, the second one is a statistics issue, and the last one is due to the planner specification. Let's take a look at each one. Index deficiency often occurs when you forget to create an index to suit some of your queries. We showed you this example earlier. Then there is the problem of an index that actually exists, but for some reason cannot be used. This problem usually happens because of how you wrote some of your queries. For instance, if you use a function on the column specified in the WHERE or JOIN clause, then a specific functional index is required for an efficient execution. Another case is when an index exists, but for some reason it isn't used. This can happen, for instance, if the leading columns of a compound index aren't used in the predicate of your query, or if you use different data types that require you to perform a cast, preventing your index from being used. The usual solution is to create a dedicated index for each query, but we do not recommend doing that blindly. Indeed, each index you add also adds overhead that increases the write time, so it's better to minimize the number of indexes created. You can do that either by creating combined indexes to optimize multiple queries at the same time, or alternatively, when possible, by rewriting your queries so they can use the existing indexes. However, as you can easily imagine, if you have thousands of queries it will be a very difficult task. Let me first briefly explain what the statistics are. Statistics are information computed using a sample of each table's data on a regular basis. That information is used when generating a query plan to compute each node's cardinality and selectivity and come up with an execution cost. The statistics include data such as the number of distinct values, the most common values and their frequency, and so on. As you can imagine, having incorrect statistics will lead to incorrect cost calculation. The following three cases are the most famous examples of inefficient plans being selected due to inaccurate statistics. The first one is autovacuum tuning. If there is a problem with the autovacuum settings, the statistics will be computed less frequently, resulting in a larger difference between the actual data and the statistics. This usually happens when tables are huge or have an increasing number of rows. In such cases, the default thresholds aren't aggressive enough. The solution depends on the workload, but the settings should be chosen so that autovacuum runs frequently enough. This can be tuned using autovacuum_analyze_threshold and autovacuum_analyze_scale_factor. This can be done globally, but we strongly advise changing those values on a per-table basis using an ALTER TABLE command, as we show in the example below. A temporary table is a quite different kind of table as far as statistics are concerned. Indeed, those tables can only be seen by the owning connection, meaning that autovacuum will never be able to compute statistics for them automatically. As a consequence, by default no statistics will be available, so the default number of rows will be used. This may lead to quite incorrect plan calculation. The usual solution is simply to run a manual ANALYZE on the tables after populating the data. 
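A short sketch of both fixes (the table names and threshold values are made up; tune them to your own workload):

  -- make autovacuum analyze a large, fast-growing table more often
  ALTER TABLE big_events SET (
      autovacuum_analyze_scale_factor = 0.01,  -- re-analyze after ~1% of rows changed
      autovacuum_analyze_threshold    = 1000
  );

  -- temporary tables are never analyzed by autovacuum, so do it by hand
  CREATE TEMPORARY TABLE tmp_orders AS
  SELECT * FROM orders WHERE status = 'open';
  ANALYZE tmp_orders;
  -- ...then run the queries that use tmp_orders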
If you perform a large amount of writes in a transaction and then directly execute a query, the planner will probably use the old statistics. Indeed, as long as the transaction is open, autovacuum won't be able to see the new data. Also, by default, autovacuum will only launch every minute, so even if you committed the transaction, there is no guarantee that autovacuum already woke up and computed fresh statistics by the time you are executing your new queries. The usual solution here is to split the processing in the transaction into the following steps. First write the data, so do the INSERT, the UPDATE, or the DELETE. Then collect the statistics using ANALYZE. And then you can go on with your usual query execution. What I mean here by planner specification are the various heuristics used to compute cardinality and selectivity. The most frequent issue with those heuristics is how ANDed predicates are computed in PostgreSQL. When you use multiple predicates, the planner will assume that those predicates are independent. This means that each predicate's selectivity will be applied on top of the other ones. This usually works well, as most of the time each column is independent. But if your dataset contains correlated columns, for instance a zip code and a city name, as we can see in this example, this can lead to dramatically wrong estimates and possibly dramatically slow query execution. The solution is to take advantage of the extended statistics that were introduced a few years ago, in PostgreSQL 10. They enable cost calculation that considers the correlation between columns of the same table and greatly reduces the selectivity estimation errors in scans and aggregations. Unfortunately, for now those extended statistics cannot be used for join clauses. Therefore, if you hit a planning issue due to correlated columns used in a join condition, the only solution to fix it is to use some extension such as pg_hint_plan. 
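For the zip code and city example above, a sketch of the extended-statistics fix (the table and column names are invented; extended statistics exist since PostgreSQL 10):

  -- declare that the two columns are functionally dependent
  CREATE STATISTICS addr_zip_city_stats (dependencies)
      ON zipcode, city FROM addresses;
  ANALYZE addresses;   -- extended statistics are only populated by ANALYZE

  -- the planner can now use the dependency for predicates such as:
  SELECT count(*) FROM addresses
  WHERE zipcode = '75001' AND city = 'Paris';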
Here is a quick summary of the causes and solutions. There have been various solutions mentioned, but did you know that there are tools available that can help you investigate those problems and implement the solutions? We'll talk about those tools in the next section. In this section, we'll give a demo of two advisor tools we've developed which can help you to get an efficient plan. The first one is pg_qualstats, a tool that can automatically suggest indexes and find correlated columns. I'm the co-author of this tool. The second tool is pg_plan_advsr, a tool that optimizes the query plan for queries having many join conditions. Its author is Tatsuro Yamada. Let's start with pg_qualstats. I will first briefly introduce pg_qualstats, then explain how it can do a global suggestion for indexes, with a short demo of the results, and finally summarize the various ways this extension can help. pg_qualstats is a PostgreSQL extension that keeps track of predicate statistics in an efficient way. The statistics include per-predicate selectivity, selectivity estimation errors, whether it was an index scan or a sequential scan, and so on. All those metrics are also correlated to the unique query identifier if pg_stat_statements is set up. Using those statistics, it also brings a global index advisor to suggest a minimum number of indexes to optimize as many queries as possible. The index suggestion follows a very simple approach. The first step is to retrieve the list of the interesting predicates for index suggestion — for instance, the predicates that filter at least 50% of the input rows and that are executed in a sequential scan. In this example, we retrieve four predicates, two of them being combined predicates. During the second step, we build a complete set of paths. Those include all possible combinations of predicates that may include other predicates or not. From the four predicates we previously retrieved in the first step, we can see here that seven paths are possible. During the third step, we compute a score for each path. The scoring method used is quite simple: we simply give a weight to each predicate in each path corresponding to the number of simple predicates it contains. In this example, we can see that the left path has three predicates: one predicate having a single simple predicate, which is t1.id equals something; another one having two simple predicates, t1.id equals something and t1.ts equals something; and the last one having three simple predicates. Therefore, this path has a score of six. The final step simply consists in choosing the highest-score path, which gives us the first index to create. The index definition is then generated using this path. The correct column order is preserved from the path, as this can be important, for instance, for btree indexes. The first column of the index corresponds to the column in the predicate having the lowest score, and all other columns are added in ascending weight order in the path. Once the index is generated, all optimized predicates are removed from the list, and we just start again from the first step until all predicates are optimized. Obviously, I presented here a simplified version of the approach. In the real implementation, many other parameters are handled, such as predicates that can be optimized using multiple access methods, or predicates that can't be optimized automatically. Here is a simple demonstration of the global index suggestion. It's very naive and uses a very simple table containing three int columns and a text column, without any index. A few rows are inserted, and then a few queries are executed using multiple predicate combinations. The goal here is not to be representative of a real workload, but rather to show the possibilities of this approach. The index suggestion is very easy to perform. You simply need to call the pg_qualstats_index_advisor function. This function has some parameters depending on which predicates you want to consider. Here I use all the default values except for the selectivity, where I also consider all predicates filtering at least 50% of the rows. Here we can see that three indexes are suggested to optimize the queries that were previously run, one of the indexes being a three-column index. Now we can also see that one of the predicates couldn't be automatically optimized. This is a LIKE predicate. Indeed, this operator can't be automatically processed, as the operator class to use for the index depends on the value being passed to the predicate, so it requires a DBA's attention. As pg_qualstats also samples the constants used in the predicates, you can easily find out whether some wildcard character was used or not, and therefore decide which operator class to use for the index. pg_qualstats is a useful extension that can help you extract knowledge about your production workload. It can help DBAs focus on the complex optimization parts thanks to the global index advisor. 
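A sketch of calling the advisor; the exact parameter names and defaults may differ between pg_qualstats versions, so check the extension's documentation (the 50 here mirrors the 50% selectivity used in the demo):

  -- requires the pg_qualstats extension, ideally together with pg_stat_statements
  CREATE EXTENSION IF NOT EXISTS pg_qualstats;

  -- ask for global index suggestions; the result is a JSON document listing the
  -- suggested indexes and the predicates that could not be optimized automatically
  SELECT pg_qualstats_index_advisor(min_filter => 50, min_selectivity => 50);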
But don't forget that this extension also gives a lot of precious information that can be used for other optimizations. For example, as it computes the selectivity estimation error, you can find out if some predicates are correlated or if you have outdated statistics issues. From here, I will talk about the following four points. pg_plan_advsr is a tool that can automatically tune query plans, and it can optimize scan methods, join methods, and join ordering. It has the following advantages. First, the machine performs the tuning rather than a human: tuning is possible even when there is no expert to do the optimization, and for experts, it is possible to see the tuning proposals and gain new awareness. Then, it can find an optimal plan: it can control each plan node and find an optimal plan that may never have been selected by vanilla PostgreSQL. Finally, it keeps all the plan history of the tuning process: it is possible to refer to the plan history in the tuning process and also to reproduce any plan in the history. This tool works well for complex queries with many joins, so I believe it will prove useful in the following use cases. In the system development field, it is useful for plan tuning during performance tests and troubleshooting, especially for analytical workloads. Then, in the research field, it is useful to verify the planner's capabilities: you can check the quality and performance of the selected plan when there is no estimation error. How can this tool achieve automatic tuning? As you know, there are multiple steps in PostgreSQL query execution. I added a feedback loop with pg_plan_advsr to some of those steps, which allows it to do three things: 1. Detect estimation errors. 2. Correct an estimation error by using feedback info on the next query execution. 3. Record all information such as plans, execution times, and optimizer hints as feedback information. If you use this extension with the EXPLAIN ANALYZE command, PostgreSQL can use a PDCA cycle to improve the cost calculation, because it fixes the estimation errors on each subsequent execution of the command. OK, I'll explain how to correct an estimation error on the next slide. For example, let's take a look at the left plan. There is an error in the estimated rows of the first node. If you fix it on the second execution using the feedback information, you can expect the plan to be better, as it is no longer relying on the incorrect estimate. I'll show you a short demonstration from the next page. I will show you a demo that optimizes the query plan using the automatic tuning of pg_plan_advsr and shortens the query execution time. I will use a target query from the Join Order Benchmark. This query, 31c, contains 10 joins and 4 aggregate functions. The size of the dataset is 10GB. The parameters and the environment are as indicated here. The demo has the following operations. I'll skip the detail of the first step. For the second step, I'll execute the SQL file 17 times for auto plan tuning. This is done to search for an optimal plan. In the third step, I'll check the plan history table. You'll be able to see if the execution time will be shortened, and also how estimation errors will be corrected. Finally, re-verification: I'll present the execution of two different versions of the same query. One is the original query, which has the baseline plan. The other one is the hinted query, which has the optimal plan forced by using optimizer hints. Plan tuning will be completed in about 60 seconds for this demo. 
This number indicates the number of executions of the query. As you can see, the execution time is different in each iteration, as the plan changes every time. Finally, the tuning is completed. I'll show the plan history table on the next page. This table contains the ID, query ID, plan ID, the sum of the estimation errors of the joins, and the execution time. The leftmost ID is a sequence. The query ID and plan ID are hash values. You can see that the plan ID, the estimation errors, and the execution time changed on each iteration. And in the fourth execution, on the last row, you can see that the estimation error is fixed and the query execution time is drastically reduced. The tuning was successful in improving performance by a factor of 8. After the tuning process, you can get optimizer hints to reproduce the optimized plan, so you can benefit from this optimal plan in other environments. I add these hints to the target query and will show you the re-verification results on the next page. Let's take a look at these terminals. The left side is the original query with the baseline plan. The right side is the hinted query with the optimized plan. You can see the optimized plan is much faster. Let's take a look at the table. The performance improvement is about 8 times. The results prove the effectiveness of the tool. I think you can easily see the effectiveness of the automatic tuning of pg_plan_advsr. This page shows a summary of pg_plan_advsr. Merits for DBAs: pg_plan_advsr makes it possible to improve query performance by automatically optimizing the query plan; also, the optimized plan can be reproduced in other environments. How it works: it optimizes the plan by using a feedback loop to improve the cost calculation during each query execution. In the demo, we showed a multi-join query whose performance improved 8 times as an example. So far we talked about two tools, pg_qualstats and pg_plan_advsr. They are both advising tools with very different working ranges. However, they can complement each other for better efficiency. Here is a global scheme of how to benefit from both tools. Once you run enough queries to get a significant workload, you can use pg_qualstats to see if any index is missing. Of course, you should manually inspect the suggested indexes before creating them. Then you can use pg_stat_statements to find long queries and use pg_qualstats to filter the queries that have many joins, bad selectivity estimates, or both. Finally, you can try to optimize those queries using pg_plan_advsr. This easy approach can lead to significant performance improvements. In the first two parts, we explained query plans, what inefficient plans are, and the most frequent causes for them, with clues on how to fix the problems. In the last part, we introduced two advising tools, pg_qualstats and pg_plan_advsr, and showed their usage with quick demos. We also briefly introduced how to make them work together. Both tools are freely available on GitHub, so please give them a try. Any feedback and contribution is very welcome. Finally, advising features, adaptive query optimization and estimation error detection have large merits for DBAs, so we want to integrate those features into PostgreSQL itself. Please keep them in mind, stay safe, and use pg_qualstats. Now back for Q&A with Julien and Tatsuro. Please go ahead. Hi, so I'll start with the first question, maybe. So the question was: it was said that pg_plan_advsr is a PoC. What is the status of pg_qualstats?
So pg_qualstats is stable. Its current version is 2.0.2, and it's used in multiple production systems without any known bugs for now. So I would say it's stable. The second question is: the method behind pg_plan_advsr was explained, but how is it intended to be used in practice? Would it automatically optimize in the background, or does a DBA need to manually intervene? So for now, the current status of pg_plan_advsr is to be used manually. You have to detect, like, choose which queries could benefit from automatic optimization and then launch the optimization process manually in an interactive session. But as we explained in the talk, we have some ideas on how to automatically select the interesting queries that could potentially benefit from pg_plan_advsr. And we also have plans to integrate it with a tool called PoWA, where you could use some option to say, oh, try to find this and this query, do it in the background, present me the results as soon as it has finished optimizing them, and show me a summary of what it did and how it looks, like better, the same or not, and a ratio for the optimization. But this is a long-term plan, and for now, you can only use it in an interactive way. Okay. Thank you. I think that's all the questions we have. Thank you for coming back and answering the questions, and thank you for your talk. I appreciate it. Thank you very much, Dan, and thanks everyone for watching. Thank you. Okay. Thank you. Goodbye.
|
PostgreSQL is a mature and robust RDBMS with 30 years of history. Over the years, its query optimizer has been enhanced and usually produces good query plans. However, can it always come up with good query plans? The optimization process has to use some assumptions to produce plans fast enough. Some of those assumptions are relatively easy to check (e.g. statistics are up-to-date), some harder (e.g. correct indexes are created), and some nearly impossible (e.g. making sure that the statistics samples are representative enough even for skewed data distributions). Given those various caveats, DBAs can't always easily realize that they are missing a chance to get a meaningful performance improvement. To help DBAs get a truly good query plan, we'll present below some tools that can help to fix some of those problems by providing a missing index adviser, looking for extended statistics to create, and row estimation error correction information to get appropriate join orders and join methods automatically. - pg_qualstats: provides new index and extended statistics suggestions by gathering predicate statistics on the production workload. - pg_plan_advsr: provides good alternative query plans automatically by analyzing information from iterative query executions to fix row estimation errors. In this talk, we will explain how those tools work under the hood, see what can be done, and how they can work together. Also, we will mention what other tools exist for related problems. Therefore, it will be useful for DBAs who are interested in improving query performance or want to check whether the current set of indexes and statistics is adequate.
|
10.5446/52128 (DOI)
|
Hi, my name is Alexander Korotkov and today I will talk about sharding. The plan is the following. At first I will discuss database scaling options in general, then I will describe existing sharding solutions, and then built-in sharding using core features such as partitioning and foreign data wrappers. Then I will talk about the Shardman extension developed by the Postgres Pro company. Then I will talk about sharding challenges which require core modifications, such as distributed visibility and distributed query execution. And then I will show you the plan for pushing these features into the PostgreSQL core. So what is scaling at all? What is scaling in general? In general, database scaling is the ability to increase database performance by using more hardware resources. So basically you have some hardware resources and you have some performance in your application, and you want your database to behave faster, to have higher throughput, and then you just improve your hardware. This hardware can include various resources such as storage, buying more disks or a better SAN, buying more memory, more CPUs, buying more servers and using multiple machines, using a better and faster network, and so on. But due to concurrency, scaling is a challenge, especially for transactional systems, because in a transactional system the parts cannot act independently: every transaction could conflict with another transaction and the database should resolve all the conflicts. This is why scaling is a big challenge. So the first option for scaling is so-called vertical scaling, which is basically scaling within a single server. One can improve performance by upgrading the server or replacing it with a better one, so that eventually your server has more or faster storage, more or faster memory, more or faster CPUs and so on. But vertical scaling is also a challenge, because the database server code can have some bottlenecks, and then just adding more resources doesn't lead to an improvement in performance, because there could be contention for some locks and so on. And Postgres is constantly improving for better vertical scalability. But even if your database management system were perfect for vertical scalability, which is impossible, but imagine that your database management system scaled vertically perfectly, vertical scalability is still limited, because there is a maximum number of CPUs which you can have in a single server, a maximum amount of memory and so on. And your database can grow indefinitely, and at some point vertical scaling won't fit your needs. Then you should think about so-called horizontal scaling. Horizontal scaling could also be done with replication, physical or logical, but it's also limited because each server has to store a full copy of the data. This is why we are going to skip the replication option, because I believe everybody is aware of replication, and let's immediately start with sharding. The idea of sharding is to distribute data among multiple servers using a so-called sharding key. The idea is that if you have to serve, for instance, many users, you can take the user ID and use it as the sharding key. That means that a particular user ID is mapped to a given server. So you know, for instance, the user ID which you need to get, and you can calculate which server you should go to.
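As a toy illustration of that mapping (the user ID and the shard count are made up), a simple modulo on the sharding key is enough to decide which of, say, four shards a row belongs to:

    -- route user 42424242 to one of 4 shards based on its sharding key
    SELECT 42424242 % 4 AS shard_no;   -- returns 2, so this user's data lives on shard 2
    -- a hash-based variant, e.g. abs(hashtext('42424242')) % 4, spreads skewed key ranges more evenly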
And sharding is the only solution which can reduce the IO on a single server, because the data is split across multiple servers. But sharding's benefits are only possible when you have a shardable workload. So, going back to my example with users: if you shard by user ID, this sharding is good if your typical query only touches a single user. But if your typical query touches all the users, then this sharding scheme is not good for your workload. So you should choose the sharding key with respect to your workload. Yes. And the sharding key should spread the data as evenly as possible. Changing the sharding layout is a challenge, because if you, for example, want to change the shard key from, for instance, user ID to, I don't know, imagine, order time, it's a complete reorganization of your sharding and this can cause downtime. If you just want to add a new server and move part of the users there, then it could be implemented with minimal possible downtime, but that's also a challenge. Yes. And another thing is reliability. For instance, each server has some probability of failure. If you have a single server, this is one probability; when you have many servers, the probability that some server will go down is much higher. This is why you might need additional standby servers to keep your reliability the same as with a single server without sharding. There are some existing sharding solutions. You can do sharding in your application. The basic idea is that the application should know the sharding scheme, it should know the sharding key, it should know how the sharding key is mapped to the servers. Thanks to that, the application can immediately go to the server where the required data is located. It's probably the easiest scheme from the database management system's side, because it requires no work from the database management system, but it requires application modification. Another thing is PL/Proxy. PL/Proxy is an extension which can route stored procedure calls to particular servers. But PL/Proxy also requires a lot of manual work. There is a document on the internet from Instagram about how Instagram implements sharding on top of PostgreSQL using PL/Proxy. There are Postgres forks, Postgres-XC and Postgres-XL. The strong point of Postgres-XC and Postgres-XL is that they implement distributed query planning and execution in a quite general way. The weak point is that Postgres-XC and XL are heavily modified Postgres forks. There is the Citus extension for sharding, which is a Postgres extension, but it is also a heavy extension: basically it implements its own distributed planner and executor inside the extension. And there are some other options not based on Postgres, for instance Hadoop and many other different sharding options. But now in Postgres we actually have built-in sharding using core features such as partitioning and foreign data wrappers. In Postgres you have foreign data wrappers. Using foreign data wrappers you can say that this table is located on some foreign server, and every time you access that table you actually access a different server. And in Postgres you have partitioning. A partitioned table looks like a single table, but its data is stored in multiple tables. There is the partitioning key, which actually works like a sharding key, and the partitioned table routes your accesses to the appropriate partition. And the trick is that you can define your partitions as foreign tables, and this effectively gives you some sharding solution.
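A minimal sketch of that trick, assuming two data nodes reachable as node1 and node2 that already hold same-shaped tables (the host, database, user and table names here are placeholders):

    CREATE EXTENSION postgres_fdw;

    CREATE SERVER shard1 FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'node1', dbname 'app');
    CREATE SERVER shard2 FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'node2', dbname 'app');
    CREATE USER MAPPING FOR CURRENT_USER SERVER shard1 OPTIONS (user 'app');
    CREATE USER MAPPING FOR CURRENT_USER SERVER shard2 OPTIONS (user 'app');

    -- the partitioned table on the coordinator; the partitioning key acts as the sharding key
    CREATE TABLE users (user_id bigint, payload text) PARTITION BY HASH (user_id);

    -- each partition is a foreign table backed by a table on a data node
    CREATE FOREIGN TABLE users_0 PARTITION OF users
        FOR VALUES WITH (MODULUS 2, REMAINDER 0) SERVER shard1 OPTIONS (table_name 'users_0');
    CREATE FOREIGN TABLE users_1 PARTITION OF users
        FOR VALUES WITH (MODULUS 2, REMAINDER 1) SERVER shard2 OPTIONS (table_name 'users_1');

    -- a query on the sharding key is routed to a single foreign partition
    SELECT * FROM users WHERE user_id = 42;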
So you can create one or many coordinators where you define the partitioned tables and declare each partition as a foreign table, and you have data nodes, Postgres instances where you store the partitions themselves. And built-in sharding already has useful features such as join pushdown. Join pushdown means that if a join can be calculated locally on a shard, it is pushed down to that shard and that shard does the work. There are also sort pushdown and aggregate pushdown. This is very cool, but there are also missing features, such as parallel shard access. Parallel shard access means that the shards should execute their parts of a query in parallel. Right now, if your query has to access multiple shards, it first executes its part of the query on shard number one, waits for the query to complete, fetches the results from it, then accesses shard number two, then shard number three and so on. Everything goes sequentially, and sequential access doesn't actually affect your throughput, but it is bad for your latency. If you had parallel shard access, then you could execute, for example, huge OLAP queries very fast. Another missing feature is replicated tables. There might be some tables, so-called dictionaries, which are small but frequently accessed, and it's reasonable to have them replicated on each of the shards. Then each shard can perform the join with a dictionary table locally. And regarding built-in sharding, the problem with partitioning and FDW is that it requires a lot of manual work: you have to manually define your partitioned table and foreign tables, then on every shard you have to create the tables and manage all of this. This is why we created the Shardman extension. The Shardman extension automates the management of sharding, so you can just call a few SQL functions and these SQL functions will create your sharding cluster. Shardman provides redundancy and automatic failover using streaming replication. It also provides distributed transactions using 2PC, distributed visibility, and distributed query planning and execution. So this is the scheme of how Shardman implements redundancy. For instance, you have 4 servers and 4 shards, and then each Postgres instance is replicated to 2 neighbors. Of course, you can configure the number of replicas, the number of servers, the number of shards. And in this example each server has 3 Postgres instances working on it. Yes, and we are using streaming replication because streaming replication is already mature and rock solid, but we want to eventually move to logical replication, because with streaming replication you have to reserve some resources on each instance, resources such as shared buffers, processes and so on. And we would like to eventually move to logical replication, but there is a set of lacking features in logical replication which we want to have in order to switch to it. First, it's a logical pg_rewind, which can help you to return your old primary server as a replica in the case of failover. Then, logical replication apply is single-threaded yet. The issue is that for faster replication we need parallel decoding and parallel apply. Another thing is that we use two-phase commit and we need logical decoding of 2PC, so 2PC transactions can be replicated using logical replication. Another thing is that logical replication sends a transaction to the replica only once it's completed.
But if you have large transactions, that causes very high latency, and in order not to have such high latency we need online streaming of large transactions, so that a transaction can be streamed before its completion. Another thing is high availability. For built-in high availability we probably should implement the Raft protocol on top of logical replication, and for sure we need full DDL support in logical replication. We have an implementation of distributed visibility in Shardman. It's based on the Clock-SI protocol; you can read the Clock-SI paper. Basically, Clock-SI is the favorite approach for distributed visibility in modern sharding implementations. It needs PostgreSQL core patching; ideally it needs to be implemented on top of the CSN patch. It provides good scalability, because only the nodes which are involved in a transaction are involved in snapshot management. Local transactions can run locally, and no dedicated service is required for managing distributed snapshots. The downside of this approach is that a short lock for some of the readers during distributed CSN coordination might happen. So in some rare situations readers may have to wait, but this doesn't seem to be a problem in practice. So this is how distributed visibility works. This is basically 2PC with a special pre-commit stage. The pre-commit stage makes the transaction in-doubt and calculates the distributed CSN for this transaction. Then, at the commit stage, this transaction is marked with the distributed CSN, the same CSN on all the nodes. And it blocks readers, but only if they access data modified by a currently in-doubt transaction, and only if they acquired their snapshot after this in-doubt transaction entered the in-doubt stage. So it's a really, really rare case when readers are blocked. We already had some attempts to push distributed visibility into the Postgres core, but we did it kind of the wrong way, because we made a patch which implements an API overriding the low-level visibility-related functions, and we have the pg_tsdtm extension, which implements CSNs internally and implements Clock-SI on top of that. The right way to implement distributed visibility in the Postgres core would be CSN snapshots, and then we would have a C API for managing transaction CSNs and snapshot CSNs, and using that C API we can have a Clock-SI implementation, maybe done as an extension or maybe in core. The next question is how Shardman plans and executes distributed OLAP queries. The idea is that we build a distributed plan based on a local plan. Imagine you have some local plan where you have two partitioned tables and you have to join them. The plan is to scan the whole of one relation, the whole of the other relation, and then join them, so it's trivial. Let's make it a distributed plan. To make it a distributed plan we add special nodes such as shuffle nodes and gather nodes. The idea of shuffle nodes is to take the result of the nodes below and send it to the appropriate server to execute the nodes on top. The gather node gathers results from multiple servers and puts them together. How is this distributed plan executed? We have two Postgres instances: instance one, which has partitions A1 and B1, and the second instance, which has partitions A2 and B2. Each Postgres instance has a copy of the distributed plan, but it only executes the plan for the tables which it has present. So instance one scans A1 and B1 and instance two scans A2 and B2. And then the shuffle nodes redistribute the results by the join key.
It calculates a hash on the join key and redistributes tuples to the appropriate server among these two instances. Then the partial joins are performed locally, and then the gather node puts the results together and finally you get the result of your table join. So distributed query execution has the following steps. First, prepare the distributed query plan at the coordinator node, then make a portable serialization of it and collect the list of foreign servers. At the beginning of query execution, pass the plan to each foreign server using the foreign data wrapper connection. After that, each foreign server localizes the plan by removing tables which don't exist on that server, and finally all the plans are executed on all the servers and the result is gathered. This work is done as an extension, and it's implemented using planner_hook, set_rel_pathlist_hook and set_join_pathlist_hook. It implements a custom node, the so-called exchange plan node, which implements shuffle and gather, and also portable plan serialization, which requires core patching. Yes, so set_rel_pathlist_hook adds a distributed node on top of the relation paths, and set_join_pathlist_hook also adds distributed nodes depending on the type of join: for a nested loop join we basically have to broadcast, for a hash join we need shuffle and gather. What does the exchange plan node do? It computes the destination instance for each incoming tuple and transfers the tuple to the corresponding exchange node at that instance. If the destination is this node itself, it passes the tuple up the plan tree. The distributed plan has an exchange node in gather mode on top of the plan, which collects all the results at the coordinator. Yes, and the exchange plan node has a few modes, including shuffle, which transfers tuples according to the distribution function; gather, which gathers all the tuples on one node; and broadcast, which transfers each tuple to each node. Portable plan serialization and deserialization is implemented as a patch to the nodeToString and stringToNode code, and it replaces OIDs with object names. Each instance can have its own OIDs, so we cannot expect all the OIDs to be equal across your distributed cluster, and this is why we have to replace OIDs with object names; deserialization replaces object names back to OIDs. And we implement our own function, pg_exec_plan, which deserializes, localizes and launches execution of the plan. Another Postgres code modification changes the partitioning code of the planner so that the partitioning of a join rel can be changed according to the path, so it can be changed in the hook where the extension is working. Maybe we can transfer the partitioning-related fields from RelOptInfo to the Path structure, which would improve the extensibility of the planner. So what is the current status of distributed planning and execution? It's a work in progress, and it needs a patch to the Postgres core. Currently hash join, nested loop join and hash aggregates are implemented, while merge join and group aggregate are on our to-do list. We observe about a five-times improvement in comparison to plain execution over foreign data wrappers on a four-node cluster. You can go to the Postgres Pro GitHub, to the Shardman project, to try it. So the conclusion is the following: we need the following features to be pushed into the Postgres core in order to make Shardman a pure extension.
First, this is the CSN patch, then a distributed visibility C API based on the CSN patch, portable serialization and deserialization for planner nodes, planner improvements for partitioning, and a lot of logical replication improvements to switch Shardman from streaming replication to logical replication. So that's all I'd like to tell you today about sharding. This is my first experience with an online, pre-recorded talk. I don't know how it will go, but I'd like to hear your questions and answer them. Thank you very much. Okay, we're now live. Please go ahead and answer your questions. Yes. Hello. I see the activity on the IRC channel. And the first question is: do you have a recipe for those tarts? I don't really understand what this question relates to. It probably relates to the set of problems in logical replication. And yes, we have some patches for logical replication to address part of the problems, not all the problems of logical replication which prevent logical replication from being used in Shardman. But we have patches for some of them and we will try to push those patches to Postgres 14. Were there any other questions? No. Not really, there are comments. Not questions. If you want to respond to them, you're welcome to. Peter comments: sounds good, keep sending patches. Thank you. Okay. I got the meaning of the first question. It was not technical, it was about the picture on the last slide where we have slices of cake. And the question was whether I have a recipe for them. Unfortunately not. This is just a picture from the internet. I don't know how to cook it. Sorry. Any other questions? So the IRC channel is the only channel where we got questions. Yes, it was just IRC, yes, just IRC. All right. If that's all the questions. Yes, so. Okay. Thank you. Appreciate your help. Thank you. Goodbye. Bye bye. Bye bye. Bye.
|
Sharding is one of the most wanted PostgreSQL features. Vertical scalability is limited by hardware. Replication doesn't provide scalability in read-write performance and database size. Only sharding promises this. However, such a brilliant feature is hard to implement. And here is the point where different community parties should work together. The PostgreSQL community has done great work on foreign data wrappers and continues improving them. Postgres Pro has experience in distributed transactions, snapshots and query planning/execution. In this talk we will cover existing advances in sharding and present a roadmap of transforming them into a comprehensive sharding solution suitable for the main use cases.
|
10.5446/52132 (DOI)
|
Hi everyone, this is the talk "Indexes: Do It Yourself" and my name is Andrey Borodin. I am working for Yandex Cloud and I am really glad you found the time and are watching this video. At Yandex we have a lot of Postgres, and many Yandex services like Yandex Mail, Yandex Taxi and others live in Yandex Cloud, in managed PostgreSQL clusters, and we have a total of a little more than 2 petabytes of Postgres which generate about 3 million requests per second. A few words about me: I have been contributing to Postgres since 2016. I am working on Postgres clusters at Yandex Cloud, on the disaster recovery system WAL-G, on the connection pooler Odyssey, and actually, personally, I am most interested in indexing; that is how I got into Postgres and that is what excites me the most. Today we are going to talk about what an access method is, why you would want to create one, how to do this, and I will try to give you some ideas for hacking. A few years ago Alexander Korotkov made a beautiful presentation about Postgres extensibility in the area of creating access methods; it was a presentation aimed at core developers on how to make Postgres extensibility better. I have tried to take everything from that presentation, make it very, very, very simple, and describe how to start with your own access method, how to try to implement your ideas. First we should talk about what an access method is. Let's start with data types: what is a data type? A data type is basically two functions, an in function and an out function, which describe how to interpret data into its stored format and from the stored format back into a user value. An access method, in turn, is an idea of how to search within data. For example, B-tree is the idea of searching within sorted objects. GiST is the idea of searching from a generic object description down to a specific object. GIN is the idea of searching a big object by a small part of the object. And hash is the idea of searching for an object within a small subset defined by the same hash function. So an access method is the idea of how to search, an idea implemented in C functions. If you want to combine an access method with a data type, you have to define an operator class. An operator class is a set of functions which describe how the idea applies to an on-disk data type. Then, when you have a table, you can define a stable expression over the table and, combining it with the operator class, create an index. Then, when you have a query in your database, the optimizer asks the index, which in turn invokes access method functions, how costly it would be to execute such a search. The access method and index will answer that it will cost something like so many milliseconds, and the optimizer, the planner, will decide whether to call this access method or that access method or do a sequential scan, just scanning through all the data. When can an index be used? When you execute a query with a WHERE clause of the form "expression operator value". The expression may be a column or something stable, some functions called over column values. The WHERE clause can also be implicit, in a join for example, or in the condition of an aggregation. Also, some indexes can return sorted data; for example, B-tree can return sorted data. And some indexes can return data sorted by an operator against some value: this is called, for example, nearest neighbor search. This is a schema from Alexander's presentation. The access method creates an index which helps us to find tuple identifiers inside the heap via an index scan. That's it.
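In SQL terms, the wiring between access method, operator class and index looks roughly like this; my_am, my_am_handler, some_table and the strategy/support numbers are hypothetical placeholders for whatever a particular access method defines:

    -- an access method is registered by pointing at its handler function
    CREATE ACCESS METHOD my_am TYPE INDEX HANDLER my_am_handler;

    -- an operator class ties the search idea to a concrete data type
    CREATE OPERATOR CLASS int4_my_ops DEFAULT FOR TYPE int4 USING my_am AS
        OPERATOR 1 =,
        FUNCTION 1 hashint4(int4);

    -- and an index combines a table expression with that operator class
    CREATE INDEX ON some_table USING my_am (id);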
So an access method is the set of functions necessary to create an index, while an index is something that provides us tuple identifiers according to some search criteria. Why would you want to create your own access method? Most of the time this is for a research project. Most of the currently existing access methods were created in research projects. Maybe they build on some ancient ideas, but the efforts to systematize them and bring things together were done in scientific, academic projects. And when you are creating your own idea of how to search within data, you should remember that when you implement your search idea as a Postgres extension, it's rock hard, it's real, it's working. But it's a little bit harder to compete with those who create a simpler proof-of-concept implementation, because Postgres makes you think about all the small important details. And it's harder to produce good numbers in benchmarks, because basically you can't lie anywhere. If you created this index and you can give someone the extension, it means it works. Anyone with data in Postgres can reproduce your numbers quite easily. Also, you may want to create a lossy index. What would that mean? An index in the SQL standard sense means that the result of a query does not depend on the existence of any indexes: when you run a query with an index or without an index, you get the same result. So indexes are exact, always returning the same set of tuple identifiers. But sometimes you want an approximation of a search, or you want to trade off some correctness for performance. You may want to create an index which executes a search like "find me something within 10 milliseconds, then give up". You could build this at a higher level of abstraction, but here it can also have its own meaning, too. And also, one of the important points here is the motivation to learn. This is how I got to hacking on Postgres. I had a few ideas which I wanted to implement in GiST, generalized search trees. I started doing this and then I wanted to show it to others who use GiST. It really motivates you to learn a lot. Finally, I'm working on Postgres and not only on indexes. Most of my day job is not about indexes, but still, indexes and research on indexes is what drove me through learning the code base of Postgres, which is well documented, but complex, so you actually need motivation to learn it well. Sometimes you want to make something more specific. Most of our indexes are very abstract and can be used in many, many, many different ways. But there are some niche needs, and if you want something specific for your data type, for your search, for your workload, you may want to create an index like one in core, but as an extension. And finally, you may want to try something from the commitfest. You see some nice patch, some nice data type, some nice search, some nice-looking enhancement on the commitfest and want to try it. And how safe is it? Basically, if you apply a patch which was not reviewed to your production server, you really like risk: something may go wrong. What can go wrong if you experiment with an index as an extension? The most expected failure is when something crashes during the execution of a search. It's not a big deal: just drop your extension and your searches will go through regular indexes. Or your extension fails during write-ahead log replay; then your standbys will stop applying WAL and they will accumulate lag behind the primary instance.
And also, your postmaster may go down because, for example, your access method failed during a critical section, and the whole database shuts down for a moment, but in a moment you will be reconnected to Postgres. Not a big deal: just drop that extension if it has such an error. And finally, your extension may stop vacuuming: it fails during vacuum, it prevents vacuum from running, and eventually, if you don't have good monitoring of potential wraparound, you can have a big downtime, but it's very unlikely. So indexes as extensions will not corrupt your data and probably will not delay your production. As a trade-off, you can try something new, and you are absolutely safe doing so in a development environment. What do you have to do to implement an index as an extension? You have to create an empty extension and define an index handler, a function which returns pointers to the other functions which you need to do searches. You have to implement data insertion and data retrieval, like index scan. Currently, you also need to implement vacuum. And if you want to see your extension on replicas, you also have to have write-ahead logging. All of this sounds like a 20-minute adventure. Maybe much more, but let's try to do it in code. So let's fork GiST out of core into our own extension. Here I have the Postgres sources at a pretty recent commit. Let's go to contrib and create my_gist. Let's go into my_gist; we need to copy the Makefile out of the bloom index, plus the control file and the SQL part. We also need the sources of GiST itself; they are in src/backend/access/gist. We need pretty much everything written in C. We also need the include files, src/include/access, everything about GiST. Let's see what we have. Here is our new extension. We need to rename everything here. And that's it, just replace everything. This part is not necessary. Here we have the export of the bloom handler function; we need to rename it to my_gist_handler, and in the Makefile we need to list every C file of the actual GiST. Let's try to compile what we have. And make install. One more thing that we need: PG_MODULE_MAGIC to declare a new module. And it's no longer the GiST handler, it's my_gist_handler, and we also need to mark it as such. Let's take the main boilerplate from bloom. Where is the handler? This is it. And PG_MODULE_MAGIC. Well, make install. Create a new database. Start it. Get connected. And create extension my_gist. It works. But we also need support for some operator class. I think the easiest way is to create support for cube. We need to install it too. Make install. Boom. It's here. Let's go to cube, and cube's SQL file. We see its operator class for cube for GiST — but now it becomes an operator class for my_gist. Yep, and that's it, not very hard to do. Let's embed it into our extension: my_gist, make install. Now drop extension my_gist and create extension my_gist again. Create table t as select cube(random()) from generate_series over 1000 rows; 1000 elements are inserted into the table. Create an index on t using my_gist; let's call it my_cube. Oh, our new GiST works. Select everything from t where the cube is contained in some small cube: it works, and we check that it selects our new index — it uses our my_gist for cube. That's it. Cool. It works. So, while this example works fine, there is one important thing missing: write-ahead logging. You need it if you want your index to survive a crash or an accidental stop of the postmaster, and if you want to observe your index on a standby, you have to implement write-ahead logging.
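Before getting to write-ahead logging, here is roughly what the live demo just shown boils down to in SQL. my_gist is the hypothetical forked extension from the demo, and the cube operator class is assumed to have been rewired to it as described above:

    CREATE EXTENSION cube;      -- the data type and operators we borrow
    CREATE EXTENSION my_gist;   -- our forked copy of GiST

    CREATE TABLE t AS SELECT cube(random()) AS c FROM generate_series(1, 1000);
    CREATE INDEX my_cube ON t USING my_gist (c);

    -- containment search; EXPLAIN should show an index scan on my_cube
    -- (on a tiny table you may need SET enable_seqscan = off to see it)
    EXPLAIN SELECT * FROM t WHERE c <@ cube(0, 0.1);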
If you just copy-paste the implementation of write-ahead logging from core, your replay functions will be calling regular GiST functions, and they will end up constructing something that may not be compatible with the things that you changed inside your index-as-extension. So you have to use generic write-ahead logging. This is quite a simple change: anywhere you are going to modify your data on a page, instead of just taking the page from the buffer, you register the buffer with generic WAL. What does it mean? You tell generic WAL that you are going to change this buffer: please find what has changed and write this to the xlog. Why is it done this way? An extension cannot register its own resource manager. A resource manager is a set of write-ahead log redo functions, and the creation of an extension is itself write-ahead logged. So you have to use generic WAL, which will reconstruct the data of your index even if the binaries of your index are not present on the standby replica. You don't have to call specific log functions like it is done currently in the in-core access methods. For example, here is a diff for updating a page in GiST, where we had a whole function to log a change — not a split, a deletion of items on a page — and we just exchanged it for a generic xlog call with a full page image. A full image is costly; you'd better register your buffer before doing the change, and then, when you finish the generic xlog record, generic WAL will write only what actually changed there. Also, one important caveat is your contract with VACUUM. For prototypes you can avoid implementing VACUUM at all, but you must not return tuple identifiers which can't be found in the heap anymore. The minimal way of honoring this contract is to just remove from the index the tuple IDs that no longer exist in the heap. There is a minimal example of an index as an extension within the Postgres source repository. It's called the dummy index (dummy_index_am). It is an extension which looks like an index: it can't actually run any searches, but it has all the infrastructure to create an index, drop an index, report some properties of an index, etc. A minimal functional example is also within the Postgres source tree: you can find contrib/bloom, which is a bloom index. It is not very practical; it was designed to demonstrate an extension which has an actual search as an index extension. A more practical example is the RUM access method. GIN is searching an object by its parts, and RUM is a GIN designed more specifically for text search: RUM executes much faster sorting according to relevance, so it does more efficient ranking than sorting the results of a GIN index scan. Also, one important caveat is how you express your search criteria. Postgres is not very good at passing a lot of different criteria into a single index scan. Here we see that a scan through a regular B-tree with the criteria "key equal to one value and key equal to another value" goes through two different bitmap index scans. To mitigate this, we usually use a trick which can be called a query data type: you create a specific data type which describes what exactly you are going to search for. We see that this type is, for example, tsquery, which is not atomic at all; it contains a lot of information about what you are looking for. So, if you want criteria specific to your data, combining a lot of different ideas of searching executed through a single index scan, you can create a specific data type which describes what you are looking for in your access method.
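Full-text search is the classic instance of this trick: a tsquery packs an arbitrarily complex criterion into one value, and a single GIN scan evaluates it. A minimal sketch, with made-up table and column names:

    CREATE TABLE docs (id serial PRIMARY KEY, body text);
    CREATE INDEX ON docs USING gin (to_tsvector('english', body));

    -- one index scan evaluates the whole boolean search expression
    SELECT id
      FROM docs
     WHERE to_tsvector('english', body) @@ to_tsquery('english', 'index & (gist | gin) & !bloom');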
Let's proceed to ideas. So what can you do with the ability to create an index as an extension? I started by making a demo for my patches on the commitfest. When I'm doing something new for core GiST, I also implement the same thing in Advanced Generalized Search, AGS. AGS is basically a fork of core GiST where I try to apply most of the patches which I want to see in core. This works for showing some benchmarks, for trying things on different architectures, and I just like the idea of having my index on my GitHub. A few years ago we had the learned index paper from Google, where they propose machine learning for search. Basically, they propose machine-learning a dictionary from a key to a position in a sorted array. Well, a sorted array is already an access method: sorted keys. And it works no less efficiently than a B-tree. But a B-tree provides a lot more. OLTP indexes also provide concurrency: you can have a lot of concurrent inserters in a B-tree. A B-tree provides predicate locking, and a B-tree provides integration with VACUUM, and write-ahead logging, and many, many other things. All these things were not described in the learned indexes paper, so I don't think it's possible to adapt the work on learned indexes to an index-as-extension as of now. But the idea that you can make binary search faster is worth pursuing, and it has some implications, which we will talk about right now. Most of the core indexes are generalized indexes. It is the same index for chars, for 32-bit integers, for varchars. But what if you have just a primary key with natural numbers? When you search within a B-tree, you always execute a binary search over the keys on each page of the B-tree. But if you have 200 tuples, and you know that the first tuple is 0 and the last tuple is 199, then it's not a big deal to find, for example, tuple 88, because it will be in the 88th slot. So you can employ something like Newton's method: you can interpolate the position of a tuple within a page, and it will save you a lot of cache lines touched in shared buffers, if you build an index specifically for increasing integers. Also, most indexes contain a lot of abstraction layers which could be avoided in specialized indexes. For example, here you see that the B-tree compare routine calls the operator class function through the scan key's comparison function. And it's actually not a zero-cost abstraction: it costs you pushing bytes onto the call stack. What if, for example, PostGIS had its own index? It could have, for example, geometries in GiST leaf pages, and it could avoid rechecking tuples after they are fetched from the index. Also, it could have the sorted build, recently implemented by me — though it was not my idea, just my implementation. But what would it cost them? It would cost more to maintain, because GiST in core keeps advancing, and they would have to backport the advances of GiST from core into their own index extension. Also, we could make binary search better even with the same level of abstraction. For example, here we see the code for binary search in the B-tree: it does a comparison against the middle key and then narrows the range of the search. We can invoke __builtin_prefetch for both items which are the potential candidates for the next middle element of the search, so that while we are doing the comparison, the memory controller is already busy doing the cache prefetch. This saves a few CPU cycles; I will also add a reference to the hackers discussion of this. Also, you don't have to order tuples on a B-tree page in increasing order.
You'd better place them so that tuples which are accessed together sit together. For example, if you are going to find tuple number 1, you will go through tuple 8, tuple 4, tuple 2, and tuple 1; if we place tuples 8, 4 and 2 together, they will probably share some parts of a cache line, thus saving us CPU cycles. This is what is called the Eytzinger layout. I will put a reference to the scientific paper in the slides, and it could enhance search within a B-tree when the B-tree is in buffers. But forking B-tree is not that easy. Why were we able to fork GiST so easily? Because all the code of GiST is contained in just two folders. The code of B-tree is interleaved with all the other parts of Postgres: you can just try to search for it, checking in other places whether the code is actually dealing with B-tree and no other indexes. It's a big amount of work to fork B-tree and a yet bigger amount of work to maintain it, because B-tree is now actively developing. You have similar problems with caches in the hash index, but you still have GiST, SP-GiST, GIN, bloom and BRIN indexes to experiment with. Also, if we could fork B-tree, we could implement something similar to a log-structured merge tree, which is a write-optimized B-tree: we keep a small B-tree for current insertions, which stays in cache, and then we merge and merge these B-trees to have fewer trees to execute scans over. Well, finally, now you know what the caveats are and how to fork an extension from core into your own extension. And remember that with indexes this is more a technique for learning and trying your ideas. Indexes do not always make queries run faster, especially DIY indexes. Research does not always yield good results; a bad result is also a result. But it's always an exciting adventure. Thanks for watching. I will be more than happy to answer your questions at PGCon, or later if you wish to contact me via email or Telegram. Thank you very much and see you later.
|
In Postgres, we already have the infrastructure for building an index as an extension, but there are not so many such extensions to date. Yet there are many discussions of how to make core indexes better. This is a talk about extracting an index from core into an extension and what can be done with the usual indexes. Some of these optimizations are discussed on @hackers and can be expected in core, others will never be more than an extension. We will discuss ideas from academic research and the corresponding industrial response from developers, communities, and companies. There will be a short live-coding session on creating a DIY index in Postgres. I'll show how to extract an access method from core into an extension in 5 minutes and talk about ideas for enhancing indexes: learned indexes, removing opclasses in favour of specialized indexes, cache prefetches, advanced generalized search (a GiST alternative) and some others.
|
10.5446/52134 (DOI)
|
Good morning, everyone. Thank you for coming to this session. Today my topic is hacking the query planner, again. Before getting started, I'd like to introduce myself a little bit. My name is Richard Guo and I'm from Beijing, China. I'm now working at VMware on Greenplum. This is the agenda of my talk. First, I will introduce a little bit about what the planner does, and then the second part is about the different phases of planning. So let's get started. This is the overall backend structure. First, we have the parser to determine the semantic meaning of the query string, and then the rewriter to perform view and rule expansion. Next is the planner, which designs an execution plan for the query, and last is the executor, which runs the plan. So what the planner does is to find a correct execution plan with a reasonably low cost for a given query. Actually, a given query can be executed in many different ways; some may be faster and some may be slower. So, if it is computationally feasible, the planner will examine each of these possible ways. Each way is represented by a path data structure, and at the end the planner selects the cheapest path, which can then be expanded into a full-fledged plan. So now let's move on to the second part, the different phases of planning. Overall, the planning process can be divided into four phases. The first phase is preprocessing: at this phase, we simplify the query if possible, and we also collect information such as the join ordering restrictions. The second phase is scan/join planning, which is to decide how to implement the FROM and WHERE parts of the query. The third phase is post scan/join planning, which deals with plan steps that are not scans or joins, such as aggregation, DISTINCT or GROUP BY. And the last phase is postprocessing, which converts the result into the form that the executor wants. Okay, now let's go through each phase one by one. In early preprocessing, we try to simplify the query through several kinds of parse-tree transformations, including simplifying scalar expressions, inlining simple SQL functions, and simplifying the join tree. To simplify scalar expressions, we reduce any recognizably constant subexpressions of the given expression tree. For example, for function calls, if the function is strict and has any constant null inputs, we can reduce the call to a constant null; and if the function is immutable and has all constant inputs, then we can hand it to the executor to evaluate right away. For boolean expressions, we can do simplifications such as reducing "x OR true" to constant true, and reducing "x AND false" to constant false. Our assumption here is that the subexpression x has no important side effects. For CASE expressions, we can also simplify if there are constant condition clauses; for this expression, for example, we return x plus 1, not the error message. So by simplifying, we can do computations only once, not once per row, and we can also exploit constant-folding opportunities exposed by rule expansion and SQL function inlining. Next, expand simple SQL functions inline. Here is an example: we have a function increase4 defined as its argument plus 2 plus 2. For the query "select increase4(a) from foo", we inline the function and transform it to "select a plus 4 from foo". Please note that we also perform constant folding within the function: we can reduce 2 plus 2 to 4.
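A hedged sketch of what that looks like in a psql session; note that the function body here is written with the constants grouped together (2 + 2 + x) so the constant-folding step just described can actually collapse them after inlining:

    CREATE TABLE foo (a int);
    CREATE FUNCTION increase4(x int) RETURNS int
        AS 'SELECT 2 + 2 + x' LANGUAGE sql IMMUTABLE;

    EXPLAIN (VERBOSE, COSTS OFF) SELECT increase4(a) FROM foo;
    -- the Output line should show something like (4 + foo.a): the SQL function
    -- was inlined and 2 + 2 was folded, so there is no per-row function call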
So by inlining simple SQL functions, we can avoid the rather high per-call overhead of SQL functions, and we can also expose opportunities for constant folding within the function expression. Next, simplify the join tree. There are several cases where we can transform the join tree into a more efficient form. At this step we don't have any statistics to use, so we do the transformations based on predefined rules. For ANY and EXISTS sublinks in WHERE and JOIN/ON clauses, we can try to transform them into semi-joins. For subqueries in the join tree, we can try to merge them into the parent query. We also try to reduce join strength, by reducing outer joins to inner joins or anti-joins. Here is a query with an EXISTS sublink: select stuff from foo where exists (select 1 from bar where foo.a = bar.c). We can transform it into a semi-join, since it's correlated with the parent query, and we are also able to discard its target list. So it can be transformed into: select stuff from foo semi-join bar on foo.a = bar.c. Here is the join tree before the transformation: as we can see, the sublink appears as a qual clause in this join tree. And this is the join tree after the transformation: the sublink has become a semi-join. Okay, here is another query containing a subquery: select stuff from foo join (select bar.c from bar join baz on true) as sub on foo.a = sub.c. This subquery is simple enough that we can pull it up into the parent query, so it can be changed to: select stuff from foo join (bar join baz on true) on foo.a = bar.c. Note that for the transformed query we can join the three base tables foo, bar and baz in any order, and considering that there is a join clause between foo and bar, most likely joining them first would give us a better plan. But with the original, untransformed query, we would have to join bar and baz first if we don't do the pull-up. Here is the join tree before the transformation: as we can see, the subquery appears in the range table list of the parent query, and it would be planned independently if we didn't do the pull-up. And this is the join tree after the transformation: the subquery has been merged into the parent query. So by pulling up subqueries, we may produce a better plan, because we consider them as part of the entire plan search space; otherwise, the subquery would be planned independently and treated as a black box during planning of the outer query. Next, reduce join strength. Let's look at this example first: select something from foo left join bar on something where bar.d = 42. We know the equality operator in the WHERE clause is strict, so for any row where the left join fills in nulls for bar's columns, the strict operator will always return null, causing the upper WHERE to fail. Therefore, there is no need for the join to produce the null-extended rows in the first place, which makes it a plain inner join, not an outer join. So we can conclude that if there is a strict qual above the outer join that constrains a var from the nullable side of the join to be non-null, then we can reduce the outer join to an inner join. And now let's look at another example: select stuff from foo left join bar on foo.a = bar.c where bar.c is null.
Okay, for this query, we know the join clause foo.a = bar.c is strict for bar.c, so only null-extended rows can pass the upper WHERE clause "bar.c is null". And we can conclude that what the query is really specifying is an anti-join. That is to say, if the outer join's ON clause is strict for any nullable var that is forced to be null by the higher qual levels, then we can reduce the outer join to an anti-join. So far, everything we have talked about has been parse-tree transformations. In later preprocessing, we distribute all kinds of qual clauses and build the equivalence classes, gather information about the join ordering restrictions, remove useless joins, and so on. First let's look at how we distribute the qual clauses. In general, we want to apply each qual at the lowest possible join level. When dealing with inner joins only, we can push a qual down to its natural semantic level; by natural semantic level I mean the level associated with just the base rels used in the qual. However, when dealing with outer joins, a qual may be delayed and cannot be pushed down to its natural semantic level. For this kind of outer-join-delayed qual, we mark it with required relids, which include all the rels required to form the outer join. By pretending that the qual references all the rels required to form the outer join, we prevent it from being evaluated below the outer join. Usually there are two cases where a qual can be outer-join delayed. For the first case, let's look at this query: select stuff from foo left join bar on foo.a = 42. It's a left join, and it has a join qual, foo.a = 42. The join qual mentions only the non-nullable side rel, which is foo. If we pushed this qual down below the outer join, it would become a filter on foo and would remove all the rows where a is not equal to 42 before the join. As a result, we might lose some null-extended rows that should have been in the final result set. So we can conclude that an outer join's ON quals that mention only non-nullable side rels cannot be pushed down below the outer join. For the second case, let's look at this query: select stuff from foo left join bar on foo.a = bar.c where coalesce(bar.c, 1) = 42. Okay, it's a left join, and there is a qual above this outer join, coalesce(bar.c, 1) = 42, which references the nullable var bar.c. If we pushed this qual down below the outer join, it would become a filter on bar and would reject rows before the outer join. As a result, it might cause the outer join to emit null-extended rows that should not have been formed, or that should have been rejected by the qual. So we can conclude that a qual appearing in WHERE or in a JOIN above the outer join cannot be pushed down below the outer join if it references any var from the nullable side. Okay, let's move on. As for equivalence classes: for mergejoinable equality clauses "a = b" that are not outer-join delayed, we use equivalence classes to record this knowledge. An equivalence class represents several values that are known to be all transitively equal to each other. The equivalence clauses are removed from the standard qual distribution process; instead, equivalence-class-based qual clauses are generated dynamically when needed. An equivalence class also represents the values that pathkeys order by.
The logic here is that since we know x equals y, ORDER BY x should be the same as ORDER BY y. Now let's talk about join order restrictions. If we have outer joins, we cannot perform the joins in just any order: outer joins induce join order restrictions. One-sided outer joins like LEFT JOIN or RIGHT JOIN constrain the order of joining partially, but not completely. In general, non-full joins can be freely associated into the left-hand side of an outer join, but in some cases they cannot be associated into the right-hand side. Let's look at the two examples below. For the first example, (A LEFT JOIN B ON Pab) INNER JOIN C ON Pac: this inner join can be associated into the left-hand side of the left join, so we can get (A INNER JOIN C ON Pac) LEFT JOIN B ON Pab. Now the second example, (A LEFT JOIN B ON Pab) INNER JOIN C ON Pbc: this time the inner join cannot be associated into the right-hand side of the left join. Even though there is a join clause between B and C, we cannot join them first, because that would be an illegal join order. For the non-full joins, we flatten them into the top-level join list so that they can participate fully in the join order search, and meanwhile we record information about each outer join in order to avoid generating illegal join orders. Another thing we do in the later preprocessing is to remove useless joins. For a left join, if its inner rel is a single baserel, the inner rel's attributes are not used above the join, and the join condition cannot match more than one inner-side row, then the join just duplicates its left input, and in that case we can remove the join altogether. For example, take the query SELECT foo.a FROM foo LEFT JOIN (SELECT DISTINCT c AS c FROM bar) AS sub ON foo.a = sub.c (see the sketch after this paragraph). For this query the inner rel is a single baserel, the sub-query, and the join condition cannot match more than one inner-side row, since attribute c is unique because of the DISTINCT; so this left join can be removed. Now let's move on to the second phase, scan/join planning. At this phase we deal with the FROM and WHERE parts of the query. We also take note of ORDER BY, so that we can, for example, design merge joins to avoid a final sort if possible. For example, for the query SELECT ... FROM foo JOIN bar ON foo.a = bar.c AND foo.b = bar.d ORDER BY b, a: when generating a merge join, we could sort table foo by (a, b), and we could also sort it by (b, a); but since we know the final output is requested to be sorted by (b, a), we choose the sort keys (b, a) for table foo so that we can avoid the final sort. This second phase is basically driven by cost estimates. For scan/join planning, what we do is first identify feasible scan methods for the base relations and estimate their costs and result sizes, and then search the join order space, using a dynamic programming algorithm or the heuristic genetic query optimizer, to identify feasible plans for join relations. Meanwhile we honor the join ordering restrictions to avoid generating illegal join orders. For each relation, base rel or join rel, we produce one or more paths.
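A quick, hedged way to check the join-removal case from the example (same made-up table names) is to look at the plan: when the left join is provably useless, the plan contains only a scan of foo.

    EXPLAIN (COSTS OFF)
    SELECT foo.a
    FROM foo
    LEFT JOIN (SELECT DISTINCT c FROM bar) AS sub ON foo.a = sub.c;
    -- when the join can be removed, the plan is just a scan of foo,
    -- with no join node and no scan of bar at all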
Multi-way joins have to be built up from pairwise joins, because that's all the executor knows how to do. For any given pairwise join step, we identify the best input paths and join methods, such as nested loop, merge or hash join, with straightforward cost comparisons resulting in a list of surviving paths, much as for base relations. Finding the best ordering of the pairwise joins is the hard part. There are usually many choices of join order for a multi-way join query, and some of them will be cheaper than others. If the query contains only inner joins, we can join the base relations in any order; but with outer joins, as we have discussed, they can be reordered in some but not all cases. We handle that by checking whether each proposed join step is legal. In the standard join search method, we construct the join tree level by level using a dynamic programming algorithm. First, assume we have already generated paths for each base relation. Then we generate the paths for each possible two-way join, then for each possible three-way join, then four-way join, and so on, until all base relations are joined into a single join relation. Here is a demo of how we construct the join tree. The query for the demo is SELECT ... FROM a JOIN b ON a.i = b.i JOIN c ON b.g = c.g. As a first step, we generate the paths for each base relation. Then we generate the paths for the two-way joins. For A join B, we create two paths, for example a hash join and a merge join; we then figure out that the hash join is inferior, so we just discard it. For A join C, we don't try to make a join at all, because there is no join clause between them. For B join C, we create two paths and discard the inferior one. Next we generate the paths for the three-way join. For join rel AB with base rel C, we create two paths and discard the inferior one. For join rel BC with base rel A, we also create two paths, but both are discarded when competing with the existing paths. At last we pick this path: merge join A and B first, and then merge join with C. Join searching is expensive: an N-way join problem can potentially be implemented in N-factorial different join orders, so usually it's not feasible to consider all the possibilities. What we do is use a few heuristics: as shown in the demo, we don't consider clauseless joins, and with too many relations (by default 12) we fall back to the genetic query optimizer. So, as heuristics used in the join search: we don't join relations that are not connected by any join clause, unless forced to by join order restrictions; and for large join problems, we try to break them down into sub-problems according to the collapse limits, as shown in the settings sketch below. For example, suppose we have a query with 10 tables joined together; by setting join_collapse_limit to 4, we can break it down into sub-problems of no more than a four-way join, and then handle each sub-problem independently. Now let's move on to the third phase, post scan/join planning. At this phase we deal with GROUP BY and aggregation, window functions, and DISTINCT. We also deal with set operations like UNION, INTERSECT and EXCEPT, and then we apply a final sort if needed by ORDER BY.
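The heuristics and limits mentioned here correspond to real planner settings; a hedged example of adjusting them (the values are just illustrative) looks like this:

    -- beyond this many relations the planner falls back to the genetic optimizer (GEQO)
    SET geqo_threshold = 12;
    -- break explicit JOIN lists into sub-problems of at most this many relations
    SET join_collapse_limit = 4;
    -- the analogous limit for collapsing FROM-list sub-queries
    SET from_collapse_limit = 8;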
For each of those steps, we produce one or more paths, and at the last we add LockRows, Limit and ModifyTable steps to each surviving path. That is the third phase. The last phase is post-processing, where we expand the best path into a plan and then adjust some representational details of the plan: for example, we flatten the sub-query range tables into a single list, we relabel Vars in upper plan nodes as OUTER or INNER Vars to refer to the outputs of their sub-plans, and we also remove unnecessary SubqueryScan, Append and MergeAppend plan nodes if we figure out that they are not doing anything useful. That is the last phase, and that is all for my talk. Thank you.
|
This talk will focus on how the planner works from a developer’s view and elaborate on the process of converting a query tree to a plan tree in detail. We can divide the planning process into 4 phases: preprocessing, scan/join planning, post scan/join planning and postprocessing. In this talk, I will explain what work is performed in each phase and what the motivation is for performing it. Topics will include: transforming ANY/EXISTS SubLinks into joins, flattening sub-selects, preprocessing expressions, reducing outer joins to inner joins, distributing quals to rels, collecting join ordering restrictions, removing useless joins, the join searching process, upper planner path-ification, cost estimation, etc. Tom Lane's 2011 talk "Hacking the Query Planner" gave an overview of the query planner. In this talk, I will cover the internals of the query planner in more detail and in a way closer to the planner code. I hope this will be helpful in understanding the internals of PostgreSQL's planner and in hacking the planner code to improve it.
|
10.5446/52135 (DOI)
|
Hello everybody, this is my presentation on implementing system-versioned temporal tables. It covers everything about system-versioned temporal tables, starting from what they are up to their implementation. If anyone wants the slides, I uploaded them to my PGCon account, so they are available on the conference website. First, a few things about me. My name is Surafel Temesgen. Currently I am working as a DBA, I also have work experience in full-stack web application development, and for the last one or two years I have been participating in PostgreSQL development. My latest contribution is adding the WITH TIES option to the FETCH FIRST clause; it is in PostgreSQL 13. I can be found on Twitter at the address on the screen, but I haven't tweeted much yet. In this presentation, first we will see the definition of a system-versioned temporal table and how it differs from the other kind of temporal table, and then we will see its applications. In PostgreSQL we have at least two options for implementing system-versioned tables: one is using two tables for storage, one for current data and the other for history data; the other alternative is using one table for storing both current and history records. I will talk about both options and the advantages of implementing it with the one-table approach. Finally, we will look at the syntax and functionality that system-versioned temporal tables introduce. System-versioned temporal tables were introduced into the SQL standard, along with application-time period tables and other temporal specifications, in the 2011 release. They are about the retention of deleted records along with current ones, automatically by the database management system, and the ability to query both current and history records. Application-time periods, in contrast, are for meeting business models that have time-varying information, and they are managed by the user rather than by the database system. It is legal to have both system versioning and an application-time period in a single table. We can use system-versioned temporal tables for the sake of easily recovering from accidental data modification: if we enable system versioning on a table, we can go back to any point in time using an SQL query. We can also use them for auditing, to check how things looked in the past. Another interesting use case is to model reality in place of an application-time period: if the modeled reality changes at the same time as the database modification, we can use a system-versioned temporal table, because it has the advantage of being maintained by the system automatically. We can also use it to analyze how things have looked over time. In PostgreSQL we have at least two designs for implementing system-versioned temporal tables. One is on the wiki page and describes an implementation using two tables, one for history data and the other for current data, but I did not go with that approach. I propose to implement it using only one table for storing both current and history records, and that patch is in the current commitfest. I will talk about both approaches and describe my approach and its advantages in more detail. The two-table approach involves two tables, one for current data storage and the other for history data. One big advantage of this approach is that the performance of non-temporal queries, which are the frequent ones, is not affected by history data, because they only access the current table; but nowadays we can achieve similar performance by partitioning.
In that approach, all the records generated by DELETE and UPDATE operations are inserted into the history table using a trigger, and multiple columns have to be created implicitly to correlate the current and history records. In the one-table approach, both current and history data are stored in one table. It uses the row end time column value to classify the records: if the value is infinity, it is a current record; otherwise it was deleted at the time specified in that column. In my implementation, system versioning is treated like a kind of generated column, because the syntax is similar and generated columns have passed the test of time. The result of a non-temporal query does not have to include or consider history records, so a filter qual is added to such queries implicitly. There are two options to create a system-versioned table. If it is okay to use the default system-versioning column names, just adding the WITH SYSTEM VERSIONING keyword at the end of the table definition is enough; it enables system versioning with the default system-versioning column names. But if you want to use different names for the system-versioning columns, you have to use the expanded standard option as specified on the screen: state two timestamp with time zone columns with GENERATED ALWAYS AS ROW START and ROW END specifications. In my implementation, both current and history data are stored in one table, but it is perfectly legal to have multiple records with the same primary or unique key value in the past; the constraint only has to be observed on current records. In order to allow this, the row end time column is added to primary and unique key constraints implicitly. In current records, the row end time column value is always infinity, which forces primary and unique key values to be unique across current records; but in history records the row end time column values differ, so primary and unique key values can be repeated. The system-versioning columns are set automatically: row start time is set to the current transaction time, and the row end time column is set to infinity. There are differing opinions on the appropriateness of setting the row end time column value to infinity. If we take this value literally, it means the row will be current until infinity, but that's not true: the record may be updated or deleted sometime before infinity. A more meaningful value to store would be a variable "now", meaning the data is current until now, which I think would be the more appropriate value; but a storable "now" variable doesn't exist, so even the SQL standard says to use the largest value of the data type, and in PostgreSQL that is infinity. Normally an update doesn't occur in place: it is performed by marking the tuple to be updated as deleted, which can actually be removed later by another process, and then inserting the updated tuple into the table. For the system-versioned temporal table implementation, the deleted tuple is instead inserted back with its row end time column value set to the current transaction time, which makes the record a history record, and the updated tuple is inserted as a current record by setting its row start time column value to the current transaction time and its end time column value to infinity. On delete, the record isn't actually removed; instead it is inserted back with the row end time column value set to the current transaction time, so it is no longer a current record. On select, a non-temporal query should not incorporate or consider history records, so a history-data filter qual is added to it implicitly.
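To make the two creation options described above concrete, here is a rough sketch of what they would look like; the exact syntax and default column names are defined by the patch, and the table and column names here are made up, so treat this as an approximation rather than the definitive syntax:

    -- default system-versioning column names
    CREATE TABLE t (id int PRIMARY KEY, payload text) WITH SYSTEM VERSIONING;

    -- explicit column names, using the standard-style generated-column specification
    CREATE TABLE t (
        id         int PRIMARY KEY,
        payload    text,
        valid_from timestamp with time zone GENERATED ALWAYS AS ROW START,
        valid_to   timestamp with time zone GENERATED ALWAYS AS ROW END
    ) WITH SYSTEM VERSIONING;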
Advantages of the one-table approach: altering the table is not any more complex than altering a non-temporal table, whereas in the two-table approach modifications to the current table have to be propagated to the history table. Another advantage is that there is no need for technical columns to correlate history and current data. And if we support implicit updates for marking deleted tuples as history data, we can enjoy many optimizations, because there will be no need for vacuum except for freezing, and the data will be stored sorted by the row start and end time columns automatically, so we can harness multiple optimizations for temporal query processing on top of this behavior. The first form of temporal query is the FOR SYSTEM_TIME AS OF clause. It is used when there is a need to see the table as of a specified point in time in the past. It returns all the records current at the specified point in time, meaning records whose row start time column value is less than the specified point in time and whose row end time column value is greater than or equal to it; the syntax is shown on the screen. The other form is the FROM ... TO clause, which returns all the records current within the specified time range, including time point one but excluding time point two. The BETWEEN ... AND clause is the same as the FROM ... TO clause except that it also includes the records current at time point two; that is the only difference. BETWEEN ASYMMETRIC is the same as plain BETWEEN, just more descriptive. With the BETWEEN SYMMETRIC clause, the order of the time point specifications does not matter, because it works by picking the lesser time point and returning the records up to and including the greater time point. There is also new syntax to enable system versioning on an existing table, which is ALTER TABLE ... ADD SYSTEM VERSIONING. It uses the default system-versioning column names, and if the table contains data it will be marked as current data by setting the row start time column value to the current transaction time and the row end time column value to infinity. If we want to use different column names for system versioning, we can do it by issuing an ADD COLUMN statement for the row start and end time columns with the desired column names in one command, as seen on the screen; but it is forbidden to issue the ADD COLUMN commands separately, because that would leave system versioning in an unusable state. If we don't want system versioning anymore, or if we want to delete the history data accumulated up to now, we can issue DROP SYSTEM VERSIONING on the table, which drops the system-versioning columns along with the history data; we can use it together with the ADD SYSTEM VERSIONING command to limit the growth of a system-versioned table. There are also other alternative commands to disable system versioning. So if anyone wants to look at the patch and review it, it is in the current commitfest; please review it if possible. Thanks.
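For reference, the temporal query forms and the ALTER TABLE commands discussed in this talk would look roughly like the following; this is a hedged sketch of the proposed syntax, with a made-up table name and timestamps, and the exact grammar is defined by the patch:

    SELECT * FROM t FOR SYSTEM_TIME AS OF '2020-01-01 00:00:00+00';
    SELECT * FROM t FOR SYSTEM_TIME FROM '2020-01-01' TO '2020-02-01';
    SELECT * FROM t FOR SYSTEM_TIME BETWEEN '2020-01-01' AND '2020-02-01';
    SELECT * FROM t FOR SYSTEM_TIME BETWEEN SYMMETRIC '2020-02-01' AND '2020-01-01';

    ALTER TABLE t ADD SYSTEM VERSIONING;    -- existing rows are marked as current
    ALTER TABLE t DROP SYSTEM VERSIONING;   -- drops the versioning columns and history data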
|
SQL standard 2011 introduced a temporal table concept which describes a table that stores data with respect to time instances. It has two parts: application-time period tables and system-versioned temporal tables. Application-time periods are for meeting the requirements of applications based on application-specific time periods which are valid in the business world, rather than on the basis of the time the data was entered into the database, while system-versioned temporal tables are about the retention of old data along with current data, automatically by the database management system, and the ability to query both current and history data. A table can also be both a system-versioned table and an application-time period table.
|
10.5446/52139 (DOI)
|
Hi, PGCon. This is Thomas Munro. Thanks for tuning into my talk. I'm going to be talking about a couple of projects of mine to reduce IO stalls and memory stalls in a couple of areas that I've been hacking on recently. This is a hacking-track talk describing work in progress. The title of the talk is borrowed from performance guru Martin Thompson, who popularized "mechanical sympathy" as a term for a style of programming that's very conscious of the hardware, so I'm applying that type of thinking to our favorite blue elephant. A bit about myself: I am a Postgres developer and committer. I recently joined the Postgres team at Microsoft; before that, I worked full-time on Postgres for about five years at EnterpriseDB, and I've used Postgres as an application developer since, I hate to say it, release 7.4. First, I'm going to talk about an IO project and some context around that, and then I'm going to talk about a memory project, some context around that, and some areas for other exploration. I'm going to break predictions about future disk access into three different categories. The first is not data dependent: it's just a bet that you're going to access the same data repeatedly — if you've accessed it recently and/or frequently, you're probably going to want it again soon; that's why we have caches. The second category is heuristics about sequential access: if you seem to be accessing blocks sequentially, you're probably going to keep doing that for a little while longer. That prediction is also not data dependent, so it can be done by lower levels of the storage stack, and those predictions enable us to use much larger read sizes, which can get all kinds of efficiencies from the lower levels. The third category is data-dependent, complex access patterns that require specialized logic that understands what's going on in the data. For example, if you're doing a B-tree scan, then in order to know which pages you might want next, you're going to have to look at the data and understand what it means; that can't be done automatically by lower-level parts of the system. At the moment, we rely on the operating system to detect sequential scans, and we hope that it will do something efficient with that. That mostly works pretty well, although there are plenty of cases where it doesn't; I'll mention those in a minute. Then we do some explicit hinting of random access for bitmap heap scans, where we know in advance that we're going to be accessing some non-sequential blocks and we can tell the kernel about that; there's a similar case inside some vacuum cleanup code. So that's pretty limited. The way it works is by calling PrefetchBuffer, a function in Postgres that checks if the buffer is already in the buffer pool, and if it isn't, issues a hint to the operating system, which I'll talk about in a moment. There's also some hinting about writeback, which I won't be going into in this talk; that involves the sync_file_range system call. So what does PrefetchBuffer actually do? It checks whether the buffer is already in Postgres's shared buffers. If it is, then it doesn't have to do anything at all. If it isn't, then it tells the kernel that we're going to be reading it soon, using the posix_fadvise WILLNEED hint. That's a hint to the kernel that we'll soon be doing a pread, so if it could please start organizing to get that data into its own kernel buffers, that would be great.
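(As an aside before continuing: the bitmap heap scan hinting mentioned a moment ago is driven by a real setting, effective_io_concurrency, which controls how many of these prefetch hints a backend issues ahead of its reads. A quick, hedged way to experiment with it — the table and column names here are made up — is:

    SET effective_io_concurrency = 32;   -- number of prefetch hints kept in flight per backend
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM some_table WHERE some_indexed_col BETWEEN 1000 AND 2000;
)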
Then hopefully, if the stars are aligned correctly, when we eventually call pread it doesn't need to sleep; it just returns instantly, copying the data out to user space, and that's all it has to do. As far as I know, that only actually works on Linux and NetBSD today. I looked at a whole bunch of different operating systems and I couldn't find any others where it worked. Even on those systems, it doesn't work on ZFS, which is an important file system that I really like. I'm personally interested in trying to get that fixed, but I make no promises about how and when that might happen. A bunch of work is being done by Andres Freund — and I'm hoping to help him with that, and I'm sure others are as well — on introducing real asynchronous IO to Postgres. For more on that, you can tune in to Andres Freund's talk at PGCon 2020. Even if we switch to real asynchronous IO in Postgres in the future, that doesn't really change the thing I'm talking about in this talk, which is when we should begin prefetching buffers; that's an orthogonal question to how we actually do it. There are a whole bunch of other opportunities to predict IO and prefetch things: we could do a better job with sequential scans; there are opportunities to prefetch index pages; for nested loop joins — this is much more ambitious — you could do some kind of prefetching and block nested loop join optimizations, considering several keys at once. Finally, while replaying the WAL in crash recovery or on a replica server, you pretty much know which blocks you're going to be accessing, and that's the topic I'm going to dig into next. If you were at PGCon 2018, then this might all sound very familiar, because Sean Chittenden presented the pg_prefaulter project, which is something that Joyent used to fix their problem with replication latency. If I remember correctly, they were using large RAID systems with many spindles, so they had a certain amount of IO concurrency available, but until they started doing this prefaulting they weren't able to take advantage of it. The approach that I'm proposing doesn't have a separate process: it runs inside Postgres, in fact as part of the main replication loop, because it turned out there were a whole bunch of really tricky problems to do with staying in sync that this solves. I also think it provides a more natural pathway towards proper asynchronous IO in the future. Let's take a look at the contents of the WAL. I'm going to run an insert statement to insert two rows into a table t. The table happens to have an index on it, and you can see that that simple statement generated five WAL records; you can see that with pg_waldump. It produces much more output than this, but I've just taken the interesting bits. The only thing we really want from this is the physical block references — everything else in the WAL, with some very minor exceptions, is not really relevant for prefetching purposes. Here is a simple depiction of the recovery process running. It processes the WAL record by record, and the WAL is full of instructions that say, hey, I need to insert a tuple into the heap, I need this page, please — and the recovery process is going to go and see if it can find it in the Postgres buffer pool. If it can't find it there, it'll have to read it in, so it'll go to the operating system and call pread, and that's potentially going to involve a stall.
We have to wait for the storage system to come back with that data and give it to us. That's time when the recovery process is not even running: it's not on CPU, it's just sleeping, waiting for an IO completion event to wake our process back up and for our synchronous system call to return. It's a terrible waste of time. It's bad because the WAL might have been generated by a primary server that was running many backends. Perhaps there were 50 backend processes, and although each of them was suffering from IO stalls from time to time, those IO stalls were overlapping. They wrote a bunch of records into the WAL that recovery is now having to replay sequentially, so it takes all of those overlapping stalls and turns them into non-overlapping stalls that get added together, and you get replication lag. Our goal here is to try to get back the overlapping stalls so that you can hopefully run at the same speed as the primary server ran, or faster. The idea with recovery prefetching is to have a second read head that runs slightly further ahead of recovery in the WAL. It simply looks at all the records coming down the pipe, checks which blocks they reference, checks whether they're already in the buffer pool, and if they're not, asks the operating system to begin reading each block in, so that hopefully, by the time recovery gets around to needing the block, it's already there and recovery never has to sleep. That's the goal. It assumes that the storage system can execute more than one read at the same time, and you have to tell it with a setting how many parallel reads it should initiate. Since for now we don't have any completion-event delivery to Postgres, we assume that an asynchronous read operation has completed when we finally replay the log record that caused the read to be initiated; that's a conservative model of how many IOs are currently in flight as a result of this scheme. So how does it look to use? There's a pair of controls, maintenance_io_concurrency and max_recovery_prefetch_distance. maintenance_io_concurrency is a general GUC that already exists and is used to control prefetching for any kind of maintenance task, and I'm categorizing WAL prefetching, or recovery prefetching, as a kind of maintenance task. max_recovery_prefetch_distance is the main on/off switch for this feature; in the patches I've been posting to the hackers list it defaults to on, set to 256 kilobytes pretty arbitrarily, and minus one turns it off. And there's a view, pg_stat_prefetch_recovery, that shows you some counters: you can see how many pages have been prefetched already and how many have been skipped for various reasons. The most interesting one is probably skip_hit, which tells you how many pages were found to be already in the buffer pool, so there was no reason to do any prefetching; the other reasons are a bit more technical and I'm not going to go into them in this talk. Another interesting number is the queue depth, which tells you how many prefetches are in flight at this very moment. There are also averages for those, and you can reset all of this using the standard pg_stat_reset_shared if you want to clear the counters. Here's an example that shows unpatched Postgres first, using iostat to measure the IO generated by a pgbench run with 16 connections and 16 threads.
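(To make the controls just described concrete before looking at that example, here is a hedged sketch of how they look from SQL. These names come from the experimental patch described in this talk, so max_recovery_prefetch_distance and the view only exist with the patch applied, and the argument to pg_stat_reset_shared is my assumption rather than something stated here:

    ALTER SYSTEM SET maintenance_io_concurrency = 10;          -- existing GUC, reused by the patch
    ALTER SYSTEM SET max_recovery_prefetch_distance = '256kB'; -- patch-only knob; -1 disables it
    SELECT pg_reload_conf();

    SELECT * FROM pg_stat_prefetch_recovery;                   -- prefetched/skipped counters, queue depth
    SELECT pg_stat_reset_shared('prefetch_recovery');          -- assumed reset target name
)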
You can see that the primary server is generating 3466 reads per second, and it has a queue depth somewhere around 16 — it fluctuates, but something like that — so it's roughly reflecting the number of clients, which are all stalling quite often as they fetch pages from disk. That's generating the worst-case scenario for unpatched Postgres and the best-case scenario for my experimental patch. You can see that the replica is only generating 250 reads per second, its queue depth floats around one, and it's just not able to keep up — and that's pretty much reflecting the number of concurrent IOs, 1 versus 16. I should add that this experiment was done with full page writes off, which gives you the absolute best-case scenario; I'll talk about that in a moment. Using the patch with the default maintenance_io_concurrency, which is 10, the replica is now able to generate 1,143 reads per second. We know that's not quite enough, because it needs to be somewhere over 3,000 to keep up with the primary server, and we can see that the queue depth is now hovering around 7. We were targeting 10, but for various reasons there were really only 7 in flight at a time, because of slightly different accounting: we don't count things that were already in the kernel's cache — we consider those prefetched even though no IO hit the storage layer — and we count the end of the prefetch activity very conservatively, when we finally pread the page, whereas it actually finished before that. So for various reasons the queue depth seen in iostat is going to be lower. If I crank it right up — here I went up to 50 — it's finally able to generate more concurrent IOs than the primary is generating with 16 sessions, and that finally gets me to a situation where it's able to keep up, and in fact go faster and therefore catch up, and so that solves the problem. So that sounds pretty good, but it isn't always as good as that — that was pretty much the best-case scenario. It works best with full page writes off, because full page writes avoid the need for reads. Mostly you hear people complaining about the bad things about full page writes, which are that they generate a ton of extra WAL on the primary server, but they do have upsides as well, and one of them is that they avoid IO stalls on replicas, because if you're completely overwriting a page there's no need to read it. It still works pretty well even with full page writes on if there are infrequent checkpoints and certain access patterns, so that pages get modified many times between checkpoints and the working set is much larger than memory. It also works well for systems that have a storage page size larger than Postgres's 8K pages. For example, I know from Joyent's talk that they were using large ZFS record sizes — I think 16K, I can't remember — which means that even when there's a full page write, eventually that page gets written back, and the operating system still has to read before it writes to actually do that. So prefetching would be good there, although as I mentioned, this particular prefetching technique won't work with ZFS yet.
It would also be useful with full page writes on if we were to adopt an idea that appeared on the mailing list recently. Somebody asked why we don't take advantage of checksums: the whole full-page-writes thing is about not trusting pages that might be torn because of power loss, but if you've got checksums on and you read a page, you ought to be able to say, well, if the checksum passes then this must be a non-torn page. So you could actually read pages even if you have a full page image in the WAL, and then you might be able to skip a whole lot of work if you read the page and find that its LSN is already high enough. That's something we're not taking advantage of: we finish up replaying a lot more than we have to because of full page writes. If we were to take up that idea, then this kind of prefetching would become more valuable with full page writes on — if you follow; it's a bit confusing. Another problem, just a small implementation problem really that requires more work, is that currently there's a separate xlogreader to decode the WAL for prefetching purposes, and that's something that would need to be improved. That's about all I have to say about that project. It's actively being proposed for Postgres 14, and there's a commitfest entry and a thread and everything — have a look. So now I'm going to change gears a bit and talk about memory. While I was working on parallel hash joins for Postgres, I read a whole bunch of different papers about different aspects of that problem, and this one caught my eye. It talks about how hash joins suffer from data cache misses. That's not so surprising when you think about what a hash join does: it's accessing memory in a pattern that the hardware prefetching systems can't predict, so unless the whole thing fits in some level of the cache hierarchy, you're going to have a load of misses. This paper investigates one way of dealing with that. There are designs that try to avoid cache misses by partitioning very carefully so that the hash table partitions fit into, say, L3 cache or maybe even L2. The problem with L3 cache is that it's shared by multiple cores; the problem with L2 cache is that it's tiny. So this is quite a complicated thing — in the survey papers that I read it sounded incredibly complicated, and it's not that clear that it always wins — so that's not something I tried to do for parallel hash join. But it stuck in my mind as an interesting problem. The alternative approach in this paper I think is pretty interesting: the idea is to use the prefetch instructions that you find in all modern architectures at just the right time to get things into your caches before you need them. So it's pretty much the same thing as we were doing a few slides back, except that was disk and this is memory. There are some famous examples of people complaining loudly that this stuff never works. In particular, if you try to prefetch just one pointer ahead in a linked list or something like that, it just doesn't work — it doesn't make any sense, it's not really a pipeline at all, it's just not helping. And there was a famous case where the Linux kernel used to have a bunch of prefetch calls sprinkled around, and they all got ripped out because they were actually making things slower. For one thing, it was prefetching off the end of chains, and it wasn't prefetching the head node because it didn't know where that was — it was prefetching the next one, but the list length was usually one.
So the next pointer was usually null, and that was causing some kind of stall itself. There are other examples you can find on the internet of people saying, hey, don't use this, it never works. But if you can get far enough ahead, it clearly does work, and it's pretty easy to measure that it's a useful technique. So I tried to do that with Postgres hash joins. Let's run a couple of queries to demonstrate these effects. First, on unpatched master, you can see that when the query gets divided into 256 batches, so that the hash table fits into the L3 cache on the system — it's 2.4 megabytes there, and I think probably 8 megabytes of L3 cache here — it runs in 4.2 seconds and generates 6 million LLC misses. But when I tell it to use 1 gigabyte of work_mem, so that no batching is required, it now generates 28 million LLC misses — a pretty crazy number — and we see that it now takes 5.8 seconds. So adding more memory made this hash join slower. In the patched version, it goes from 4.2 down to 4 seconds when the hash table is small enough for the L3 cache, so that's an improvement already, with a similar number of LLC misses, which I think is a clue that those misses are coming from the partitioning phase. And then in the 1 gigabyte version, it now runs in 2.7 seconds: it has loaded 482 megabytes of data into the hash table, there's no partitioning phase, and it's winning. And it generated only — I say only — 2.4 million LLC misses. So this seems pretty successful, and although this is a pretty contrived query, you can measure speedups in TPC-H queries and so on. I haven't done good enough testing to include any on slides just yet, but that's something I'm looking into. So why does that work? Well, remember that hash joins have two phases — or three, depending on whether you partition first. In the build phase, we spin through all the tuples on the inner side, copy them into memory and insert them into the hash table. The change here is that instead of inserting each tuple into the hash table immediately, we just load them into a little buffer that knows which bucket they need to go into. It has a size — I think I was using 64 here; it doesn't seem to be too sensitive, as long as it's more than a few — and whenever that buffer gets full, I flush it: that's when I do all the insertions into the hash table. That on its own is responsible for a small amount of the speedup, even without any prefetching. And then you can add the prefetching: now that you've got a whole list of tuples that you know you're going to insert — you've got the pointers, you know the buckets — you can prefetch the bucket headers and then insert them, and it goes faster. One of the reasons for going from 4.2 seconds to 4 seconds is just that rearrangement of the code, because the CPU is now able to reorder some work; it doesn't have dependencies on values that haven't been computed yet, and so on. That's a kind of mechanical sympathy that's good to be aware of. The other thing is that once you add the prefetches, it won't miss when you actually do the insertion. That part is relatively simple, and I did it first — actually quite a long time ago, more than a year ago I think. I knew it was effective, but I also knew that the main part of this problem is really the probe side, because the probe phase is typically bigger.
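(Returning briefly to the measurement above: a rough, hedged way to reproduce its shape — the table and column names here are made up, and real numbers depend entirely on your data and hardware — is to run the same join at two work_mem settings, compare the timings, and watch cache-miss counters with an external tool such as perf:

    SET work_mem = '4MB';     -- many batches; each batch's hash table is roughly cache-sized
    EXPLAIN (ANALYZE, COSTS OFF)
    SELECT count(*) FROM fact f JOIN dim d ON f.id = d.id;

    SET work_mem = '1GB';     -- single batch; hash table much larger than the CPU caches
    EXPLAIN (ANALYZE, COSTS OFF)
    SELECT count(*) FROM fact f JOIN dim d ON f.id = d.id;

On an unpatched server the single-batch run can come out slower despite the extra memory, which is exactly the cache-miss effect the prefetching patch is targeting.)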
We usually probe with a larger relation, and it was a whole lot less obvious how to do it. What I've come up with so far is that, in order to get my hands on tuples from the outer side so that I can see far enough ahead, I create a little buffer of extra slots and copy tuples into those slots, and then the rest is fairly obvious: you run a sort of pipeline of computing hash values and prefetching. In this case you have to prefetch further ahead than when inserting: when we're inserting we only need to prefetch the bucket itself, because that's the only thing we're going to touch, whereas when you're probing there's actually a whole chain of things you're going to finish up touching — the hash table bucket, the tuple it points to, and then the tuple that that one points to — and figuring out how to get all of those things prefetched in time is the name of the game. I think ideally there would be some much more efficient way to look further ahead at the tuples coming from the sub-plan than this concept of copying everything into extra slots. I know some other nodes do that kind of thing, like sorting and so on, but it feels like there should be an efficient way to move things around without having to materialize things all over the place. I haven't got further than a basic copy-based system for now, because I just wanted to prove the concept of the thing I was really trying to research, which is the hash join part. But I know that even with this apparently really stupid way of buffering tuples, it does make, for example, the TPC-H queries go faster, particularly with large memory sizes — so apparently the overhead is worth it. I think that's generally a theme with all of this kind of work, anything where you can avoid stalls: it can be worth doing quite a lot of extra work to avoid them. Well, that brings me to the end of my talk, and I hope you found some of this stuff interesting. I've put a few pointers to things that I mentioned along the way, and of course pointers to the patches. Thanks for listening.
|
This talk looks at the mechanics of memory and storage, and discusses a selection of opportunities for PostgreSQL to reduce stalls and improve performance. These include experimental and committed work done in the PostgreSQL and OS communities, along with some relevant ideas and observations found in academic papers. The following topics will be covered: * avoiding I/O stalls in recovery, index scans and joins * limiting I/O streams for parallel queries and concurrent queries * avoiding memory stalls for hash joins, sequential scans, index searches * avoiding branches through inlining and specialisation * reducing TLB misses for data and code
|
10.5446/52140 (DOI)
|
Welcome to this presentation about pgagroal, a high-performance connection pool for Postgres. My name is Jesper Pedersen and I work for Red Hat. In this talk we'll explore the architecture of pgagroal and thereby which user-visible features can be implemented. We'll briefly look at how to do a deployment and at the functionality of the management tools. Next we'll take a look at performance runs against other Postgres connection pools, and last we'll explore the roadmap of pgagroal and close with some thoughts. pgagroal is written in C and released under the 3-clause BSD license. Let's start with a question: what is a connection pool? First, a connection pool must provide database connections to clients. Second, it provides a central access point to a database cluster. So a connection pool is an advanced proxy to one or more database instances, and based on its features it can perform authentication, pool connections in order to limit the overhead when a client obtains a connection, and provide additional services around connection management. Of course, adding a connection pool adds an extra layer to the overall system architecture, so its benefit needs to be weighed against the extra complexity and management. The overall architecture of pgagroal is process based. This means that if a client crashes a connection, it won't take down the entire pool, as only that client's process is terminated. Using a process model means that we need a shared memory model in order to maintain state across all processes. pgagroal uses libev for its network communication; libev is a fast network library that supports different event mechanisms. The state of pgagroal is maintained using atomic operations, so a modern compiler is required to build the project. The Postgres native protocol is used for all communication and is implemented inside pgagroal, so no dependencies are required for that. So the only runtime dependencies are libev and OpenSSL. This is an overall picture of the pgagroal architecture. On the left we have clients connecting to pgagroal, implemented in different programming languages. They interact with a security layer that defines how they authenticate. Each client will be a process, denoted C1 to C4, and the shared memory model maintains state. pgagroal communicates with Postgres through another security layer. We'll expand on these concepts in the following slides and look at the component breakdown. First, the shared memory structure is the central data structure of pgagroal. It is an mmap'ed memory segment shared across all processes that contains the overall configuration and state of pgagroal. This includes the configuration settings from the deployment, the state of each of the connections, kept as an atomic signed char, information about the Postgres servers, limits and access control, known users, and the data structures for the connections. This structure gives all the processes a unified view of the runtime state of the entire system. For security we have two different layers, one layer towards Postgres and one layer for the connecting clients. In pgagroal we can configure authentication for the clients using different methods: trust, which accepts without a password; reject, which rejects the connection; and then three methods that require a password from the client — password, which is plain text, md5, and finally scram-sha-256, which is the most secure method.
Last, there is "all", which uses the same authentication mechanism as Postgres does for the user/database pair. So this setup provides us with the flexibility to, for example, use scram-sha-256 for the clients but use md5 or even trust when authenticating against Postgres. Password management is based on a master key that is unique for all user stores; each store uses symmetric encryption with AES-256. Clients can communicate with pgagroal using Transport Layer Security version 1.2 or 1.3 for added security. All security functionality is implemented using the OpenSSL library, version 1.0 or higher. During authentication we need a connection, and this is where the pool component comes into play. The pool maintains the state of each of the connections. It provides an API to the rest of the system for things like getting a connection, returning a connection or even killing a connection; it also has methods for management operations. The connections are kept as a flexible array member, so that only the memory required for the specified maximum number of connections is allocated; this keeps the overall memory requirements of pgagroal down. But having a connection pool isn't enough, as we may need to limit the number of connections available to a specific user or database. A limit rule defines a user, a database, or both, and the number of connections available to the rule. The pool will choose the best matching rule and allocate a connection based on that criterion. The number of active connections is maintained with an atomic unsigned short. This allows us to split the overall pool into smaller sub-pools and better control the resources used by users and databases. For all of this to work we need to be able to communicate with both clients and Postgres itself using version 3 of the Postgres protocol. pgagroal needs all the message types during startup and when authenticating using the different authentication mechanisms. Each process has a fixed memory buffer which is used for communication, which means we can allocate it in a static variable. The actual communication is done through an API that supports socket-based or SSL-based communication. Each client is a process, and it follows the same steps. First it needs to authenticate, where it obtains a connection from the pool. If successful, it sets up a pipeline instance, which we'll come back to. Next is the interaction between the client and Postgres through pgagroal. Once done, the connection is either returned to the pool or killed, and lastly the process exits. So what is a pipeline? The pipeline instance defines the behavior of the interaction from the client to pgagroal and from pgagroal to Postgres. There are currently two different pipelines in pgagroal. The first is the performance pipeline, which only looks for a Terminate message from the client and a fatal error from Postgres; this makes the pipeline extremely fast, as there is very little overhead. The second is the session pipeline, which is like the performance one but also supports TLS. The pipeline instance is triggered based on events from libev. These events are created based on the status of the underlying network, and there are different ways they are triggered, for example through the select or epoll mechanisms. Last, we have management of the pool. This is done over a Unix domain socket using a specific tool. The management layer allows a client process to send its socket descriptors to the parent process in order to transfer ownership.
The management layer also allows management operations to be implemented, such as flushing the pool, disabling a database, getting the status of the pool, or performing a shutdown. Now that we have gone through the architecture of pgagroal, we can list the user-visible feature set. pgagroal is of course a connection pool, with trust, password, md5 and scram-sha-256 security functionality. It supports authentication, prefill and pooling of connections. It can limit connections based on either database plus user, user only, database only, or the general pool. TLS is supported between clients and pgagroal. We can prefill the pool during startup and maintain a minimum number of connections based on a database and user pair; this requires a user vault to be specified. Idle connections can be removed after a specified number of seconds, or this can be turned off. In order to account for invalid connections, we can perform connection validation either when a connection is obtained from the pool or in the background on idle connections; we can also turn this functionality off. Access to a database can be turned off and enabled again. And we can perform either a graceful shutdown, which allows all existing clients to finish, or a fast shutdown that shuts down at once. pgagroal can run in daemon mode, allowing it to run in the background. User vaults are based on a master key and AES-256 symmetric encryption. And there is a runtime tool to control pgagroal and an administration tool to create the master key and the vaults. Here we can see the options for running pgagroal. There is the configuration file, which contains how pgagroal is configured and how the Postgres instance is accessed. There is the host-based access file, which defines how clients connect and which network masks are allowed. Next there is the definition of the limits for each database and user; this file is optional. Last, there is the file that defines all the known users; this file is also optional. Let's do a simple configuration — check our website for the full details of these files. The first file is pgagroal.conf, which contains the main configuration. The main section is called pgagroal. We bind port 2345 on all network interfaces. We log to a file at info level. There will be a maximum of 100 connections, with an idle timeout of 10 minutes. Validation is off, and we specify the Unix socket directory. Next we define a section called primary, which is a local Postgres instance running on port 5432. The next file is the host-based access file, pgagroal_hba.conf. We let alice connect using scram-sha-256 from all networks; bob is restricted to the 10 network but will use md5; all other users will use the same mechanism that Postgres uses. The pgagroal_databases.conf shows that alice and bob both have 5 connections prefilled each, and they can each use a maximum of 10; the remaining 80 connections will be used by all other users. Now we need to generate the pgagroal_users.conf file. First we set the master key using the pgagroal-admin tool. The master key must be 8 characters long and use at least one uppercase letter, at least one lowercase letter, at least one number and at least one special character. Once the master key is set, we can add alice and bob with their passwords. We can now run pgagroal with the configuration files we just created. The log file shows that pgagroal has started, and we can now use a Postgres client like psql to connect to pgagroal.
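As a sketch, the main configuration file walked through above would look roughly like this. The section names come straight from the talk, but the individual parameter keys are written from memory and may not match the pgagroal documentation exactly, so treat them as assumptions and check the docs before using them:

    # pgagroal.conf (approximate parameter names)
    [pgagroal]
    host = *
    port = 2345
    log_type = file
    log_level = info
    max_connections = 100
    idle_timeout = 600
    validation = off
    unix_socket_dir = /tmp/

    [primary]
    host = localhost
    port = 5432

The pgagroal_hba.conf, pgagroal_databases.conf and pgagroal_users.conf files described above follow their own line-based formats; see the project website for the exact layout.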
Once alice is done, the connection is put back into the pool. The pgagroal-cli tool provides the management operations, like flushing the pool and enabling or disabling one or more databases; it also lets you see the status of the pool and shut it down. The administration tool is pgagroal-admin, and it creates the master key and the user vaults. Now we'll take a look at the performance of pgagroal. The tests were run on Red Hat Enterprise Linux 7.7 based machines on a 10G network. We compare pgagroal against three other connection pools, which we'll call A, B and C; the same identifiers are used in all graphs. The versions used were the latest as of January 14, 2020, and all of them were configured with performance in mind. The tests were run using the pgbench tool — but please run your own benchmarks. The first graph is pgbench in prepared mode; the x-axis is the number of clients and the y-axis is the number of transactions per second. The second graph is pgbench in prepared, read-only mode; again, the x-axis is the number of clients and the y-axis is the number of transactions per second. The first thing to note is the increase in transactions per second compared to the other pools, which means that pgagroal can drive more load towards Postgres. The second important thing is that pgagroal reaches its top transactions-per-second value a lot sooner than the other pools; this could mean that a much smaller machine can be used for the pgagroal deployment in order to drive the same load. Looking at performance in general, pgagroal uses around 5 MB of RSS memory, and the overhead of each connection is around 67 KB, so pgagroal has a very small memory footprint. Due to the libev event mechanism and the fixed network buffer, there are basically zero allocations at runtime in pgagroal itself. This means that pgagroal is very cloud friendly, as it uses a limited amount of resources to manage the connections, and it scales well with the CPU resources available. In the future there will be an io_uring backend for libev, which should make pgagroal faster; this will require a Linux 5.6 or later kernel. The next release of pgagroal will be 0.7, which will have Prometheus support for monitoring, and remote management so that administrators don't have to log into the machine. Of course there will be other improvements and bug fixes as well — check the release notes once it is out. The roadmap for pgagroal is maintained on our GitHub account and contains features like failover support, in order to promote a replica instance to a primary instance; a high availability setup where multiple instances of pgagroal work together as one pool; support for running SELECTs on replicas if the transaction is read only; a transaction pipeline that returns the connection after each transaction has ended; and a query cache. Check the issues tab and vote. So if you found this talk interesting, feel free to try out pgagroal yourself and do your own performance benchmark in order to see how it compares to your existing connection pool deployment. If you like what you see, give the project a star on GitHub and follow us on Twitter. You can vote for features or create a new feature request if something is missing. And of course, contribute: every contribution to the project is most welcome as we move towards building an advanced connection pool implementation for Postgres. Thank you for your time, and I hope to see you in the pgagroal community. And we're live now for a Q&A session with Jesper. Jesper, please go ahead.
Thank you. Yeah so there was a couple of questions during my session about pggral and the first one was what the major differences was between pggral and the pools out there. And I would say that one of the major differences is that pggral is process based which basically takes a lot of the architecture is done in turn. Then there was a question about pooling modes pggral currently has and currently we have session pooling but traction pooling is on the roadmap. Then there was a question about let's see if the command rules could be scripted to be scraped for monitoring jobs such as permissed and actually the seven release just went out yesterday which has native Prometheus support on its own internal server. The sea pool has a human readable format course both in brief mode and in more else. And then the question about why pggral can't do select and replicate moment is currently on roadmap but there is definitely not to consider around this functionality. Have to have a read only action but we also have to set a stuff like location lack and the parameters around that based on the you put. And I think that was it for the questions. Okay. If that's all the questions then we're done. Thank you very much. I appreciate your help. Stan, thanks for hosting. Oh, you're welcome. My pleasure. Goodbye. Thank you.
|
pgagroal is a high performance protocol-native connection pool for PostgreSQL. pgagroal is built upon libev, a high performance network library, and uses a shared-memory process model together with atomic operations for tracking state to achieve its high performance. In this session we will explore the * Architecture * Features * Deployment * Performance * Roadmap of pgagroal.
|
10.5446/52142 (DOI)
|
Hello everyone, welcome to my session. Today I am going to talk about PostgreSQL on Kubernetes and explain how we run it at Zalanda for more than two years in production. First of all I would like to introduce myself. My name is Aleksandar Kukushkin and I work as database engineer for Zalanda. People in the PostgreSQL community know me as the Patroni guy because I am a maintainer and major contributor to the Patroni project which implements PostgreSQL high availability. Agenda of today's talk. First I will give some brief introduction to Kubernetes, then I will explain how we use PIL and Patroni in order to deploy and run highly available PostgreSQL clusters on Kubernetes. After that I will explain how Zalanda PostgreSQL operator helping us to do high level orchestration of such deployments. And finally I will go through the list of typical problems and horror stories we hit by running a few hundred production deployments. So what Kubernetes is? Kubernetes is a container orchestration system. It is built as a set of open source components which are running on one or more physical servers. In fact, the physical server could be the hypervisor which you typically get from your cloud provider and probably most of Kubernetes deployments are done on top of cloud infrastructure, not on the physical hardware. Kubernetes implements idea of infrastructure as a code. That means that you describe all your infrastructure as a set of manifest files which you keep in your source control system. In case if your application is able to scale you can configure Kubernetes to run more or less application instances depending on the incoming load and depending on how your hardware is utilized it is able to either allocate new instances from your cloud provider or shut down the running instances. One may think about Kubernetes as a next logical step to automate infrastructure after you already used Ansible chief for puppet. Probably I should also tell a few words what Kubernetes is not. Kubernetes is not an operating system. You cannot just install Kubernetes on top of physical hardware. In fact, Kubernetes is running on top of Linux system. In case if your application is not able to scale Kubernetes will not automatically make it scalable. You still have to invest into making your application scalable and supporting running multiple copies of it. If you think that after migrating all your infrastructure to Kubernetes you can fire your DevOps or system engineers you are totally wrong because Kubernetes is very complex systems using a lot of layers of abstraction. It's always a good idea to have somebody who understands how it works and is able to debug problems on every level of Kubernetes. If you run only like two, three or maybe up to ten servers probably Kubernetes is not something what you should be looking for. It's possible to manage such number of resources manually without big issues. Like if you take a high level overview on Kubernetes there are different types of nodes. There are master nodes and the main component on the master node is Kubernetes API service. Master node also is taking care about scheduling the pods where your application is running on the working nodes. Optionally it could be running ATCD on the master nodes but it's also possible like we do to run ATCD separately. And for high availability you need to run more than one master node at a time. Or at least in case if you are doing Kubernetes upgrade you need first to start the new master node and only after that stop the old one. 
So how we run Kubernetes at the land? We have not just a single Kubernetes cluster but in fact we have more than 140 Kubernetes clusters. Not all of them are production, it's like 50-50 distribution between production and test systems. Deployment to production cluster supposed to happen only via CI-CD pipeline except maybe a few exceptions. Like exception if there is an incident in progress or like in case if you want to debug some very strange issue in production. In this case you need to request explicitly the access and such access either requires incident tickend to be opened or your access request must be approved by one of your colleagues. So how to run stateful applications on Kubernetes? When Kubernetes and Docker were just emerging a few years ago they didn't really have the good stories for running stateful workloads. Particularly for us situation has changed and right now we have two very nice abstractions, two very nice components in the Kubernetes which allow us to run stateful applications. One of such components is persistent volumes. What does it mean? Like you can use with the help of some storage plugin you can allocate persistent volume from the external system like in the cloud provider you typically use EBS or AWS or maybe Azure disk if you run on Microsoft Azure and if you run in the data center there are options available like ASCASI, NAS, SIN, NFS and so on. And the second very important component is the stateful set. It allows us to run the fixed number of application instances or pods in the Kubernetes world and those pods will have stable and unique identifier. And the second very important property of stateful set that it is able to take care about volume allocation for you and every pod in the stateful set will get the unique volume which is allocated for this pod and in case if pod is migrated between Kubernetes nodes the persistent volume will be attached to the new node and pod will get data back. So it's essential for stateful application to keep state somewhere on disk or on external volume and stateful set abstraction is really handy to solve such issue. In order to run application on Kubernetes we need some container. Typically people using Docker containers like in most cases although there are different possibilities and we run Docker and we have the Docker image which we named SPILO. SPILO by the way is a elephant in Georgian. In the single Docker image we package all supported PostgreSQL versions. Nowadays it's starting from 9.5 and up to version 12. On top of that we have plenty of useful PostgreSQL extensions which are not part of PostgreSQL. We also have some tools for PGQ for backup recovery like world war g and PGBouncer. Of course we keep PG data on external volume because in case container if restarted like all data is gone and on external volume we can persist it. And for high availability we are using Patroni. Like as it is common we are relying on environment variables for configuration of our Docker container. So what Patroni is? It's automatic failover solution for Postgres. It is implemented in Python and one Patroni Python demon is managing a single PostgreSQL instance. What PostgreSQL instance is? PostgreSQL instance is one of instance of a PostgreSQL cluster. It's either primary or one of the replicas. And Patroni is supposed to be running on all of such instances. Patroni is able to use Kubernetes API service in order to do leader raise. And basically Patroni makes PostgreSQL the first class citizen on Kubernetes. 
It's possible to deploy PostgreSQL on Kubernetes without any external dependencies like ATCD with the help of Patroni. And Patroni will take care about like a lot of stuff. It's possible to deploy the new cluster easily. It means that one of the nodes will run needDB and other nodes will take base backup from the initialized node. Patroni has a good story for scaling in and scaling out. And Patroni also helps a lot to manage PostgreSQL configuration. Like for example it takes care about keeping max connections in sync between all members of PostgreSQL cluster. How to deploy it on Kubernetes? Like we create the stateful set object which will create pods across multiple nodes and provision persistent volumes. One of the nodes will become the primary. Like on this picture it is marked as a role master and other nodes will be running as replicas. Patroni is using in this case the leader endpoint for a leader election. And the same endpoint is telling the leader service how to access the master pod. It's also optionally able to, it's also possible optionally to create the replica service which will do the load balancing, return load balancing across pods which are running as replicas. The master node with the help of Wally is doing backups and continuous archiving of write a hat logs to 3 storage on AWS. And replica pods in case if they need some files they can get them from S3. But usually replica pods are just using streaming replication in order to get write a hat logs from the primary. So how to deploy it? Like of course you have to write a lot of YAML, 100 lines of code. And after that you can easily deploy the single manifest and get all these objects ready and you get your PostgreSQL highly available clusters, cluster up and running on Kubernetes. But after that there are a few problems which needs to be solved. Like one of such problems is like reconfiguration of the cluster or performing the rolling upgrade. Rolling upgrade could be due to change of docker image with the PostgreSQL in order to perform the minor upgrade or it could be the Kubernetes rolling upgrade itself. Because Kubernetes major version upgrade requires writing of all working nodes. And in case if you are unlucky we can get into very strange situation. Like let's take a look how this Kubernetes rolling upgrade works. First like we got into situation that there are three nodes which must be upgraded. And on these three nodes distributed across three different availability zones there are three PostgreSQL clusters are running. And the first step Kubernetes will terminate the first node to be upgraded. And of course Patroni will do a failover and primary will move to another node which was previously replica. After that like of course we get some termination connections, termination of connections, application connections they will have to reconnect and there will be some unhappy people which are relying on this application but usually doesn't take long time just couple of seconds. Not nice. But Kubernetes does a job it will reschedule pods which were running on node which was terminated on the new node and will continue with rolling upgrade. The next step it will terminate the second node and we again got some failovers and again some connection interruptions and people are getting a bit more upset and unhappy. Kubernetes of course reschedules pods from one node which was terminated on the new node it gets persistent volume attached and replicas will finally start streaming from new primaries. 
But take a look like we got all new primaries at the moment which are again running on the node which is supposed to be terminated. And Kubernetes terminates the last node, working node for major upgrade and we again got some failovers. And like if you didn't count it how many failovers has happened for every cluster, A, B and C I did it for you. And like for cluster A we got 3 failovers and for clusters B and C we got 2 failovers. 3 is very unlikely case because for the cluster with 3 pods we can experience up to 3 failovers during such major rolling upgrade. And on average for cluster of 3 pods you will get 2 failovers and if you are very lucky you will get only 1 failover. This is something that should be improved. We can improve it with high level orchestration. So before we start implementing something let's take a look how the PostgreSQL cluster lifecycle looks like. It's not only about Kubernetes, it's about any PostgreSQL deployment. First you prepare some configuration, you create some PostgreSQL cluster and by deploying the configuration after that you connect to the cluster, you create some databases, create a user and so on and so forth. It will be running for a while until you decide to change something. Therefore you update the cluster configuration, you redeploy it and perform some actions like rolling restarts of PostgreSQL or rolling upgrade with switchover or something like this. And these 3 actions will continue in the loop for a while. It might be a few months or a few years until you decide to decommission your cluster finally. And you would like to automate absolutely everything. You would like to automate deployments of a new cluster, you would like to automate all upgrades and user management, configuration management and of course like whenever we do this cluster configuration the number of interruptions or failovers must be minimized. And in order to do so we build the Lambda PostgreSQL operator. So if you are not familiar with operator pattern the idea is that operator is an application which acts like a human and on behalf of the human and it implements all human knowledge of operating certain resource like PostgreSQL cluster. Like if you are a human you can know how to do something with minimal impact and in order to do such operation at scale we can implement the software which will do it for us. And Kubernetes is very extensible system like you can use not only existing Kubernetes objects or Kubernetes abstraction but you can define your own Kubernetes resources. And PostgreSQL operator defines custom PostgreSQL resource and it's possible to create instances of such resource which will be representing your PostgreSQL clusters. And depending on event what happens like either the new resource was created or existing resource was updated or deleted PostgreSQL operator acts upon such events and changing corresponding Kubernetes objects. And in case if there is some ongoing maintenance happening operator also could help with it. So how such PostgreSQL manifest look like? It's a very short YAML file there are a few essential variables which you have to fill up like the cluster name it's important because it must be unique across the whole Kubernetes cluster within the same namespace. You need to define how big should be the volume for your data, how many pods you would like to run within your stateful set and what PostgreSQL version you need. Optionally you can define the memory and CPU requests and limits and users or databases which you would like to create. 
If you didn't fill it up it's fine. Operator will either use some defaults or will not create databases and users for you. But creation of such users is very handy because operator will not only create the user and the database it will also create the secret object. So how does it look like on the high level? We got the operator pod running and it defines the custom resource and happy dbDepployer which could be either the human user or it could be the CD pipeline creates a new cluster manifest and PostgreSQL operator starts acting upon it. Operator will create cluster secrets which will contain the user names and passwords for PostgreSQL super user and PostgreSQL replication user. Optionally it could also create cluster secrets for client application which will just use them seamlessly. Also operator will create the service endpoint and it will like in order to access the PostgreSQL cluster and of course it will create the stateful set with fixed number of pods. Stateful set will take care about provisioning and persistent volumes. How operator helps us to solve the rolling upgrade problem like Kubernetes rolling upgrade? We again got into the situation where there are three nodes to be upgraded distributed across different availability zones and there are three nodes which were spin up for replacement. And operator does it smart. It can detect that there are some pods running on the node which to be decommissioned and before actually doing something it will first terminate replica pods which are running on such nodes in order to migrate them to the new nodes. And basically we got already replicas running on the nodes which already migrated, upgraded on the new nodes. And after that operator will do the switch over, control switch over and postgres, patron will shut down postgres gracefully and remove the leader key by telling replicas that you are good to go to start leader race. And after that we get replicas running on the new nodes and the former primaries will be still running at this moment on the nodes which are not yet upgraded but operator will also terminate them. And Kubernetes will reschedule such pods on the new nodes and everything is fine so we got only one failover for every cluster. And just try to think about case when you have a few dozen of nodes and a few hundred of postgresql clusters running on Kubernetes. Like none of the humans could do such smart moves in order to minimize number of switch over. What machine can like application code and postgres operator really helps us a lot to minimize the number of failovers and solve such problem. Since we run our deployments on AWS we got some issues with AWS infrastructure. Like if you are not familiar with AWS API it supports whatever you do with your deployments with your Kubernetes and if it interacts with AWS infrastructure it will call the AWS API to do something. And in case if you run many nodes and many volumes and so on the number of API requests to AWS infrastructure will be relatively high and you might start experiencing the AWS API rate limit exceeded errors. It's not nice because it delays deployment of new clusters like it's not able to provision persistent volumes and some rolling upgrades of existing clusters also are delayed because it cannot attach persistent volumes to the new node but we can live with it. Some bigger issues are coming from EC2 instances which are running on the failed hardware. 
AWS is doing actually pretty good job by detecting such failed hardware and they are terminating such instances gracefully. But such instance shutdown might take ages and unfortunately we never probably seen that it takes less than 30 minutes to complete. And all EBS volumes which are attached to such failed instance could not be reused and it's not possible to attach it to another instance and therefore like all Postgres clusters which we are running on this instance will be in the little bit degraded state. If you run more than one port you will have still the primary running but there will be no like either not enough replicas or just no replicas at all. And since we all always monitor such situation we get paged due to such issues and especially it's unpleasant if you get paged at night. Of course the disk space is not really related to Kubernetes but since we run a few dozen a few hundred of Postgres clusters from time to time we have such issues like application might write quite a few data just in the limited time frame and in the Kubernetes world and especially like in the cloud infrastructure world it's very common nowadays to use just a single volume for pg data, pg wall and pg logs. Why it's like it's quite a common practice because you are paying per gigabyte of the volume and your performance of the volume depends on the volume size. Basically provisioning to separate volumes could become very expensive and therefore people prefer just using the single volume. When Postgres is trying to write something it gets the error not space left on device and shuts himself down. Patroni does a pretty good job it detects that Postgres is not running and starts it up in recovery does it promote and postgres apparently after promote again starts writing some data on disk again get no space left error and shuts it down and such crash recovery loop could continue forever until postgres will not be able to start at all. So the only solution to such problem is monitoring disk space and preventing disk space to go down to absolutely zero. Since we are running in the cloud we always have the option to implement after extent so far we didn't do it. So why? I will tell why because we did some analysis of why disk space was used so heavily and out of so different many cases there is just a single one which might probably require after extent. When your data is growing naturally and clean up drops are already configured and there is no possibility to free up the space anymore. And it's very easy to extend the volume in the cloud but it's not so easy to shrink it back and the only way to do so is removing the existing volume and creating the new one which is smaller. And of course it will require you to copy the whole data like whole PG data directory which takes time. Another interesting issue it's maybe again not really related to Kubernetes but it is still very funny. We have the monitoring for like backups and like we want to make sure that every cluster gets a base backup every night. And for sometimes for clusters we notice that backup is missing. We start investigating what is wrong and we see that Wally was failing with the error message that there is a file which is bigger than expected Postgres data file. And Postgres data file usually does not exceed 1GB and the threshold in the Wally was 1.5 GB and there was apparently a file which is even bigger and it was nearly 2GB in size. And this file was PGstat statements, surprise. 
Like we started investigating why it is so huge, why it is so bloated, we did a select from PGstat statements and we found a lot of similar queries like with a very little fraction of differences. Like mostly these queries are generated by ORAMs and they could really make the PGstat statements very bloated. And the other issue of PGstat bloated PGstat statements that it's not only taking space on disk but like if you try to query it like your Postgres backend process will use more than work mem. Like in this case Postgres backend will become 2GB in size because it will have to read the whole PGstat statements from the disk into a memory. And another issue with Wally. Wally unfortunately is a very old backup tool, it's reliant on exclusive backups and therefore like all exclusive backup issues not coming for free. Like what exclusive backup when it started it creates a backup label in the PG data. And in case if such pod was terminated and you are trying to start the Postgres back it will find that there is a backup label in data directory, it will think that it's oh something is strange, like online backup was cancelled and after such attempt to start it up like you will not be able to start such Postgres at all in the future. And the only way to bring such replica back like the former master back of the replica it's rebuilding it, initializing. There are backup tools which are doing better job, they implement non-exclusive backups like rep manager like PGbackrest and WALTG. The PGbackrest is not an option for us because we target to support multiple different clouds and people but PGbackrest so far supports only three storage. With WALTG like it supports all cloud providers as a Wally but every time when I'm trying to use it in production I find some new issues with WALTG therefore I'm a bit scared. So far we are using WALTG only for restoring backups and write-it-logs but we are taking backups still with WALTG. Yet another issue it's out of memory killer, it could happen that the POS was terminated with signal9, like we all know that we should not kill POS with sikkill and we know for sure that it wasn't a human who did this and when we start investigating like you run dmessage and in order to get the system log from the container and you see yes the POS gris process was killed by the sik group due to out of memory but the process ID in the container and the process ID on the host are different therefore it's not always possible or easy to draw the line between those events. Sometimes people are using omescore just trick in order to avoid POS gris being killed by out of memory killer. In the container world it's not possible I would say because we have only POS gris and patroni running and it will still kill someone, the question is either patroni or posgris. And it wasn't very clear how the memory accounting works, like how the POS gris process managed to get nearly 6GB of memory usage out of 8GB. Ok 2GB are used by shared buffers but where are these other 4GB are coming from. And it was puzzling us for quite a long time and there is also a different out of memory flavor. It wasn't posgris which was killed but we noticed that sometimes pod have the high number of restarts which sometimes happens due to restart of the node but therefore pod will be restarted. But in this case it wasn't the node restart because node uptime was still high a few days and we started looking into pod events and there is a message something like sandbox changed and therefore pod will be recreated and killed. 
We again go into the container by doing QPCTLexec, we run the dmessage and we see more or less the same picture out of memory killer but somehow scores for every process in the container are absolutely the same. And the score is like has a very strange number, minus 998. Why this number looks like this is because the memory request and memory limit are specified exactly to the same value. Therefore like Kubernetes is using the special treatment for such containers. It uses the quality of service which is guaranteed and they see group interface cannot do anything good except killing maybe one of such process in the container randomly. And it turned out to be that the post process was unlikely to be killed. So how we mitigated out of memory killer? Just like we tried to reduce shared buffers from 25% to 20% it helped a bit but not 100%. But what apparently was the issue is that when Postgrease is writing something to disk it generates a lot of dirty buffers in the virtual memory of the Linux kernel. And by default such dirty pages are limited by 10% of available memory on the node. Like in case if you have 32GB of memory on the node you may write up to 3GB of dirty buffers more or less. And if your container configured to be smaller than 3GB basically you can run into out of memory without doing memory allocation resources. Unfortunately it's not possible to set dirty background bytes and dirty bytes for per pod or per container it could be set only per node. So did we and it solve most of our issues. One more problem from Kubernetes and like more probably it's more from the docker. Because we get requests from our employees, from our colleagues that oh Postgrease processes are getting some strange errors, Postgrease connections are getting some strange errors, no space left on the device but there is plenty of space on data partition. When we start looking on it closely we see that it could not resize shared memory. Before it's not a data device, it's a cool print but a Dev HM. And in the docker Dev HM by default it's only 64MB. It's possible to change in the docker by specifying the special common line argument. But for Kubernetes the only way to fix it is a mountainous dedicated device to Dev HM. So we did it in Postgrease operator and so far the problem also has gone. Human errors like because cluster manifests are created by humans sometimes they are doing a bit math things they might specify the resource requests and limits absolutely inadequate. Their pod cannot be scheduled because all nodes are much smaller than you request resources for your pod or another issue people are so greedy they would like to save money and specify very little CPU and memory request and therefore container is killed by omkiller before it actually managed to do something. A couple of times we've seen that service account used by Postgrease operator Spilla and Patronium were deleted by employees. In one case it was deleted by mistake and in another case the employee just wanted to see what will happen if he will remove the service account. The service account is used it contains credentials to access Kubernetes REST API service. And in case if there is no valid credentials Kubernetes API service could not be accessed and Patronium cannot update the leader key and therefore it restart Postgrease read only. It took some time to figure out how to better fix such situation but we managed to do it with minor impact. And last but not least it's YAML formatting. 
YAML supposed to be the human readable and human writable but in fact it's neither one of those. Even with such a small YAML manifest people could very easy to misformat it and therefore we had to came up with web UI where you fill up cluster name and choose from the drop down list like a few other arguments how this cluster should be generated. As a result we generate the YAML definition file and which you can copy into your clipboard right into file system and apply to Kubernetes. It also solves quite a few problems. So coming to conclusion, coming to the end, Postgrease operator helps us to manage more than 1500 production Postgrease clusters deployed across 80 Kubernetes clusters on AWS and without such high level of automation it wouldn't be possible. Just imagine like managing manually so many highly available clusters and in the cloud and especially on Kubernetes due to multilayered infrastructure you have to prepare to deal with absolutely new problems and failures scenarios. You have to invest a lot into yourself and to learn all these infrastructure issues and how to solve them. You need to always find the solution and prevent a permanent fix otherwise the problem will always come back. And of course everything I have been talking about it's open source projects. Postgrease operator Patronian Spill, it is hosted at the London GitHub organization. You can try to run it on your laptop and in case of any issues feel free to open the GitHub issue or a pull request. What is better which is fixing this issue. So thank you very much for attention and in case if you have some questions I'm ready to answer. We're back now with a Q&A with Alexander. Go ahead please. So hi. So I hope you enjoy my presentation and I've got the following questions. And like I will start from the first one which is from Pasha Golub. Right now most of the configuration changes even PGA HBA causes rolling update when pods are restarted. Do you have plans to patch this behavior? If no, do you need any help with this? Yes, like I don't like such behavior either and we certainly have some plans to improve on that and in case if you are willing to help all contributions are always welcome. The second question. Does it mean the data is duplicated? I don't really follow what is this question about. Since we are relying on streaming replication in Postgres we have data duplicated, sort like Postgres data duplicated between masters and replicas always and I don't know what to add. And one more question, do you have any, that was question from Guillen Deere if I can spell it correctly. Do you have any load balancer like PG pool in front of your replicas or do clients connect directly to Postgres pods? Like we don't have any magical load balancer like PG pool which does split between read and read write queries. So, usually clients connect either to the primary load balancer like primary service in Kubernetes or they connect to the replica service which does load balancing across multiple pods. So, it is also a single endpoint for client connections and it takes care about connecting you to the pod which is running as a replica. And regarding load balancer recently we did a release and it's now possible to deploy the set of PG balancers in front of primary service and there is a separate Kubernetes service for primary PG balancers. 
And like you can have three different services and you can choose between them like either you can connect to the primary via load balancer or you can connect to the primary via PG balancer or you can connect to the replica service. And in future it will be possible also to connect to the replicas via load balancer like PG balancer for example. And the last question from Stevenson, if you talk about 10% duty, dirty buffer limit that is set on the node level, what property controls that? Is it a kernel setting or a kernel or docker Kubernetes setting? Since Kubernetes is running on Linux, it's a typical Linux setting, it's a VM dirty background ratio and VM dirty ratio. And one of them is set to 10% by default and the second one is set to 20% and it's controlled via syscatel like on Linux or maybe you can also do it via procfs. And it is set always per Linux machine, per kernel, it's not possible to set it per namespace like C-group unfortunately. And it would be good to have such possibility to do it per like docker container or per pod but unfortunately it's not the case and we have to do it per node. So yeah, probably that was it, no more questions in the chat. I have nothing to add. If there's no more questions then that's it. Thank you very much, I appreciate you coming in. Yeah, it was nice seeing you. You too. Bye.
|
Many DBAs avoid any kind of cloud offering and prefer to run their databases on dedicated hardware. At the same time companies demand to run Postgres at scale, efficiently, automated and well integrated into the infrastructure landscape. The arrival of Kubernetes provided good building blocks and an API to interact with and with it solve many problems at the infrastructure level. The database team at Zalando started running highly-available PostgreSQL clusters on Kubernetes more than two years ago. In this talk I am going to share how we automate all routine operations, providing developers with easy-to-use tools to create, manage and monitor their databases, avoiding commercial solutions lock-in and saving costs, show open-source tools we have built to deploy and manage PostgreSQL cluster on Kubernetes by writing short manifests describing a few essential properties of the result. Operating a few hundred PostgreSQL clusters in a containerized environment has also generated observations and learnings which we want to share: infrastructure problems (AWS), how engineers use our Postgres setup and what happens when the load becomes critical. * [Zalando Postres-Operator] * [Zalando Patroni]
|
10.5446/52148 (DOI)
|
Hi everyone, I'm Yu-Gon Nagata. In this session, I would like to talk about the way for updating materialized views rapidly. First, let me introduce us. I am Yu-Gon Nagata and a software engineer at SROASOS Inc. Japan. I am in charge of R&D and now working on incremental view maintenance called IVM. This is today's topic. Takuma Hoshiai is also a software engineer and a member of our IVM project. He will show a demonstration in this talk. This is the outline of this talk. Firstly, I'll introduce incremental view maintenance. Shortly, this is a way to refresh materialized views rapidly. Then, I'll talk about our implementation of IVM and progress since the initial patch. Next, I'll show some behavior examples, including demonstration and performance evaluation. And finally, I'll summarize this talk. First, a view is a virtual relation defined by a view definition query. The query is executed when view is referred to by a select statement. On the other hand, materialized view is a view whose results are stored in database and this enables a quick response to clients. This is useful for example for analyzing large data in the data warehouse. However, materialized view needs to be maintained after a base table is modified to keep consistency between views and base tables. Refreshing materialized view is a way of the maintenance. Then, refresh materialized view command is issued. The contents of materialized view is updated to the latest state. However, this needs to be computing the contents of the view from scratch, so it takes a long time. With concurrently option, the materialized view is refreshed with a weaker look, but it still needs to be computing the contents from scratch. Incremental view maintenance, IVM, is another technique to view maintenance. This computes and applies only the incremental changes to the materialized views. This figure shows IVM formally. From the base tables contents and view definition, we can compute the contents of materialized view. After base table is updated, we get new base table. Using this and the query, we can get the updated materialized view. This green path is a recomputation and what refresh materialized view command starts. On the other hand, we can compute the delta of view, changes on views from changes on tables. We can apply this to the materialized view. We can get the new materialized view data without recomputing. This red path is a process of incremental view maintenance. The first patch of our IVM implementation was submitted to PSKL hackers a year ago, and the subject is implementing incremental view maintenance. I also had a presentation about it at PGCon last year. Using this IVM feature, materialized views can be updated automatically and incrementally when base tables are updated. You don't need to write a trigger to maintain materialized views by yourself. How effective is our IVM implementation? This is an example of TPCH query 1.0.1.0.1. This is a query to aggregate on a large table. With skill factor 1, this query took about 11 seconds. While select on the materialized view took only 3 milliseconds, so it is a quick response. However, if the rest of this materialized view took 24 seconds, on the other hand, one table on the base table is updated. Its incremental maintenance took only 22 milliseconds, that is, the view is updated so rapidly. In the initial patch, we supported selection, projection, inadjoining, distinct class, and build with tuple duplicates. 
Additionally, the current patch supports also some aggregates, self-join, outer join, sub queries, including existing class. It also supports refresh with node data commands, low-level security, and pgdamp and restore. I'll also explain the progress in this talk. Okay, I'll explain the basic theory of IVM. The view definition is described in a relational algebra form. This is an example of this simple natural join of table R and S. Change of base table is represented like this. This navla, inverted Greek delta, means tuples, derived from the table. And the Greek delta means tuples inserted into the table. Using these view definitions and these changes, we can calculate the changes on the view like this, and update the view by applying these changes to views. This is an example. Contents of table R and S are like this. The natural join view with contents will be like this. After table R is changed, the deleted and inserted changes are like this. The changes on the view are calculated by joining table R changes and base table S respectively. Finally, the view is updated by applying calculated changes like this. In the bad timing of view maintenance, there are two approaches. Immediate maintenance and deferred maintenance. In immediate maintenance, views are updated in the same transaction where the base table is updated. In deferred maintenance, views are updated after the transaction is committed. For example, when view is accessed, where as a response to user command like refresh were updated periodically and so on. In our implementation, we started from immediate approach since it requires less number of calls. The third approach needs a mechanism to manage logs for recording changes of base tables. Implementing this is not trivial work. This is the overview of IBM implementation. There are three processes. First, when a base table is modified, its change is extracted. Second, changes on view is calculated from changes on tables and base tables and view definition query. Finally, the changes on views are applied to the view and view is updated. In our implementation, table changes are extracted using after-triggers and transition tables. Changes on view is calculated based on relational or bug algebra. And the changes are applied to the view using SQL query. For creating materialized view with IBM support, we use create incremental materialized view. This is a tentative syntax to create materialized views with IBM support. Incremental keyword is an extension of our implementation. At creating materialized views, after-triggers are created on all base tables. These are created automatically and internally for insert, delete, and update commands and as a statement limit trigger. And with transition tables. Transition tables is a feature of after-trigger. Using these, changes on tables can be referred to in the trigger function, like normal tables. There are two tables. One contains tables deleted from the table. Another contains tables inserted into the table. In theory, these tables are corresponding to numeral and delta-r respectively. When calculating delta on views, we use the views definition query with some relights. In these relights, the modified table is replaced with a transition table, like this. And multiplicity of duplicate tables are counted using count-as-t aggregate, like this. In theory, the results correspond to numeral v and delta-v. Finally, changes on view is applied to view. 
When deleting tables from views, tables to be deleted are identified by joining the delta table with the view. The tables are deleted as many as specified multiplicity by numbering numbered using row number function, like this. Then inserting a table into the view, tables are dedicated to the specified multiplicity using row-rate series function, like this. Next, I'll explain our progress since the initial patch. Now we are supporting some aggregates, self-joins, avatar-joins, and some queries, including exist-close. We also support role-level security, refresh wizard with no-data command, and pg-dump restore. And so on. We are now supporting some built-in aggregates function. That is count-sum-min-max-average, with or without group-by-close. As a restriction, expressions specified in group-by-close must appear in the target list of the view. Aggregate views have one or more hidden columns. For example, count-value is stored for multiplicity of duplicated tables, and count and some values are stored for average function. A value is performed on table-deltas, table changes, and the values in the view. Aggregate values in the view are updated using these results. The way of updating view depends on the kind of aggregates function. Here are examples of updating aggregated values. We can update count or sum value by simply applying the result calculated from the data tables. Average can be updated using sum and count values stored as hidden columns. In many or max cases, it becomes more complicated. When tables are inserted, the smaller values fit when the current mean value and the value calculated from the data table is used. When tables are deleted, if all the mean values are deleted from the view, it needs to recompute the new values from base tables. Here, I'll show a simple performance evaluation of aggregate view. I created two materialized views of aggregates. On PG-Benz account, the scale factor is 1000. One is a normal materialized view, and another is with IVM option. These are the results. The refresh of the normal materialized view took more than 30 seconds, while updating the table in the base table took only 30 milliseconds. It's a thousand times faster than the normal refresh. So IVM is so rapid. Of course, the view was updated automatically and correctly. Next, I'll explain about simultaneous modification of multiple tables. This is possible when modifying CTs, triggers, or polling key control range is used. Note that self-join is essentially the same situation since same tables in an query can be regarded as defined tables with same contents. In theory, then multiple tables are modified simultaneously. We need pre-update state of tables as well as post-update state of tables. In our implementation, pre-update table state is available by applying table deltas inversely. Especially inserted tuple can be removed by filtering with X-min and C-min system columns. This is a query to get pre-update table state. Upper part is removing inserted tuples and this part is affending deleted tuples. In addition, when multiple tables are modified simultaneously, after triggers are fired more than once. Each trigger extracts and stores each tables change. The view is updated incrementally in the final of the trigger call. Next, I'll explain after-join support. In after-join, there are new extended tuples that is dangling tuples, which appear when the join condition does not meet. So tuples are inserted into a table. 
Dangling tuples might be deleted from the view and when tuples are deleted from the view, dangling tuples might be inserted into the view. As a result, the view is maintained in two steps. First, calculating and applying deltas similar to in-a-join case. Second, handling additional dangling tuples. I implemented this based on Rathon and Schwarz algorithm with some theory expansion to allow tuple duplicates. Here is a performance evaluation of in-a and after-join view. There are two materialized views. One is in-a-join and another is full after-join on Pgbench tables. Also, there are three conditions of indexes on materialized view. No index without index. And index on the primary key columns of base tables. And additional two index on columns used in join conditions. This is the result of execution of updating tuples. This is the result of execution of time of updating tuples in Pgbench accounts. Refresh of this view took more than 8 seconds. In all cases, IVM is faster than refresh. However, for effective maintenance, an appropriate index is required to search tuples to be deleted. And with additional indexes, the result shows comparable performance between in-a and after-join views. However, for effective maintenance of after-join views, indexes on join condition columns are required to search dangling tuples in the view to be deleted or for inserted. As for subquery support, in our implementation, XGIST in fair close is supported. In implementation, the row number count of the XGIST subquery is stored in the view as a hidden column. And FEN, a table in this subquery is modified. If the count becomes zero, the tuple in the view should be deleted. Otherwise, the tuple remains in the view. Also, we support simple subqueries, including only selection, projection, or in-a-join. Okay. As for the progress, we are now supporting with no data. This is an option for create material view or refresh commands. If this is specified, the triggers are dropped from base tables, and view is not automatically updated even when a base table is modified. Also, the materializer view becomes not scalable. To support role-level security, FEN view is updated incrementally, base tables are accessed with view owners privilege. Also, PGDAMP and Distora are now supporting the new syntax create incremental materialized view. And previously, we are using temporary tables to store view deltas, but it caused several problems. For example, system catalog brought and prepared products and available and so on. Now, we don't use this, we don't use temporary tables, and we are using tuple store instead. Finally, we have improved view access performance. For previous read, the generate series function is used to out-of-foot duplicated tables, and so its overhead was so high. But now, we don't use generate series function at select on views. Okay, now, here is a demonstration. Hoshiai-san will show the demonstration. My name is Takuma Hoshiai. Currently, I am an engineer at SRA OSS in Japan too. I have participated in the development of IBM since last year. I am in charge of some implementation of future and performance evaluation. Today, I talk about simple demonstration of IBM. The environment used is tiny TPCC like tables created with JDBC runner. TPCC is a reproduction of the processes about transportation and sales of products in multiple warehouses. The data load was pre-created with a scale factor of 64. So, this time, I created a database with scale factor 64, and the database size is about 7GB. 
I try creating a mathes view that analyze the number of payment, total amount, and maximum amount for each warehouse. This can be achieved by joining the three tables about warehouse, distinct, and history table, and using group by clothes and aggregation function. The scale to use is like this. This result has about 640 rows. To create a mathes view with IBM, simply add the incremental option to the create mathes view query. Then, create ordinary mathes view like this. Create mathes view with IBM. We can create like this, only add the incremental option. By the way, IBM Trigger is created to detect the updates of the base tables and automatically updates the mathes view with IBM. We can check IBM Trigger with this command. These IBM Triggers have a new depth type of small m. These Triggers are created or deleted by creating and deleting mathes view and the deflation with no data command. Immediately after creation, the result is the same for normal mathes view and IBM. This result is the same. The different point is that the IBM is automatically updated to the latest state when the base table is updated. For example, add the payment history to the history table. This insert query is finished in a flash and I confirm IBM view. The first row, the max amount has changed and is the latest value. Of course, ordinary mathes view is still old. If we execute refresh command with mathes view, the latest value is reflected by ordinary mathes view. But it takes some time. This example isn't too much time for demo, but the defense may be even bigger for real large mathes view. By the way, in the current logic, the trigger is used to immediately update the mathes view in the transaction. So, rollback command will list IBM. The first row is rollbacked. In this way, after updating the data, you can immediately check the overall impact with mathes view and select Feather to reflect the data update. That's it for the demo. Last, I'll show current restrictions on our implementation. We are now supporting selection, projection, and outer join, distinct, some aggregates, self-join, exist, and simple subqueries. On the other hand, we are not supporting other aggregates, user-defined aggregates, and having complex subqueries, including aggregates, or outer join, and so on, city, window functions, set operations, order by, and limit offset close. After outer join, only simple equal join is supported, and it cannot be used together with aggregates or subqueries. And at exist, we don't support or exist or not exist, and it cannot be used together with aggregates. We checked how many queries in TPC benchmark are supported. As a result, 9 of 22 TPC-H queries are supported, and 20 of 99 TPC-DC queries are supported. In both cases, order by and limit offset close are ignored. In TPC-DS queries, satisfactory queries failed to city ease, and 11 queries failed due to aggregate in subquery. In summary, we are now working on implementation of incremental view maintenance on post-rescue L. This enables rapid and automatic update of materialized views. As progress since the initial patch, we are now supporting some aggregates, outer join, set joins, and some subqueries. As the future works, we plan to relax restrictions. For example, views using city ease, subqueries including aggregates, and so on. Also, we would like to work on deferred maintenance using table change logs. We would need more performance improvement and some optimization too. The patch is proposed and discussed in PGSK HACCAS mailing list. 
The subject is implementing incremental view maintenance, and they are our GitHub repository. We would appreciate it if you give us any feedback, any comment, any suggestion. We are waiting for your feedback. Thank you. Thank you.
|
Materialized views is a feature to store the results of view definition queries in DB in order to achieve faster query response. However, after base relations are modified, view maintenance is needed to keep the contents up to date. REFRESH MATERIALIZED VIEW command is prepared for the purpose, but this has to recompute the contents from a scratch, so this is not efficient in cases where only a small part of a base table is modified. Incremental View Maintenance (IVM) is a technique to maintain materialized views efficiently, which computes and applies only the incremental changes to the materialized views rather than recomputing. This feature is not implemented on PostgreSQL yet. We have proposed a patch to implement IVM on PostgreSQL and this is now under discussion. Since the first submission of the last year, we have made a great progress on this patch. For example, some aggregates, subqueries, self-join, and outer joins are supported now. These operations are important since they are commonly used in real applications. In this talk, we will explain the current status of our IVM implementation. The talk includes what problems are under the implementation, its solutions, what the current implementation can do, and what limitations and problems are left.
|
10.5446/52149 (DOI)
|
Hello everyone, this is Peter Eisenthraut recording my presentation for PGCon 2020 from home in Germany. So let's get right to it. So today I want to talk about time series databases and as I usually do, especially at PGCon, I don't want to talk so much about a specific way of doing things or any kind of particular products, but I want to take a topic and actually ask myself, what does this even mean? And then further ask, you know, what does it mean for me as a user or the users that I support and what does it mean for me as a developer of the Postgres Core itself? And that's what I want to do today. So my first premise here is that time series is a use case. So time series is a term that's often thrown around in database circles, it sort of comes and goes over the years as a trend. And oftentimes it comes sort of with a big marketing message that it's a new way of doing databases and it's a new paradigm or what have you, right? So my sort of, yeah, premise here is I consider time series to be a use case, just like we have, for example, OLTP. That's a term that is used a lot, but it doesn't mean anything very specific if you think about it. If you are, let's say, a database administrator or a consultant and you come to a new database system and somebody tells you this is an OLTP system, that tells you something. It doesn't give you any specifics of what the system is doing or any of its performance metrics, but it gives you an idea of what the system is for and what the characteristics are, right? So it probably has a fair amount of updates and a fair amount of reads, usually a smaller transactions, but a lot of them and sort of made a mainly continuous load and probably sort of pretty good uptime requirements and the data in there is valuable, so it has to be probably backed up regularly and so on. And that is understandable and that's sort of a handy term to use and so at the almost opposite end of that, you have OLAP, which also tells you things, it tells you it's probably a very big database, it probably has a lot of reads, maybe not so many writes or the writes maybe bulk loaded depending on the setup. So those kind of terms help us understand in general what the system is doing and then how we have to think about it and how we have to work with it, but they don't tell you anything specific about what products are you using or exactly what the system is doing. So and I think Time Series is useful to think of it in that sense also that it is a use case that gives you sort of a hint of what the characteristics of the system are, but Time Series is not a new database paradigm or a new way of doing things or things like that, so I think that's a good way to frame it and I want to analyze it in that specific way. So what Time Series database is useful for, so here are sort of the typical things you could find, the most obvious one and perhaps in a way that a lot of people don't think about it is anything to do with server logs or web server logs or any kind of logs that have a time stem and some information on it, that's the basic idea of Time Series data. 
And that stuff has been around obviously forever, but where it becomes more popular over more recent times is anything having to do with measuring real world equipment either in industry or weather sensors or other sensors, you can obviously measure more than weather but that's a good example to think about and you can a lot of measurement in terms of air quality and things like that are popular nowadays so you can do that and as those sensors become more widely available and cheaper there's more interest in that. But also anything having to do with the financial world, those are, that's the data that has time attached and obviously there's a lot of interest in that and just because there's a lot of business in that. And other use cases in science where people need to keep track of things over time and do analysis of that. So I think that's pretty clear where this is coming from. If you want to do a little bit more of a buzzword approach to this, sometimes when you're trying to maybe sell your idea, you need to have a couple of buzzwords, you can also deploy these, the Internet of Things is popular, that's basically the same idea as having lots of sensors connected to the Internet so that's not very different. Another interesting use case that is mentioned from time to time was self-driving cars mainly because there's a lot of data in those that needs to be analyzed either in real time or just during the development process. And that's a, there's a lot of sensors, a lot of data and obviously the data has to be also processed very quickly so that the cars can react in split seconds as they should. So that's an interesting area of research and development where a time series database is relevant. I'm probably not going to put any post-codes database in the self-driving cars anytime soon but the same ideas apply there. So another thing to clarify is that time series database is not the same thing at all as a temporal database and it's perhaps a little bit confusing as both of those terms have time or form of time in their name and they are also popular terms from time to time in the database field but they're not the same at all. Today I want to talk about time series obviously if you're interested in temporal I could recommend last year's PGCon, had a good talk about temporal databases. So you can just pull that up. So to have a simple way to tell them apart perhaps is that in time series you have records with time stems which is the time of when that piece of information was recorded or observed. Whereas in a temporal database you have time stem ranges which in post-codes could be an actual range type or in a general sense a start and end time stem and that is the time stem range when that piece of information was valid or is valid or will be valid depending on what you do. So those are entirely different. You could possibly even have both of them in the same database might be a little bit complicated but so let's keep those apart. So now what makes a time series database? That's the most important thing I want to talk about. It's a use case, a fuzzy term for a collection of characteristics and I have collected here based obviously on other people's research and so on. My analysis I've collected six characteristics that I want to go through of what makes a time series database and then how is that relevant to Postgres and how can Postgres satisfy these criteria. So I'll go through them in detail. So time stem is part of the key that seems kind of obvious. 
Timestamps are usually in increasing order. Data is usually inserted, not so much updated; that obviously becomes interesting for certain performance characteristics. Usually time series databases have a lot of data. Usually individual records and older data matter less; it's more the aggregate that is interesting. And a time series database usually tries to do analytics based on time, so there needs to be some support for that. So let's look into these in detail. Number one: the timestamp is part of the key. This is sort of the obvious thing that makes a time series database: you have data, and every row has a timestamp associated with it, which is when the value was recorded or measured or something like that. A very straightforward design in a relational database would look like this — there are perhaps ways you can optimize this and do it differently, but just as an abstract thing it would look like this. You have a row with the timestamp, and then usually you have some kind of indication of the sensor, in this case, or of whatever you're measuring. Now if you're measuring only a single thing or you only have one point of measurement you don't need that, but usually you have several of those, obviously. So you have some kind of reference to what the device was, and then you have the data that you're measuring — usually not only one value but quite a few of them, so the rows can be quite wide. And then usually the combination of the timestamp and some kind of indication of the measurement point would be the primary key. Whether you actually put a primary key on it or index it differently is beside the point for now. That is how you would set up a time series database in a very straightforward way. This does not mean that every single table has to have a timestamp as a key; there can also be some reference data. In this case the sensor table would not have a timestamp, but the main tables with the data would. So that's straightforward. How does Postgres support that? Postgres is great for that. It has really good support for anything having to do with time and date and is widely regarded for that: timestamp, timestamptz, and the other date/time data types are there for you to use. Or, if you somehow don't like that, you can define your own way of recording time. I'm not necessarily recommending that here right now, but it's something you could also do if you don't want to spend the space on a timestamp type: you can measure time somehow yourself and use integer or bigint if that's what you want to do, or define your own timestamp type somehow. For the most part, the data types in Postgres are not special — timestamps are not that special, you can put in whatever you want. This is how Postgres, as a more general system, is different from specialized time series databases, where the notion of timestamps is usually quite heavily baked in. You can't really use those systems very well for something that's not a time series database; they're often optimized for a specific way of doing things, and if you somehow want to mix different approaches in one database they're not very good at that. In Postgres everything is more generalized. Second point: timestamps are usually in increasing order. That makes sense — as you record data, as you get data from the input points, time increases, so the timestamp values that you record also increase.
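(To make the straightforward design from point one concrete before going further, here is a minimal sketch; all table and column names are invented for illustration.)

    -- reference data: no timestamp needed here
    CREATE TABLE sensor (
        sensor_id   int PRIMARY KEY,
        location    text
    );

    -- the main data: timestamp plus a device reference plus the measured values
    CREATE TABLE reading (
        ts          timestamptz NOT NULL,          -- when the value was observed
        sensor_id   int NOT NULL REFERENCES sensor,
        temperature double precision,
        humidity    double precision,              -- usually many more measured columns
        PRIMARY KEY (sensor_id, ts)
    );

Whether you actually declare the primary key or index it differently is, as said above, beside the point; the shape is what matters.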
It's not an absolute because different sort of clients could report out of order depending on latency and all the things like that. So you can build your system around requiring that but it's a good way to optimize for. It will come to that later in terms of what queries and planners could do but this is an easy thing to optimize for that. You think that the values are generally pre-sorted on disk. So how does that work in Postgres? The B3 implementation in Postgres has a special optimization for this append use case. It caches the write most leaf page. So that basically means if you just keep appending values it has already sort of pre-cached the insert point for that. This is only an optimization. It doesn't have, obviously B3 is also fine for other use cases but this specific case is already optimized for that was put in a couple years ago. Also if you have mostly stored data that's great for range partitioning, obviously we'll come to that a little later but obviously in a lot of these time series use cases you want to do partitioning probably by time-stem, that would make sense to do that way. And so if you have ordered data then you usually only writing to one partition so one partition can be hard and kept in memory or the index memory and all the old partitions don't have to be kept in memory so that's how you want to use partitioning so that works well. And also brindindexes are useful for that. If you have brindindexes rely on data being sorted on disk basically so that it can index ranges that's what the name basically means right? Or in brindindex range so if you have entirely mixed data brindindexes can't help, brindindexes summarize ranges so if data is very well sorted you can use brindindexes as an alternative to B3 in certain cases and then you have a much smaller index and that's good. So this works all quite well in Postgres. Some room for improvement here especially this is maybe a minor point here but brindindexes are very useful often ignored by users and there could be more work being done in the planner to use brindindexes in more situations. For example if you want to order values and you have brindindexes that already gives you hints of the order of things and the planner could make more use of that so that's something maybe you think about. Other than that I think this point is quite well covered by Postgres. So then we said new data is inserted not updated. This is key to this whole design approach if you are setting up a time series database. In a let's say in a normal OLTP business database you usually only keep the current state. What is your current set of customers, your current set of inventory, your current set of orders and things like that. And when things change you run updates to change the address of a customer or the price of a product and things like that. In a time series database you basically don't do that and in the really extreme case you never update anything you keep all historical data and when there's a new information you just add a new record with a new timestamp. Now if you do this in a very extreme way that leads to problems that in terms of space usage and application performance. So there obviously has to be usually a middle ground has to be found but that is the idea. You don't update data but you keep all data to some degree. In a way this also overlaps with temporal databases. Here you hear those kind of ideas touch a little bit. 
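(Picking up the range-partitioning and BRIN points from a moment ago before returning to the temporal comparison — a sketch using the same hypothetical table, this time declared as a partitioned table; partition bounds and names are arbitrary.)

    -- same shape as the earlier sketch, now partitioned by time
    CREATE TABLE reading (
        ts        timestamptz NOT NULL,
        sensor_id int NOT NULL,
        value     double precision
    ) PARTITION BY RANGE (ts);

    CREATE TABLE reading_2020_05 PARTITION OF reading
        FOR VALUES FROM ('2020-05-01') TO ('2020-06-01');
    CREATE TABLE reading_2020_06 PARTITION OF reading
        FOR VALUES FROM ('2020-06-01') TO ('2020-07-01');

    -- a BRIN index stays tiny because incoming timestamps are mostly ordered on disk
    CREATE INDEX ON reading USING brin (ts);

Only the newest partition takes writes, so only its indexes need to stay hot, which is exactly the effect described above.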
The time series way of looking at it is recording of historical state up to the current time whereas temporal database is intentionally describing the range of when information is valid or will be valid also into the future. So this is a little bit where this overlaps. But the general idea is once you have recorded something in a time series you don't change it because that is a fact of the past. Now it could sometimes be that maybe corrections have to be applied if it turns out maybe a sense of us faulty or the clock was off or things like that. So you can't build a system around never allowing any updates but you can certainly optimize for not allowing any updates or not having usually not having updates. So how can Postgres work with that? Postgres is great for that right to a fault and that Postgres is relatively bad if you need to update a lot but if you just append the current heap is great for that. So this is sort of surprisingly not a problem. So then as we mentioned in alluded to time series databases often have or usually have a lot of data because we never change anything as we just said we just keep recording everything new. And as business usually goes these days the more data we have the more value we have whether that is actually factually true. It's to be debated but this is sort of how business is focusing often these days to collect a lot of data and then do some kind of analysis on it. But also because it's possible basically you storage can be had relatively cheaply so it is more feasible than it was maybe in the past to keep a lot more old data. A lot of these sensors and measurement points are relatively cheap. You can set up your own weather stations for really cheap money nowadays and that also applies and then all the other use cases that actually having more measurement points is relatively inexpensive. And you can also say if you want more data and more value you can also measure more often maybe instead of measuring the weather once a day or once an hour you measure it every minute or five minutes or every minute or all the time there is hardly any reasonable limit there. The main interesting point here is that in a time series database there is no natural limit of how much data you could have. If you think about this in a OLTP database in a business database there is some predictable limit of how much data you will have. 
So let's say if you have a hotel and you record your hotel reservations you know your hotel has a certain number of rooms and you can only have so many people in a room and only like one party and night and they usually stay a couple nights so if you multiply this even if you have a thousand rooms you know there you are going to have a thousand records per day times 300 or so days so that there is a limit of how much data to expect even if you have a chain of hotels and you record all kinds of other things who has over breakfast and things like that there is still sort of an upper limit to the multiplier of how much data you will have even if you are a big retailer and online and there is a limited number of customers you have and a limited number of items in your inventory so you can plan your database whereas in a time series database because of all these factors you know there is almost no limit you can collect as much data as you want or you can afford and whether that is sensible and valuable is perhaps another question that you know how much how useful is it to measure whether every five minutes versus every hour you know that is something that domain experts would have to answer but certainly the pressure is there to always get more and more into these kinds of databases often very difficult to plan for because you do not really know how much data you might want to get. So how can Postgres help with lots of data? So as we know he is not great for space usage mainly because of tuple header and there could be some more optimizations there there are other storage engines being either considered or have already been put out for that reason related reasons to have just more compact storage. Certainly partitioning is there to help with space usage and then based on partitioning sharding which partitioning over multiple hosts is something that there is some support in Postgres for that. But there are certainly lots of ways to improve that so here is a long list of how do we make handling large data battle in Postgres so we could work on making the table storage itself more compact with different storage engines or different tuple header representation or various details like that that work is underway. There could be a lot of work in the area of compression in all kinds of ways. So toast compression is used as an outdated compression algorithm so putting something better in there would certainly help somehow is toast applicable to a time series use case that depends if you use the schema I showed earlier where you have mostly numbers being recorded then probably not so much but in practice people also store JSON data maybe that's the JSON data that they get from whatever their measurement point is and then you just record that it's obviously not optimal at all but that's what people use so some toast optimization would certainly be useful and then there's other ways maybe to do compression to think about on a block level you can use file systems that can compress. 
All there's also perhaps ways to compress in a way that's specific to the data so let me explain that if you record a time series records consisting of a timestamp and then some data items that is the items that are being measured between adjacent records the data is probably not going to be very different so if you measure every minute you have that the next timestamp is only one minute different from the previous one but also the data being measured is probably not that different if you measure let's just keep it a simple case to measure the temperature again the temperature is probably either not going to change at all or it's going to be very similar so you could use some kind of a run length encoding or something similar to that and there's certainly more specific ideals there to optimize the storage of that so instead of you know recording everything explicitly timestamp value value you could optimize that somehow and say you know the first row you store timestamp value value and then the next row you just store the difference and then you don't need to use the full you know 12 bytes for you know the timestamp or the full 8 bytes or whatever you have for the values you can just store like a small difference and that could certainly have massive storage savings exactly how to represent that especially in Prozegaz how would you do that in Prozegaz what would that be is that you know a compression method is that a storage method how would you fit that into the system I don't know yet you know that sort of system is also you know would be very difficult to make updatable so that's more sort of for the archival end of the data and you would have to query it in a specific way so this would all be slower but it would be a very good way to compress things for the cold end of your data so that's something to think about so so again lots of options there for compression certainly the partition management could be improved I think partitioning itself is you know has evolved really well with the last few releases and the performance there is pretty good now but the management of partitions is has not been considered all that much so for example a very straightforward use case would be I would like one partition per month or one partition per day this is not easy to set up right now you have all the commands and there's some extensions that can help you manage that but it should perhaps be simpler and just say make the next partition or make the partitions for the next week or something of that sort that would sort of be I think that would be helpful and then on the bigger scale anything to do with charting is obviously you're currently only building blocks and you know the improvements there are there are a number of things that could be improved there so that's a long list of its own so in individual records met or less usually people are interested in the aggregated data and also older data matters less so there's a two sub points here and so the fact that the temperature reading at a specific time a week ago was that is not that interesting if you lose that one record and you have the record of you know five minutes before after that's not a disaster what matters is the aggregated data or the bulk data so that gives you certain options for optimizing crabs now this depends on the use case sometimes if this is system of record obviously you can't do that if you're recording financial data somehow perhaps then probably of other requirements but if you're just you know measuring stuff out there 
in the world and you're trying to make a little bit of analysis and modeling then one individual record is not the most important thing also as you know older data is usually not that important again that depends on the use case if you are doing you know historical analysis of weather patterns or air pollution patterns and you need the old data just as much as the new data but if you are observing you know your web traffic or your server traffic and you're trying to find maybe performance deviations then the new data is more relevant the old data is access less and maybe only therefore reference so how can pros and cons help with that again partitioning as I mentioned already before is good for that because then you can keep the hot data and the cold data separate if that's the kind of use case you can asynchronous commit is very applicable to this so that you don't have to you know if you have a lot of data being you know insert all the time so single row by single row you can turn asynchronous commit on or synchronous commit off the way it is said and then you get a little bit of performance boost and you know the risk of losing valuable data is slow or eliminated because you can obviously set that also per transaction so it's very very flexible and very useful for this yeah and you can also perhaps this is sort of depends on really the use case materialized views can be useful that if you just want to throw old data away and just compact it and pre-compute materialized views that's useful and you can also then maybe use tablespaces if you have sort of you know want to put sort of older data maybe on slower storage and things like that. Wombful improvement again partition management and in this case specifically the lifecycle management of the partition themselves not so much as we said in the previous point you know making new partitions but you're getting rid of the old partitions basically we have an open item that in postgres currently you can attach new partitions without a heavy lock but you cannot detach so that's well known that's being worked on that's definitely an open item and then just in general maybe you might think how about I just want to automatically throw away all partitions that are older than six months or something like that that seems an obvious use case you currently have to do that manually all the tools are there so this is you know not hard for someone to script or automate but maybe it could be easier. Also you know the management of tablespaces is just very basic so maybe there's some ideas there to improve that and if you want to use materialized views the often discussed materialized view incremental materialized view refresh could perhaps be useful so that you can still have new data arriving but instead of storing it all individually you could just update the materialized view so depending on the use case I think that's also useful and that's something that's also already being worked on so that might just come in handy. 
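(To make two of the knobs just mentioned concrete — per-transaction asynchronous commit for ingest, and manual partition retention — here is a minimal sketch; the table and partition names follow the earlier hypothetical examples and the retention choice is arbitrary.)

    -- cheap ingest: relax durability for this transaction only
    BEGIN;
    SET LOCAL synchronous_commit TO off;
    INSERT INTO reading (ts, sensor_id, value) VALUES (now(), 42, 21.5);
    COMMIT;

    -- retention: today this is manual partition lifecycle management
    ALTER TABLE reading DETACH PARTITION reading_2019_11;
    DROP TABLE reading_2019_11;

As noted, the detach currently takes a heavy lock, and there is no built-in "drop everything older than six months" — scripting it is easy, but that is exactly the lifecycle-management gap described above.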
And then finally — the previous points were all about how the database is set up and how the data gets put in there; the remaining question is what you actually do with the data. That obviously depends on the application, but generally people want to do time-based analytics, and usually fairly quick analytics. These are not long analytics queries that run for minutes or hours; they are quick summaries: what was my traffic in the last five minutes, the last 15 minutes, what was my traffic per hour over the last day or two. If you're processing log files and website traffic that's what's interesting, but even if you are monitoring, say, industrial equipment, you want to know quickly if anything is wrong, if something is overheating or whatever the case may be. So you want quick queries, but they are still analytics queries, basically. There can of course also be longer-running queries — weather analysis over a longer time, say — and those can be heavy and very domain-specific. What you usually don't do is single-row lookups, unless maybe you're looking at financial data and need to look up something specific; generally you don't use this kind of system to fetch one record really quickly, it's mostly aggregation. And on top of that you generally want some tooling, either to write those queries, explore the data, build visualizations and front-ends, or run more advanced math on top of the data, so there are requirements for higher-level tooling as well. So how does Postgres support this? Analytics support in SQL in general is pretty good, and Postgres implements most of what would be relevant: grouping is pretty basic, window functions and the like are well supported in Postgres now, and that's really what makes an SQL database useful for a time series use case compared to more specialized databases that have a very limited query language. Those might be really quick for their specific use case, but if you want to break out of what they support well, you might have no support at all or have to program it yourself client-side; an SQL database obviously gives you really powerful query functionality. There is some support in Postgres for date/time processing, but actually surprisingly not a lot, and that's an area to improve in. For example, the use case I alluded to a moment ago: show me my traffic, meaning a count per hour over the last couple of days. That's a pretty straightforward GROUP BY query where you can use date_trunc to truncate the timestamps to the hour, and that works. What doesn't work the same way is traffic in 15-minute buckets. How do you truncate a timestamp to a round 15-minute interval? There's no built-in support for that. You could write it yourself, but it's complicated in the details, so better support there would obviously be useful. The good thing is that Postgres is extensible, especially in the area of functions and operators, so whatever you need you can add yourself.
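(A sketch of the hourly versus 15-minute bucketing just described, against the hypothetical reading table from earlier; the 15-minute variant goes through epoch arithmetic because date_trunc only accepts named field units like 'hour'.)

    -- per-hour counts: date_trunc handles this directly
    SELECT date_trunc('hour', ts) AS bucket, count(*)
    FROM reading
    WHERE ts >= now() - interval '2 days'
    GROUP BY bucket
    ORDER BY bucket;

    -- per-15-minutes: round the epoch down to a 900-second boundary by hand
    SELECT to_timestamp(floor(extract(epoch FROM ts) / 900) * 900) AS bucket,
           count(*)
    FROM reading
    WHERE ts >= now() - interval '2 days'
    GROUP BY bucket
    ORDER BY bucket;

Newer releases have since added a date_bin function for arbitrary-interval bucketing, but at the time of this talk the manual form above was the usual workaround.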
But room for improvement especially what I just mentioned that taking day trunk and expanding that to support arbitrary intervals and there could also be more advanced functionality if you want to do maybe histograms based on times and things like that but there's some more basic gaps that are missing which would be relatively easy to plug so there's some work already going on on that. And then another necessary improvement in that area for that same use case give me you know information by some interval like an hour or 15 minutes. If you use day trunk or something similar this kind of this loses information in the planner so if you're in a complex query and you have sorted input what the planner thinks is sorted input or mostly sorted input and then you run you know as in the case of the time stamps the time stamps are we said are mostly sorted and then you run day trunk you know the over them the planner doesn't know anything about that and so it doesn't think that the output of that is sorted anymore and then it would have to use other plans so this could be sort of the difference between a group aggregate and a hash aggregate even though we know as implementers we know that something like day trunk would preserve the order of the input but the planner doesn't have any information on that so that would be something that's also already been vaguely discussed some time ago to add another function attribute of some sort to tell the planner that this is an order preserving function or whatever you want to call it but order preserving is probably a good way to describe it so those two first things are would be really useful for you know just supporting basic time bucketing queries and they're well within reach and not too hard so and then there's you know on the tooling side there's I think many opportunities outside of Postgres core for visualization frameworks or any kind of query construction frameworks or additional is or extended math extensions and things like that so that's not something that necessarily belongs in the postgres itself but that's certainly you know external tools for that would be useful okay so those is my discussion of those time series characteristics here's a summary of the development projects that I mentioned and proposed sort of ordered in a way that maybe I would do them this is not necessarily priority but a combination of most useful and easiest to do perhaps and a lot of those are already in progress in one way or another not all of them under the time series banner but also you know addressing other use cases but there's a lot of interesting things that could be done to make Postgres better for this so and you know as I mentioned some of these are already in progress some of them not so much so basically most of these are on my radar somehow but certainly someone else wants to look into that or collaborate that be welcome all right so to summarize so I think Postgres is is great four time series and it's like a universal database system and it has shown itself to be adaptable to many different use cases that as we've shown but you know key value stores JSON and things like that I think it's a it's a has sound fundamentals and can be extended in different ways for different use cases that said there is there are always going to be more specialized systems out there that are you have a specific use case in mind and then we'll beat any universal system we know that very well there's you know key value stores that are in memory perhaps and super optimized and 
you know beat any generalized database system certainly but that's not the point right that's not what we should aim for the aim is to be a general universal database system but also have facilities that cater to specific use cases and the you know lots of improvements are possible but they're all within reach and they're all reasonable they are mostly already conceived and in progress so if all of that comes together or many of those come together then be even greater time series database okay so that is my presentation at the actual conference there will be no questions so I'll look forward to answering those if you're not there then also feel free to reach out to me you know via my contact information if you want to chat about some of these development projects or you have some comments on my classification of things or you just want to chat about postcards feel free to reach out that said I hope to see all of you soon again at a postcards conference and until then take care bye and cut is here live for the Q&A go ahead Peter oh hello everyone so I'll just go through the IRC here the question of how the B3 append optimization works was already answered in the chat that's automatic that was added a couple of releases ago so you don't have to do anything about that also the point that was made in the chat about the new feature in postcards 13 that auto vacuum is triggered appropriately for insert only tables which actually it's a very good point I didn't even address that in my talk it's obviously very applicable to these kinds of workloads so that's that's a great improvement and yeah that's great and thank you for pointing that out question about about brin indexes so if you are doing updates on records that will potentially break brin indexes what would you have to do in that case so there's nothing really automatic about that to answer that question what exactly should you do in that case really kind of depends on the exact pattern of what updates you would do you know if you if it's affecting maybe your entire partition then maybe you can just we you know cluster that table possibly there's also special functions for brin indexes that can so they call brin summarize range and bring summarize new values to to make sort of specific changes there so perhaps do look into the documentation of brin indexes to find out details about that but yeah in principle in general if you do updates on brin indexes then you could run into slight problems you might have to rebuild question from I believe Vic right about anyone working on automatic partition creation so the way I would see that working is I don't know exactly what you what you envision there is the what I think we will not do is create the partition sort of at the time of insertion I think that people have agreed that that's it would create all kinds of bizarre locking concurrency problems and rail cash and that kind of stuff so if you're looking for that I don't think anyone's really working on that what I would envision as next steps would be sort of indicating at the time you create the partition table to say you know I want this to be partitioned by month and then you just have a command that says alter table or you know something like that so make me the next four partitions and then it kind of knows what pattern you want or you could also imagine that for hash partitions which is not applicable here but you could say I want 16 partitions just grade them all so that is something that I think we've looked into internally at second 
quadrant and we might work on that for post-cars 14 that's sort of a soft you know written a biped item other than that I don't know any other work on that question from Don dropping an old partition is the same as a detached basically so that also requires the heavy lock that's something Alvaro is looking at fixing in post-cars 14 that the initial implementation of the concurrent attached turned out to be different from his initial implementation that's why he had to rework the entire concurrent detach but that's something it's our best but yeah right now it requires a heavy lock okay what else yeah more work more stuff on the auto vacuum question from Jesper do you think that with the new feature index in 13 that's that store become should become an option for time series well that's I didn't think about that at all I think that's certainly an option to to think about depending on how you use it right if you have especially sort of very wide rows it's an option or I haven't really thought about it too much but certainly I wouldn't dismiss it now and with this with the feature deduplication I think that's what you're referring to yeah I did watch the Zet Store talk yesterday and then try to make sense of it so there's there's more learning to be done there but yeah okay more questions coming in great time scale to be extension already exists does it already implement a lot of things you mentioned could be proved yeah so time scale to be is an extension that addresses a lot of these points you know you can use it it's as a different license in post-cars so you know just evaluate that for yourself I think that's great and they're certainly you know doing a lot of good work there what I would look for is a little bit more sort of general solutions like for example the the planner so the way I describe the planner a function attributes to hit the planner the way I describe that my understanding is that time scale DB kind of does it in a more sort of hard-coded hacked way and say like this function treated that way you know that sort of thing so yeah you can use it and it's very you know it seems to people like people seem to like it but I would look in the long run for more you know so generalized post-cars like extensible solutions in all of these areas but yeah that's that's us I guess a one step in the right direction that timescale to be recently implemented and he did with with buckets four times that easy I was gonna submit as a patch yes submit that patch I don't have anything I don't know anyone else is working on that you are aware of the work by John Naylor to implement the sort of enhanced version of date trunk that I mentioned there's no other work in progress right now yeah do look at the timescale to be just you know I'm not gonna look into source code because there's licensing differences so I can't really comment more about that I'm just reading their documentation and they certainly you know asked a lot of the right questions just maybe different opinions and how the answers should be implemented let's talk about automatic partitioning automatic partitioning is sounds good but it's nobody's really working on that I missed a bunch of your answer first bit of video was cut off I'm sorry Russ your question was about the tree a bit of brin yeah brin is no it doesn't do that automatically read the documentation sorry I'm going through the IRC I should probably read the questions read the questions I like yeah okay so you re-asked the question about brins yeah the question about 
brin it does yeah those updates do break there's nothing automatic that fixes it you but there are functions specifically for brin to do certain adjustments depending on what you're doing question about custom type creation documentation says types are implemented using mostly C is it possible to create type using SQL language the answer to that is I believe no not right now because you you do need to basically take input string input and convert it to a byte pattern for storage so that's girl can't do that you could maybe you know one example that is probably currently not possible but you can imagine you're doing it in something like rust where you can sort of lay out bits and write them so on that level but not in just plain as girl that's not possible right now question from David Federer any wins to be had from s I am the I'm sorry I don't know what that is if anyone we still have a couple minutes if anyone got their question cut off please ask it again I'm just monitoring the IRC also a good question that I just answered from Marius 46 about wanting to convert units like the gallons there's actually an extension for that post grass girl hyphen units I believe it's called so just check that out if you're interested in that yeah I don't know if you got my answer to your question but with bucket please just submit that patch yep there you go thank you anyone know it as I am the single instruction multiple data so is that sort of vector processing I am I'm sorry I still don't know really what that is I've heard that before but I have not thought about that probably even if I knew what that was I don't have an answer but if it's sort of there's certainly further improvements in execution execute optimization possible that go beyond what I have really thought about here that's all the questions now yeah vector processing yeah yeah so I think it would I think it could definitely benefit from that that's you know goes a little bit beyond what I had thought about and obviously that would also affect you know a bunch of other it mostly analytics use cases in general so definitely also yeah maybe that those are not the sort of the top issues but yeah what's that to the bottom of the list yeah good point thank you now all right so the you know a lot of the items I mentioned are in progress as we have also just discussed now some of those are already in the next very next commit fast some of those are you know being worked on here and there if anyone wants to know specifically like who is working on what maybe you know just drop me a line and otherwise I'll you know hope to see you all soon again or I'll see you at the next commit fast thank you Peter thanks thanks Dan thanks everyone goodbye bye you
|
The term "time series" is popular (again) in database circles. What is it and what's the point? Clearly, a traditional relational database like PostgreSQL can deal with time and with series. So why is time series a special use case? In this presentation, I want to look beyond the marketing a bit and analyze what the true technical characteristics of a time series database are and what use cases drive it. Then we can analyze how PostgreSQL handles these requirements, which workarounds are handy, and where improvements would be necessary. In PGCon tradition, this presentation is both guidance for users and a call to action for developers.
|
10.5446/52150 (DOI)
|
Hello, I'm Masahiko Sawada and I'm working for 2ndQuadrant. In this presentation I'm going to talk about distributed transaction support for foreign data wrappers. As the title says, it's work in progress, but the idea is to support distributed transactions by improving the foreign data wrapper feature. I'd like to start my talk by explaining what a foreign data wrapper is. FDW, which stands for foreign data wrapper, is an implementation of SQL/MED; MED stands for Management of External Data. That is, a foreign data wrapper is used to manage data that resides outside PostgreSQL. There are actually some other ways, including dblink, to access data outside the PostgreSQL server, but one of the biggest advantages of FDWs is that we can access the data using regular SQL queries through a special type of table called a foreign table. A foreign table looks the same as a normal table, but it doesn't actually hold any data on the local node. If you access the data, the PostgreSQL server contacts the foreign server and returns the result to the client, so from the client's point of view the data looks as if it is on the local node. The foreign data wrapper interface is a pluggable architecture, so you can find many FDW implementations on the foreign data wrapper page of the PostgreSQL wiki. In addition, foreign tables have been writable since version 9.3, so we can execute UPDATE, DELETE, and INSERT on foreign tables. This diagram shows an example of using three kinds of FDWs: postgres_fdw, a MySQL FDW, and an Oracle FDW. The client issues a query to the PostgreSQL server — I mean the left PostgreSQL server — and this server connects to the external data stores, such as another PostgreSQL server, MySQL, or an Oracle database, through the corresponding FDW plugins. In this talk I'd like to focus on transaction management when using FDWs. As of today, the FDW plugin is responsible for transaction management on the remote node; I mean, the FDW needs to begin, commit, and roll back a transaction on the foreign server. It also needs to support other transaction management commands such as savepoints or possibly prepare. The PostgreSQL core currently doesn't provide a dedicated way for FDW plugins to manage foreign transactions. Most FDWs that want to control transactions need to register their own callback function through the transaction (xact) callback mechanism, and that callback is called before committing the local transaction. So let's take a look at how an FDW manages foreign transactions, taking postgres_fdw as an example. postgres_fdw opens and begins a foreign transaction when it accesses the remote node for the first time during execution. In terms of isolation level, if the local transaction isolation level is SERIALIZABLE, postgres_fdw also opens a SERIALIZABLE transaction on the remote node; if the local transaction isolation level is not SERIALIZABLE, it uses the REPEATABLE READ isolation level. This ensures that if a query coming from the client performs multiple table scans on the remote server, it will get a snapshot-consistent result for all the scans. This behavior seems good in some cases, but there are two major issues: the atomic commit problem and the read consistency problem. These two issues are what I'd like to focus on in this presentation. So let's get into the details of the atomic commit issue first.
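(As a concrete reference point for the discussion that follows, a minimal postgres_fdw setup looks roughly like this; the server name, credentials, and table are invented for illustration.)

    CREATE EXTENSION postgres_fdw;

    CREATE SERVER remote1
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'remote1.example.com', dbname 'app', port '5432');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER remote1
        OPTIONS (user 'app', password 'secret');

    CREATE FOREIGN TABLE accounts_remote (
        id      int,
        balance numeric
    ) SERVER remote1 OPTIONS (schema_name 'public', table_name 'accounts');

    -- reads and, since 9.3, writes go through the foreign table like a local one
    UPDATE accounts_remote SET balance = balance - 100 WHERE id = 1;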
I believe the next slide shows the problem with a diagram. This diagram shows the transaction commit procedure when using FDWs. There are two remote nodes, and the client writes data on those nodes through the FDWs and issues a commit. The local node commits the foreign transactions one by one, then commits the local transaction and returns the result to the client. This seems to work fine, and it should be fine in, let's say, 95% of cases. But what happens in this case: before committing the transaction on remote server 2, something goes wrong. It could be anything — a server crash, a network partition, whatever. In this case the local node can roll back its local transaction, but it has already committed the foreign transaction on the remote-1 node. So the outcome of this transaction is that it is committed on remote 1, rolled back on the local node, and we have no idea about the transaction on remote 2. That is the problem we want to avoid when using foreign servers. Please note that the data on each individual node stays consistent even in this case, as long as we use a transactional database on the remote nodes. But from the global point of view, the data is inconsistent if this kind of failure happens. To resolve this issue we can use the two-phase commit protocol, which is one of the most famous consensus protocols. Two-phase commit consists of two phases, as the name suggests: the prepare phase and the commit phase. The protocol always starts with the prepare phase; in this phase the coordinator sends a prepare request — a prepare message — to all participants. In the commit phase the coordinator sends the commit request to all participants, or remote nodes, if all participants responded OK in the prepare phase. If even one of the participants sends a negative or error response in the prepare phase, the coordinator sends a rollback request to all of them. The first proposal to add two-phase commit to the foreign transaction management was made in 2015, I joined it in 2017, and currently I'm proposing the same patch set to the Postgres development community. This feature is not committed to any version yet; it is a work-in-progress feature. With this feature, Postgres will manage foreign transactions — I mean, the transactions opened on the foreign servers — and new FDW APIs will be introduced: commit, rollback, prepare, and get-prepare-ID. An FDW that wants to support transactions can implement the two APIs commit and rollback, and in addition, if it wants to support atomic commit as well, it needs to implement the prepare API on top of those two. The get-prepare-ID API is optional. So I'm going to dive into how the Postgres core manages foreign transactions with this feature. The transaction commit procedure will change: first it prepares all foreign transactions, then it commits locally, and finally it commits all the foreign transactions prepared on the foreign servers. At the first step, the core persists information about the foreign transactions to disk via WAL, so that the foreign transaction information can be recovered after a restart. This information includes which foreign servers are involved with which local transaction. In other words, we end up writing WAL at each phase. At the first step, we persist the foreign transaction information.
At the second step, we persist the local commit — the commit record on the local node. And at the third step, committing the prepared transactions also persists a commit record for the transactions on the foreign servers. Because the core persists the information about foreign transactions to disk via WAL records, it can be recovered after a restart — I already mentioned that. It means we can recover the information about which foreign servers might have prepared transactions, so we can terminate the prepared transactions on the foreign servers even after a crash. I also introduced a new background worker called the transaction resolver. The transaction resolver has two responsibilities. One is that it executes COMMIT PREPARED for in-progress foreign transactions; that is, preparing foreign transactions and committing the prepared foreign transactions are performed by different processes. The other is to resolve recovered foreign transactions, or in-doubt transactions. I will explain them in more detail in the rest of my presentation. This diagram shows how Postgres commits a transaction using two-phase commit. When the client issues a commit, the local node sends a prepare message to all remote nodes. The remote nodes prepare to commit their transactions, but the modifications are not visible to other transactions yet. Then the local node commits locally. After the local commit, the backend process that received the commit request from the client hands the work over to the transaction resolver process — shown as TR in this slide — and the backend waits. The transaction resolver process reads the foreign transaction information from shared memory and sends a COMMIT PREPARED message to all involved foreign servers. After committing all prepared foreign transactions, the resolver process releases the waiting backend process, and COMMIT PREPARED makes all the pending data visible to other transactions. So with this feature, the backend process does the prepare phase of the two-phase commit, whereas the transaction resolver process does the commit phase. In this scenario, after the local node prepared one transaction on remote 1, the remote-2 node fails to prepare its transaction — let's say remote node 2 crashed before doing the prepare. Since the local node failed to get an OK response from remote 2, it turns to rollback: it rolls back the local transaction and sends a ROLLBACK PREPARED message to the remote-1 node. As a result, all transactions are successfully rolled back. In the next scenario, the local node prepared foreign transactions on both remote nodes and did the local commit, but then crashed after the local commit. During restart, the local node recovers the foreign transaction information. The point here is that all the foreign transaction information had been WAL-logged before preparing on the remote nodes, so that information is recovered during recovery. Then the transaction resolver launches and sends the COMMIT PREPARED message to all remote nodes. As a result, all transactions are successfully committed in this case. Please note that in this case all transactions must be committed; they must not be rolled back, because the local transaction on the coordinator has already been committed. As of now, all steps are performed synchronously. And since two-phase commit is a blocking protocol, if even one participant doesn't work, the protocol is blocked.
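(What the coordinator drives on each foreign server corresponds to the SQL-level two-phase commit commands that already exist; a hand-run sketch on one remote node, with an arbitrary transaction identifier, assuming max_prepared_transactions is set above zero there.)

    BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    PREPARE TRANSACTION 'fx_1234_remote1';   -- phase 1: survives a crash/restart
    -- ... coordinator commits locally, then resolves the prepared transaction ...
    COMMIT PREPARED 'fx_1234_remote1';       -- phase 2 on success
    -- or, if any participant failed to prepare:
    -- ROLLBACK PREPARED 'fx_1234_remote1';

As I understand the patch, for postgres_fdw the new prepare/commit/rollback callbacks would issue essentially these commands on the application's behalf instead of leaving the dance to the client.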
A client might want to cancel the transaction while waiting for the transaction resolver. Even in this case, the client can cancel the wait safely. When the client requests a cancel, the local node returns control to the client while leaving the work to the transaction resolver process. This is the reason why preparing transactions and committing prepared transactions are performed by different processes: while waiting for the prepared foreign transactions to be committed, the backend can cancel the wait at any time, because it is just waiting, not doing anything that could lead to an error. If it did something that could raise an error after the local commit, it would end up raising an error, but it would be too late to change the outcome to a rollback, because the local transaction is already committed. By introducing this feature, we can resolve the atomic commit issue I described. The atomic commit feature solves this problem from the perspective of atomicity, but from the perspective of isolation there is still the atomic visibility problem. A transaction satisfies the atomic visibility property if either all or none of each transaction's updates are observed by other transactions. The foreign transactions are committed by the transaction resolver, but there is no guarantee that those commits happen at exactly the same time. Therefore, if a transaction starts between those commits, it takes a snapshot which includes only part of the result of the distributed transaction. That is the atomic visibility problem: either all or none of each transaction's updates should be observed by other transactions. There are some solutions and techniques for the atomic visibility issue, but first I'd like to introduce another read issue that can happen when using PostgreSQL FDWs. I believe one of the most important goals of FDWs is that if the client uses PostgreSQL with foreign servers, it needs to function the same way a single PostgreSQL server would. In other words, it would be perfect if the client could use PostgreSQL with foreign data wrappers without being aware of the foreign data wrappers at all. But unfortunately, in terms of transactions, it currently doesn't work that way, even if a transaction involves only one remote node, especially with a mixed read-write workload. Before going to concrete examples, let's review some transaction anomalies. The SQL standard defines four transaction isolation levels: read uncommitted, read committed, repeatable read, and serializable. In this talk we focus on read committed and repeatable read. The non-repeatable read anomaly is when a transaction, re-reading data, finds that the data has been modified by another transaction. That is, suppose you did SELECT count(*) FROM table1 and, let's say, you got 100, and there was a concurrent transaction which deleted 10 rows. If, in the same transaction, you did SELECT count(*) FROM table1 again, you would get 90 if the concurrent transaction had committed. That is a non-repeatable read. It may occur at the read committed isolation level. With PostgreSQL's repeatable read isolation level, the non-repeatable read won't happen: in PostgreSQL's repeatable read, all reads within the transaction see the data using one snapshot, so the result will not change even if you execute the same query multiple times while a concurrent transaction modifies the data.
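(As a single-server baseline for the FDW examples that follow — the interleaving just described, written out with an invented table name; "session 2" runs in a separate connection.)

    -- session 1 (READ COMMITTED, the default)
    BEGIN;
    SELECT count(*) FROM table1;   -- 100
    -- session 2, meanwhile:
    --   DELETE FROM table1 WHERE id <= 10;  COMMIT;
    SELECT count(*) FROM table1;   -- 90: a non-repeatable read, allowed here
    COMMIT;

    -- session 1 again, this time REPEATABLE READ
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SELECT count(*) FROM table1;   -- 90
    -- session 2, meanwhile:
    --   DELETE FROM table1 WHERE id <= 20;  COMMIT;
    SELECT count(*) FROM table1;   -- still 90: one snapshot for the whole transaction
    COMMIT;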
Okay. In this example, two clients issue queries against the local PostgreSQL server, which connects to two remote nodes using postgres_fdw. The first client starts a read committed transaction and gets the number of tuples of a table whose data is actually stored on the remote server; the result is 100 rows in this example. When postgres_fdw accesses the remote node, it starts a repeatable read transaction there — this is the documented behavior, as explained before. Before the first client fetches the number of tuples again, another client deletes 10 tuples and commits. Since the first client's transaction on the remote node was opened at the repeatable read isolation level, it returns the same result: when the first client runs the same query again, it gets 100 rows again. This is strange — the read is repeatable even though the client started a read committed transaction. Because the first client started its transaction at the read committed isolation level and another transaction committed before it read the data again, it should get 90 rows instead, but it gets 100 rows. Okay, then let's next use the repeatable read isolation level for the local transaction. In this example, two clients access two tables that are located on different remote nodes; both tables have 100 rows each. The first client starts a repeatable read transaction, then gets the number of tuples of table 1 and gets 100 rows as a result. Before the first client accesses table 2, the other client deletes 10 rows from table 2 and commits. When the first client reads table 2, it gets 90 rows as a result, because it is the first time the first client accesses remote node 2, so a new transaction is started on the remote-2 server. But the result of 90 rows is strange again, because it is like a non-repeatable read even though the first client started a repeatable read transaction — it should get 100 rows instead. So I have described how clients get different results than they would with a single PostgreSQL server. There is no guarantee that the cluster returns consistent results across all the foreign servers, even when only one foreign table is involved. To provide consistent read results, each node needs to see its own data with a globally consistent snapshot. The snapshot could be anything: currently Postgres uses a set of transaction IDs (XIDs) as a snapshot, but a commit sequence number or a timestamp can be a snapshot in principle, although current Postgres doesn't support them. The point here is that all participants use a globally consistent snapshot to see the data. Postgres-XL, which is a forked version of Postgres, achieves this by employing a global transaction manager node — a separate node responsible for providing the globally consistent snapshot to all the nodes. Therefore, all transactions need to access the GTM, the global transaction manager node, to get a snapshot whenever they begin, commit, or roll back a transaction. Similarly, Google's Percolator has a related concept called the timestamp oracle, which produces timestamps in monotonically increasing order.
So a node gets a timestamp from the timestamp oracle and uses it as the time when a read or write operation happens. In this case, the timestamp is the snapshot. But a big downside would be that the central transaction manager can be a single point of failure; we would end up needing a secondary node for the GTM node. So it also has a cost. This is one approach to provide globally consistent read results, but there are various techniques and solutions from academic papers as well as commercial databases; many professionals research this area. So in the rest of my talk, I'm going to introduce one technique. I picked one technique called Clock-SI. Clock-SI was proposed in 2013, and an implementation of Clock-SI was proposed two years ago by Stas Kelvich. It is still under development; it is not committed to any PostgreSQL version. The basic idea is that each participant uses its local timestamp as a commit sequence number, so each node uses its local timestamp to decide which versions of data it can see. Of course, the local clocks are not perfectly synchronized; we cannot expect these timestamps to always show exactly the same time. But Clock-SI solves this problem by having a reader wait for the time to be synchronized. So I'd like to briefly introduce how Clock-SI prevents the clock skew issue. Suppose there are two nodes, each with a local clock, and the local clock on node B is slightly behind by a certain amount of time. Transaction one starts at timestamp t and reads item x located on node B. The read request from node A arrives at node B at timestamp t', which is still slightly behind timestamp t. At node B, instead of reading item x immediately, the read request for item x waits until the local time of node B reaches timestamp t. This is how Clock-SI prevents the clock skew issue. Besides this clock skew issue, using a local timestamp as a snapshot has several other challenges, but the paper solves these issues with interesting and simple approaches. I'm not going to introduce this algorithm in further detail, but it is a very interesting algorithm and a very interesting paper, so please read it for the details. One of the biggest advantages of this approach is that there is no single point of failure, which is great, and there is no central management component. Maybe you've read the Google Spanner paper or you've heard of CockroachDB; they have a similar concept. A downside would be that the transaction latency depends on the clock skew. So, just a quick recap of my talk. The foreign data wrapper is a powerful feature to access distributed data across heterogeneous data stores. And one of the most important goals of the foreign data wrapper is that if the client uses PostgreSQL with foreign servers, it needs to function the same way as a single PostgreSQL server would. A big missing piece is transaction management. Several ideas have been proposed and are still under development: the two-phase commit for foreign data wrappers and the global snapshot patches are both under development and in progress. That's all. Thank you for listening to my talk.
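As an aside on the Clock-SI read rule described above, here is a minimal Python sketch of the "wait until the local clock reaches the snapshot timestamp" idea; the Node class, local_clock(), and the in-memory version store are hypothetical simplifications, not the full algorithm from the paper.

```python
import time

class Node:
    def __init__(self, name, clock_offset=0.0):
        self.name = name
        self.clock_offset = clock_offset   # simulated clock skew in seconds
        self.versions = {}                 # key -> list of (commit_ts, value)

    def local_clock(self):
        return time.monotonic() + self.clock_offset

    def read(self, key, snapshot_ts):
        # Clock-SI rule: if the snapshot timestamp is ahead of this node's
        # clock, wait until the local clock catches up before reading.
        while self.local_clock() < snapshot_ts:
            time.sleep(0.001)
        # Return the newest version committed at or before snapshot_ts.
        visible = [v for ts, v in self.versions.get(key, []) if ts <= snapshot_ts]
        return visible[-1] if visible else None

# A transaction started on node A uses node A's local clock as its snapshot.
node_a = Node("A")
node_b = Node("B", clock_offset=-0.05)   # B's clock is 50 ms behind
snapshot_ts = node_a.local_clock()
value = node_b.read("x", snapshot_ts)    # the read waits ~50 ms on node B
```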
|
PostgreSQL has Foreign Data Wrapper feature and it is the powerful feature to access the distributed data across heterogenous data stores. FDW became writable at PostgreSQL 9.3 therefore PostgreSQL with FDW has potential to become distributed database supporting reads and writes. However one of the biggest missing piece is transaction management for distributed transactions. Currently atomicity and consistency of ACID properties are missing but are essential to achieve full ACID supported distributed transaction. Some proposals have been proposed but these are under discussion. This talks about the current status of FDW and problem regarding atomicity and isolation and introduce to the proposed solutions and other solutions employed by other distributed databases. Also I'll also explain the use cases like database sharding and federation.
|
10.5446/52152 (DOI)
|
Hi, I'm Melanie. I work on Postgres at VMware. Hi, I'm Jeff and I work for Citus Data and Microsoft. So recent work on the Postgres executor has improved how it does memory management from accounting for the memory used to controlling the amount of memory that's consumed during execution. And so today we want to talk a little bit about the motivation behind wanting to understand and measure memory usage and to control it. Then we'll discuss memory accounting in general and how the execution operators hash aggregation, sort and hash join can adapt at runtime to stay within their allotment. Why is work mem important? Work mem, of course, reduces the risk of out of memory events. It also improves the predictability of a system under stress and it allows the planner to choose the best performing plans. Out of memory events on a system can cause availability problems for users, of course. On Linux, it can result in killing a process, the Postgres process with a signal 9, which also limits the ability to diagnose the problem. That'll require engineering and DPA effort to investigate, which is especially problematic for cloud services and large deployments. It also helps with the predictability of a system under stress. How to do work mem, memory pressure on the system could lead to swapping or paging. That'll cause a dramatic slow down because swapping paging is a highly random IO pattern. Then queries will take longer to execute and therefore hold on to their memory for longer. Then, of course, queries will pile up trying to use even more memory and the system just falls apart. Work mem can help avoid all of this. Work mem also allows the planner more freedom to choose the best plan. If some operators do not have a strategy to stay within work mem, the planner must avoid choosing that operator when it expects it to exceed work mem. For instance, hash aggregation in versions 12 and earlier may not have been chosen for exactly this reason. This can lead to suboptimal plans. But if all the operators have a strategy to stay within work mem, the planner can choose the best plan based on cost without as much risk of an out of memory. For instance, hash aggregation version 13 later solved this problem. This will result in better plans and better performance. So what is work mem? Work mem is just a setting. There's no global enforcement mechanism or other global concept. Each operator is on its own to use its own mechanism in an effort to stay within the limit for itself. Work mem is limited in its ability to control the overall system memory usage. It's a very local concept. When you're looking at the overall system memory usage, it's going to be some multiple of work mem. Consider the number of operators in a given query tree. For instance, a single query, the plan might involve several operators, each which might want its own memory allotment from work mem. For instance, a hash join under another hash join. This can multiply the total system usage. Similarly, concurrent connections. Consider their work mem independently of every other connection. There's no global concept across connections for work mem. But if Postgres implements admission control later, that could help here. There are also a lot of allocations in Postgres where work mem is just not enforced at all. For instance, parsing query text, processing of large datums, or hash aggregation in versions 12 and earlier. Each operator needs to implement its own mechanism to conform to work mem. 
These include disk based sort, disk based hash aggregation, hybrid hash join, adaptive hash join. I'll be discussing sort and hash aggregation and then I'll pass it to Melanie who will discuss hash join. Let's start with sort. Sort collects tuples in memory from the input until it fills up the work mem. Then it sorts the in memory tuples and writes them out as a sorted run. Then collects more tuples from the input, sorts that, writes it out into a new sorted run and repeats until the input is exhausted. Then it will switch to merge mode where it takes these locally sorted runs and then combines them into a globally sorted final output. So you can see here the first portion of the input is collected and sorted and written into sorted run one, then the next portion of the input is collected, sorted, and written to sorted run two. These runs are written sequentially which is good for performance. Then we have the locally sorted runs and we merge them into a final globally sorted output. If the original input was random we can expect this to have a random pattern reading from the individual sorted runs to produce the sorted output. In some cases if too many runs are trying to exist to be merged at once then it will produce intermediate sorted runs in the process. Now we'll move on to hash aggregation. I have here a demo showing the difference between hash aggregation in version 12 and version 13. Here I'm going to execute a simple query with explain analyze. This is a group by query. It will be executed using hash aggregation. Explain analyze in version 12 does not show the memory usage of the hash aggregation operator so we're going to have to look at the top of the screen. You can see process 12.91 is growing rapidly so it's staying nowhere close to its limit before megabytes of work memory. It looks like it's approaching 2 gigabytes. So what happened here? Well the planner expected only 200 groups from the input data but it actually produced 20 million groups. So what the planner thought would only produce 200 groups it shows hash aggregation because those 200 groups would easily fit in the in memory hash table. But when it got 20 million it just blew past the limit. We didn't see any major performance degradation here because the system memory is still enough to accommodate this query. You can easily imagine that if it did exceed the system memory it would lead to swapping horrible performance lots of random IO and then potentially a system wide out of memory failure. Let's compare this to version 13. We'll execute the same query on the same data. We're going to look at process 12.94. The memory is stable at around 12 megabytes looks like. No giant memory spike so things are looking good so far. And great it reported the peak memory usage at 4 megabytes which is the close to the work mem setting it made the same planner error but was able to keep it within that 4 megabytes. The same data of course. So what happened? Well the groups that couldn't be processed immediately were instead spilled intelligently to disk in a partition fashion. Then the partitions were picked up one by one and reprocessed to produce the final output for each partition until all the partitions were processed. We can see that the run times are approximately equal and that's also great news because it means we didn't give up any performance but we got much better behavior with regards to memory usage. 
Version 13 will degrade much more gracefully in the presence of concurrent queries, many operators in a plan tree or other memory pressure on the system. So let's see what happened. So to recap the in memory hash aggregation algorithm it breeds input tuples and finds the group key. Then it uses the group key to look up in an in memory hash table to find the group. If the group exists already it will advance that group. For instance for account aggregate it's just going to add one to the group's count and memory it's not going to store that whole tuple. And if it's not found in the in memory hash table it's going to go ahead and create a new group with the initial value and advance that group. And then it's just going to proceed until all the input is read. All the group states are held in the in memory hash table and then it's going to finalize and emit the groups from the hash table again. So let's see that in action really quick. So here we have an input file and we're going to assume here that workmen can only hold a hash table with three groups. Any more than that would exceed workmen. So we can see the first tuple goes to group two, then zero, then three, you're following the lines closely, nothing anyway. And then we go back and we end up adding another tuple to group three, then another tuple to group zero and then another tuple to group two. So each group is going to end up representing two tuples. So the group size is two. And we're going to have a hash table that contains groups representing all the data from this input. Now we can go ahead and finalize that output, which just means put it into a form that's ready to emit as new tuples and then output it from that operator. But what about when we run out of memory? So here if the hash table grows larger than workmen, in this case holding any more than three groups, what we do is we stop emitting new groups into the hash table. This only happens in version 13 and later. If a tuple has a group key that doesn't match any of the groups in the hash table already, don't create a new group. You have three already, don't create that fourth group. And divert the tuple that would have created that fourth group to disk to be processed later. This is called spilling. The tuples written to disk are written in a partition fashion. When all input is read, we know that all of the groups that are in memory in the hash table represent all of the tuples for those groups. So in other words, we don't have some of the tuples for group zero and memory and some on disk, either they're all in memory or they all got written out to these partitions on disk. So when all the input is read, those groups that are in memory, we go ahead and finalize and emit those groups. Then we clear the hash table, pick up one of the partitions, use that for input, and then use that to refill the hash table. Then again, those groups that are represented in memory represent the entire groups, so it's okay to finalize and emit those. Then we repeat. So let's see this visually. Here we first add tuples, you know, tuple group zero, then two, then three. Then we have some more data in this example. So we go back up and so then we have here a tuple from group five. A tuple from group five would create a new group, but the hash table can only hold three while fitting inside of workmen. So we're going to divert that to a partition instead. But then later we get another tuple and this one belongs to group two. That group already exists, so it's okay. 
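Here is a minimal sketch of the spill-to-partitions strategy just described, for a count-style aggregate; the three-group limit stands in for work_mem, the Python lists stand in for partition files, and the way extra hash bits are chosen per pass is a simplification rather than the actual executor code.

```python
def hash_agg(tuples, key, max_groups=3, n_partitions=4, depth=0):
    """Group-and-count while never holding more than max_groups groups."""
    spill = [[] for _ in range(n_partitions)]   # per-partition spill "files"
    groups = {}
    for t in tuples:
        k = key(t)
        if k in groups:
            groups[k] += 1                       # advance an existing group
        elif len(groups) < max_groups:
            groups[k] = 1                        # room left: create the group
        else:
            # No new groups allowed: spill the tuple, partitioned by hash.
            # (Postgres uses different hash bits per pass; depth stands in.)
            spill[hash((k, depth)) % n_partitions].append(t)
    yield from groups.items()                    # these groups are complete
    for part in spill:
        if part:                                 # reprocess each partition
            yield from hash_agg(part, key, max_groups, n_partitions, depth + 1)

rows = ["a", "b", "a", "c", "d", "b", "e", "d", "a"]
print(dict(hash_agg(rows, key=lambda t: t)))
```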
Go ahead and update that group state in memory. We're going to advance that group, same with the tuple from group three. That already exists in memory. Go ahead and advance it, and same with group zero. Go ahead and advance that group again. Group zero, two, and three still each represent two tuples, group size of two. And so those still are complete groups. We're going to continue reading the input and then we're going to encounter a tuple from group seven. Group seven, that would create a new group. So we can't do that. You've got to divert that to the spill partition. So here we have again, all of group zero is represented in memory. All of group two is represented in memory, all of group three. None of the other groups are represented in memory at all. We're going to assume here that groups five and seven both fall in partition zero. So here, since again, these groups represent all of the tuples belonging to those groups, it's okay to go ahead and finalize these groups and emit them. We'll never see another tuple from group zero, two, or three again. We go ahead and emit those. Everything is fine. Now we pick up partition zero, holding those tuples from groups five and seven. We go ahead and process that. We add a tuple. We create group five. We create group seven. These are still just two groups. That's fewer than what would fit in work mems. So everything is fine. We've got all of group five and all of group seven. So we can go ahead here, finalize those, output those, and then we're done. So there are a few complications as you might imagine. So one of them is what if merely adding to an existing group actually increases the amount of memory used in the group state? For instance, array egg. In our example, we were looking, imagining an aggregate such as count, where adding to the group state just meant incrementing a counter. Similar with sum, you just add to a number. Typically these don't increase the represented size in memory. So it's okay. After you've exceeded work mem by advancing an existing group, you're in the case of count or sum, unlikely to exceed work mem even further. So in the case of something like array egg, this takes values from the input tuples and appends them into an array. The array, of course, will then necessarily grow in size with larger group size. And this can mean that existing groups already in the hash table, when we're advancing those groups, we're actually increasing the memory footprint. We wait until we're already full of the hash table already is at the size of work mem. And then continued adding on to these existing groups, we would significantly exceed work mem. But the solution here is to use planner estimates to try to leave room for growth. So here, if you imagine, we're taking the first few tuples. And here one goes to group zero, then group two, then group zero, then group three. And here it looks like we still have some free space left. Maybe we can fit that fourth group in work mem. But we come along to group five and do a tuple from group five. And then we decide, no, actually, we're not going to put that in the in memory hash table. We're going to divert that, even though it looks like we have room. We've actually set a limit to prevent the number of groups growing too much if we expect the existing groups in memory to grow. So what we're going to do is we're going to recognize that even though we're not at the work mem limit, that due to the number of groups we have, we're likely to hit that work mem limit soon. 
So we go ahead and divert that tuple from group five to a partition, just like we did before. Then later, when we're processing the rest of these tuples in the input, then we encounter, as expected, more tuples from groups two and three than zero. These end up filling out and enlarging those group states. And now we've actually used all of the available work mem. So now we're at the limit. Then we come along and we find a tuple from group seven. We're at the limit. And of course, we need to divert that tuple as well. So you can see here that we essentially preemptively diverted some tuples when approaching the work mem limit based on the number of groups we had. We estimated that perhaps we are likely to reach the work mem limit just based on those existing groups. And then prematurely entered this spill mode where new groups are not created. This allows us to accommodate aggregates of the type similar to array egg where the group state can actually grow with increasing group size. I should note that this is not a perfect solution because planner estimates can be wrong, of course, but this is a narrower case than all of the aggregates as before. And then it also mitigates the problem of substantially. The hash aggregation work also introduced some infrastructure improvements. First of all, memory accounting for the memory context infrastructure. This will hopefully be usable by other operators in the future to get a more accurate picture of the memory usage that takes into account fragmentation and that kind of thing. It also made some improvements and adaptations to the logical tapes infrastructure originally designed for storage. So with that, I'm going to pass back to Melanie and she will discuss hash join. Now we're going to talk about hash join. Hash join is a memory intensive operator. It's a method of joining where you take one relation, load it into memory and build a hash table on it and scan the other relation and probe that hash table in order to join the tuples and omit them if they match. So it takes memory in order to actually build the hash table. Not all data types are hashable and not all operators are hash joinable. You usually need some type of equality. But hash join has good performance in the average case better than nested loop join and merge join. So it's pretty performant and it's usually worth if you can hash join, trying to hash join. Hash join has two phases. Loading phase is when you take the tuples from the underlying relation and load them into memory and here you're going to hash the tuples to individual buckets. The number of buckets is based on planner statistics and it can be increased during planning or execution and it's always a power of two. So you'll take the tuple and then hash it to a particular bucket and then store it there. Then you'll scan the outer relation using the same hash function, evaluate it on the join key and load that into the, sorry, and probe the hash table with it. And if it, in this case, it doesn't land in the same bucket as the inner side. In this case, it does, but the join keys still don't match. So in cases where you have too much data and doesn't fit in memory, it's, you can still do a hash join. So what you do is you take all the data from the underlying relation and you actually just batch it up into smaller files and you use the hash value to do that so that you can join batch to batch partitions of the actual data. 
And at that point when you make batches, you shouldn't change the number of buckets because we're actually going to use the same hash value for bucket and batch. So in this case, we're going to actually build the hash table and the initial build stage, we take the underlying relation and we load it into the hash table. And so here you can see that tuple is hashing to batch zero and then to bucket two. And then we're going to build, if we have four batches, a tuple could also go to a batch file. And then based on the hash value, this tuple is going to batch one and then when it's loaded in a memory, it'll go to bucket two into the hash table. So once you've actually finished one scan through the whole underlying relation, you will have finished the initial build stage of the hash table, which is batch zero. And you also would have finished scanning the outer side. And at that point, any tuples whose join key with the hash function evaluated did not hash to batch zero would have been saved into a batch file. And that includes the outer side, right? So those will hash to a particular batch. And so any that didn't hash to batch zero, you need to save them in order to probe later batches. And so here is where we actually load batch one in. And now the hash table contains the tuples from batch one. And then we would do probing. So probing for multi batch hash join works similarly. In this case, we've already loaded and probed batches zero, one and two. And then now we've loaded batch three of the inner side. And we're going to probe it with the tuples from the outer side of batch three. So how do we get the number of batches? Well, Planner will estimate the number of batches based on statistics. But this is limited by the statistics being up to date and also the fidelity of the statistics. So things like MCVs can maybe not tell the whole picture. I showed the whole picture. So executor can increase the number of batches during planning if the estimate was wrong. So the estimate could be wrong because the number of tuples is such that they could never fit in memory. And so you have some evenly sized batches and roughly evenly sized. And you do them one at a time. But also you could have data skew where there's a particular batch or batches where lots of tuples are hashing to that batch and that batch can't fit in memory and would exceed the space allowed. So now we're going to talk about the implementation details comparing serial to parallel. This is just a table summarizing it, but there's some important key differences. So in serial multi batch hash join, the executor can increase the number of batches during the initial build stage. So that's scanning the underlying relation and building batch zero, building the hash table. So here we ran out of space in batch zero and so we double the number of batches and now we have this batch one file. Also in serial multi batch hash join, the executor can increase the number of batches anytime that it's loading tuples. So in this example, we finished loading and probing batch zero. We happened to end up with four batches and now we move on to loading batch one. So as we're loading batch one, we run out of space. So we thought we only needed four batches. We had started probing already, but we actually need more. So we double the number of batches. And so now we have eight and for batch one, any top, we have vict all the tuples from the hash table, any that, and then we load them back in. 
And based on the new number of batches, some tuples will no longer hash to batch one and will actually hash to batch five based on considering one more bit in the hash value. So this is done lazily in serial hash train, which is important. So in this example, so remember the equation for determining the batch number involves the shifting off the bucket bits and then ending it with the number of batches minus one. So when we have only one batch, it's just batch zero and memory. We don't really have to think about the batch number. But now let's say we run out of space with in the hash table. So we're going to double the number of batches to two and then evict the hash table, tuples from the hash table and then load them back, any that do not hash to batch zero, hash to batch one. And so we can see that some tuples have hash to batch one. So we'll finish loading batch zero and now fits in memory and we're going to probe it and then finish emitting tuples and we're done with batch zero. Now we try to load batch one and it turns out it's too big. So in that case, we'll double the number of batches again and evict the tuples same thing and then load them back in. And now once we've doubled the number of batches, we actually get batch three evict the tuples, some tuples will hash to batch three. Now the important part is that because we finished loading and probing batch zero before we double the number of batches, we don't actually need to make batch two when we double the number of batches to four because there are no tuples from batch one that can go to batch two based on the hash value. So even though the total number of batches is four, that's not the number of batch files that we'll ultimately have. And you can see again, when we double the number of batches, again, we get just one more additional batch file even though we have technically eight batches, we have these just few files. So one thing that can happen is that if you have data skew, some tuples, you might have a situation where no tuples are relocated even after we double the number of batches. So in that case, there's no point in continuing to double the number of batches. So we disable growth globally. Disabling growth has its downsides too. So it could be that when we are considering two bits that the tuples that have all the tuples hash to batch three won't fit in memory and we consider three bits, no tuples actually move to batch seven. However, once we start considering four bits, we could have actually moved some of the tuples into batch 11, allowing batch three to fit in memory. By disabling growth globally, we didn't have the opportunity. So we basically just couldn't load. We exceeded the work mem and exceeded the space allowed when we were loading batch three in. So now switch gears to parallel multi-batch hash join. So in parallel multi-batch hash join, the difference is that the executor can only increase the number of batches during build. So that's when it's scanning the underlying relation and loading tuples into the hash table. Only at that point is it able to change the number of batches. So here you have workers collaborating on loading tuples into the hash table, building batch files, and probing. So here two workers are building the hash table, run out of space, and they double the number of batches. 
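The bucket-and-batch arithmetic described above can be sketched as follows, with nbuckets and nbatch both powers of two: the bucket comes from the low bits of the hash and the batch from the bits above them, so doubling nbatch looks at one more bit. The helper names are illustrative.

```python
def bucket_and_batch(hashval, nbuckets, nbatch):
    # nbuckets and nbatch are both powers of two.
    bucket = hashval & (nbuckets - 1)                        # low bits
    batch = (hashval >> (nbuckets.bit_length() - 1)) & (nbatch - 1)
    return bucket, batch

def split_batch(hashvals_in_batch, nbuckets, old_nbatch):
    """Double the batch count: each tuple of an old batch b either stays in b
    or moves to b + old_nbatch, depending on one more bit of its hash."""
    new_nbatch = old_nbatch * 2
    stay, move = [], []
    for h in hashvals_in_batch:
        _, new_b = bucket_and_batch(h, nbuckets, new_nbatch)
        (stay if new_b < old_nbatch else move).append(h)
    return stay, move
```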
So what they do when they actually increase the number of batches is they actually evict all of the tuples from the hash table and from all the old batch files, destroy those, make new batches, and relocate all the tuples to the appropriate batch. That means that tuples are not lazily relocated. They're relocated proactively or preemptively during this build phase. And that's in contrast with the serial case where a batch file was only created once a tuple needed to be saved there. Also after building the inner batch files, parallel hash join will build corresponding outer batch files, scan the outer side, and build those batches. So similarly in parallel hash join, multi-batch hash join, there's a global growth disable switch that can be switched on and then growth is disabled. So here there were two batches. We doubled it to four and tuples relocated from batch zero to batch two, but no tuples relocated from batch one to batch three, so growth was disabled. Now we're called with there some downsides to this, so potentially if we had grown again, we could have repartitioned the data and actually had a batch that did fit in memory and didn't exceed work mem. However, if you disable growth too late, there's a downside too. So in this case, batch three was, we ran out of space when loading batch three. We doubled the number of batches and only one tuple moved to batch seven. And then we do the same thing a few times. And we end up batch 11, batch 19 all only have one tuple because at least one tuple moved. We didn't disable growth, but now we have lots of files that only have a few tuples. And so you have to incur additional rights and the buff file overhead. Each buff file, each file has a buffer and the overhead of that can exacerbate a memory constrained situation. So going back to other, so potential solutions for this data skew problem. You can reverse the join order. So take the outer and inner swap them and build the hash table on the other side. This has some downsides. So first of all, you can mostly would have to do this during planning. It would be very difficult to switch strategies in the middle of batches. And another thing is that you're assuming that both sides are not skewed. So you might do this and then it actually is not helpful. So if the original query was a left outer join and you switched it to a right outer join that would not is not parallelizable. You could also pick the other types of joins that exist in Postgres. Nested loop join, merge join. But again, these have to be chosen during plan time switching strategies completely is difficult during execution time because tuples may have already been admitted using one strategy. And Nested loop join performance can be much worse. So not all data types, some are hashable but not merge joinable or merge sortable. And some operators are hash joinable but not merge joinable. You can also use hash function chaining. So if you're not able to partition the data using one hash function, you evaluate the join additional hash functions on the join key until you're able to partition the data. This is a problem because the hash functions are stored in the catalog for each data type. So if you wanted to add additional hash functions, you would have to do so for all of the query data types and then users would have to do it as well. And it's a little bit limiting since hashrine is such a common join type. So block nested loop hash join is a proposed solution that is there's a patch set on the mailing list right now. 
And I've been working on it. You basically take a batch that has data skew and split it up into arbitrary stripes of tuples, the inner side, and load one stripe at a time into the hash table, probe it, and then do the next and the next. So you only get this block nested loop join performance for the batches that are skewed. The other batches still have the normal hash join performance. So basically it works like this. You have some threshold and you say that if fewer than that threshold number of tuples move from a batch when you've doubled the number of batches, then you mark the batch that retain the tuples as a fallback batch. So in this case, batch three, we noticed that it didn't really help to double the number of batches. So we mark it batch three as a fallback batch, whereas previously we would have continued to double the number of batches even if tuples were not relocated. So let's talk about the serial implementation. So in serial hash join, while we're loading the batches, if a batch would exceed the spacial out or the exceed work mem, so here batch one exceeds work mem, we're going to double the number of batches from four to eight. And now if no tuples go to batch five, the child at batch one, when there are eight batches, then we would mark batch one as a fallback batch and process it in stripes. So when we probe it, we would load just stripe zero, so it's loaded, and then we probe it with the outer batch one file. And then we reset the hash table, load stripe one, and rewind batch one outer side and probe the hash table again. For parallel, it's a little bit different because all the batches have to be built during the initial build stage, but somewhat similar. So when doubling the number of batches, there's a phase where parallel hash join will evaluate the size of all of the batches after doing this repartitioning. And if they still wouldn't fit in memory, then I can decide at that point whether or not to disable growth. Well, instead of disabling growth, we just checked, did all of the tuples from a particular batch stay in that batch? Okay, if so, or it did more than a certain threshold stay there, then we can mark that as a fallback batch and process it in stripes. The important thing to note is that for parallel hash join, because the workers will be working together, we actually write the stripe number that a tuple belongs to in the minimal tuple header on disk so that while loading, when loading, the workers don't need to know what order the tuples are in or what stripe they belong to. Implicitly, it's explicitly in the minimal tuple header. So probing works similarly. Workers work together to do the probing. They'll probe batch one stripe zero here before moving on to batch one stripe one. The big challenge that we faced in implementing this was left outer join. So left outer join semantics are that a tuple is emitted as with as an unmatched tuple is emitted with the null on the right join key, only if it has no matches in the whole inner side. So when you're probing batch one stripe zero, even if there's no match, there's no guarantee that there isn't a match for that same outer tuple in stripe one. So you can't emit it as an unmatched tuple yet. You have to wait until you've seen all the tuples in the batch. So the solution that we came up with was to keep a bitmap and because it's memory constraints situation, keep it in a file with a bit for each tuple. And when probing, if the tuple matches a tuple in the hash table, mark the corresponding bit in the bitmap. 
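Here is a minimal sketch of the striped probing with an outer-match bitmap just described; the stripe lists, the join_key function, and the bitmap as a Python list are illustrative stand-ins for the on-disk files the proposed patch uses.

```python
def probe_fallback_batch(inner_stripes, outer_tuples, join_key):
    """inner_stripes: lists of inner tuples (one stripe fits in memory).
    outer_tuples: the outer-side batch file for this batch."""
    matched = [False] * len(outer_tuples)      # one bit per outer tuple

    for stripe in inner_stripes:
        hash_table = {}
        for t in stripe:                       # load just this stripe
            hash_table.setdefault(join_key(t), []).append(t)
        for i, o in enumerate(outer_tuples):   # rewind and probe the outer side
            for inner in hash_table.get(join_key(o), []):
                matched[i] = True
                yield (o, inner)               # emit joined rows as usual

    # Left outer join: only now do we know which outer tuples never matched.
    for i, o in enumerate(outer_tuples):
        if not matched[i]:
            yield (o, None)
```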
And then this is cumulative. So now rewind the outer batch, reset the hash table, load batch one, batch one stripe one. And if we see another match, mark that corresponding bit. So at the end, one more time, we'll go back, rewind the outer side batch file. And then for each bit in the bitmap, if it's not set, then we need to emit that outer tuple as unmatched. Jeff and I have shared our slides here at this URL. And in the slides, there's some additional resources and bonus content. Now Jeff and I wanted to have a discussion about your thoughts on workmen and we can take any questions. Okay. The first question is, is workmen allocated all at once or as needed? Workmen is allocated as needed. It's not allocated all at once. Okay. And I think that that was really the only question. Hmm. Do you have anything that you want to add into the end of the talk that maybe was, that came up during discussion? Maybe not. That's okay. Okay. Thank you for coming back for the Q&A. I appreciate it.
|
Recent work on the Postgres executor has made improvements to memory management -- from accounting for the memory used to responding to memory pressure. It is important to bind the memory usage of the database with the appropriate execution mechanisms and to choose those during planning based on cost in order to meet users' expectations and ensure predictable performance. This talk will cover three such improvements: the addition of memory accounting information to MemoryContexts, the memory-bounding of HashAgg (spilling HashAgg to disk), and the adaptive hashjoin fallback to nested hashloop join. The talk will also include an interactive session to solicit feedback from users on their expectations and experiences with work_mem and the memory behavior of Postgres.
|
10.5446/52194 (DOI)
|
Welcome to the RustConf 2020 Core Team keynote. I'm Manish from the Rust Core Team, and I'll be giving a quick forward. First a brief note. This talk was recorded before any of the recent events relating to Mozilla. Since then, we've put out this initial response on Twitter as well as a post you can visit on the main Rust blog at blog.rustlang.org. This talk will be given by five members of the Core Team who we'll introduce you to, though he's speaking in the following order. Nico is co-lead of the compiler and language teams. He works at Mozilla. Mark is lead of the release team and is the main maintainer beyond perf.rustlang.org. He studies computer science as an undergraduate at Georgia Tech. Aiden is co-lead of the infrastructure team and works with Rust at Hadean. Ashley has been involved in the crates.io community and infrastructure teams. She works at ApolographQL building Rust and WebAssembly tooling. Nick is a Rust engineer at Pincap and has been involved in the compiler, language, and DevTools teams. So hello. This marks the fifth birthday of Rust, that is this year 2020 marks the fifth birthday of Rust. And what do I mean by the fifth birthday? I mean that five years ago we announced Rust 1.0 to the world. We basically said Rust is open for business, ready for use, and we're not going to break your software anymore. If you were using Rust before 1.0, you know that we broke your software a lot. We don't speak of those days. Actually, that's not true. We talk about those days all the time, at least I do. But anyway, the point is 1.0 release five years ago, very exciting. And in the time sense, we've seen a lot of people using Rust. More and more people, it seems like, using Rust for more and more things. It's kind of more things than I ever imagined. I guess, okay, that's not true either. I can imagine quite a bit of usage. But more than I dare hope for, for sure. And it's been really exciting. And we figured that now with Rust kind of growing in use, this was a good time to step back, reflect on the last five years and the values that took Rust from where it is, from where it was, to where it is now, and hopefully that will see us into the future as well. And it turns out that when we were coming up with the current Rust slogan, we actually put quite a lot of thought into it. And so what was a slogan that really captured what Rust was about? And this is the slogan we wound up with. And I've highlighted two words. It's empowering everyone. Because I think those are the two crucial words that have been the through line for Rust, from its first inception to its current incarnation. Let me explain. Because I think we've come to understand those words even better over time. Initially, we did think about empowering because we thought about empowering C++ programmers. We knew that we had existing systems programming experts, like the ones who were working in Lozilla, that were maintaining million line code bases and they were struggling. It's a lot of work. There's a lot of bugs to check on, a lot of subtle bugs with segmentation faults and irreproducible problems. And we knew that if we wanted to take those code bases to the next level, like if we wanted to extend Firefox with cool parallel programming features and so on, doing it in C++ was probably beyond our resources. It was just more and then we could muster. But if we could find a new language that would solve automatically a lot of those problems and let us focus on the things we actually wanted to do, then we could do it. 
And that's what Rust was all about. And of course, you do see a lot of usage of Rust in Firefox today and ever more every year and that's super exciting. But along the way, this initial goal of empowering C++ programmers turned out to have a side effect that we didn't anticipate. And I first kind of learned about it watching this talk from Yehuda Cats in 2014 talking to Ruby programmers. And what Yehuda was saying here was, hey, Rust is a new language. A lot of you have problems that were well suited to systems programming. You need to make this bit of code more efficient. And before, it may not have made sense to use a C++ extension because the maintenance hassle wasn't worth it. But Rust changes the calculus and opens the door. It makes it a lot more accessible. And so that began this really cool blend that I think we've seen to this day of programmers from a bunch of different backgrounds coming together to work in the same language and in the same community and bringing these different experiences. And I think that has been really great for Rust. And one of the key things that you can see even in this first example is that while we often think of programming communities as different, we think of C++ programmers and Ruby programmers, that's already a bit silly, right? Because many of us love to use more than one language. It's not distinct things. But also, the problems and the experiences from one community can really help the other. They're not separated at all. And we saw that with Rust, right? The goal here, we were targeting the problems of systems programmers. But it turned out we were opening doors and solving problems for other communities too. This is some slides from Julia Evans 2016 keynote, which I love. And we were, by making improbable programs possible, which she says, now we were empowering lots of people from lots of different communities. And that first time, I think, it kind of happened by accident, right? But after that, we started to take this approach more deliberately, looking for problems that affect one group and trying to solve them in a way that benefits the whole community. Let me give you a few examples. First one is going to be cargo and crates.io. So I have a confession. When I first started working on Rust around 2011 or so, people were talking about, you know, adding a build system to wrap the Rust compiler. And I kind of thought, I don't know if we should try to do this. It seems too hard. We're probably just going to end up recreating may files, but worse. It seems like not a good use of our time. Now why did I think that? Well, I thought may files were good enough, but I hadn't used systems like RubyGems and I hadn't used NPM. I didn't really know what software reuse could really feel like. Luckily, there were a lot of people who had and who did, and luckily we did build cargo. And we also built crates.io, right? This repository that now lets you upload your packages and download and reuse. And now I totally get it. Obviously, it's a really powerful tool to be able to just add one line of code and use somebody's package. And a key part of Rust's kind of empowerment story, a key part of making Rust programming productive. By the way, I'd like to give a shout out to the crates.io team that tirelessly manages this website. And Sean, one of the team leads, will be talking later today, although not here about crates.io. I'm sure it'll be really cool. It's closing keynote. Check it out. Anyway. 
So, yeah, and in fact, I'm really, you know, it's great that we did because if you look at messages about what people love about Rust or about their first experience with Rust or something like that, what you're going to notice is time after time again, cargo comes up every time. I mean, most times. It's really cool. And I mean, look, there's some quotes I scraped off of Reddit and so forth. But this person, they hate everything about Rust. They hate the module system. They hate how verbose it is. They think it's ugly. They wish they were JavaScript. They wish they were C++. I don't know. But they love cargo. Right? I think that kind of tells you everything you need to know. Another page I thought was interesting was this one. This talks about if you're coming to Rust from different backgrounds, what to expect. And it says, when you're coming from C++, it may take you some time to get used to the type system, to get used to lifetimes. But in the meantime, you can enjoy using cargo. Right? And you'll get hooked on that, and then you'll figure out the rest. And I think that kind of tells you what you need to know. That a lot of times, you might be thinking like, there's existing systems programmers. They're the experienced folks. They should be the ones leading the way. That's not always true. Sometimes there's people from other communities who have solved this problem. And it's often those experienced people who were kind of trying to tell you that's not a problem we're solving that benefit the most from the solution. And I can say personally that I had benefited. Hacking on Rust, we used to have this make file to build Rust. It was an interesting make file. I learned a lot about make from this make file. I like how it starts with reading adventure. You know, it had lines like this one. Let me just pull that up a little closer in case you can't read it. There's six distinct dollar signs there. Each one is like one level of escaping or something. Luckily I've forgotten all how this worked. We have replaced this since with cargo. It's a much better experience. And you know, craze and cargo and craze are great tools. They're one tool of a large family of tools. And I think that's a key part of the Rust experience too that we really tried to keep our tooling and our accessible and fill the needs that people have to be productive and enjoy using Rust. Let's look at another example, error messages. So Rust error messages around the one point of time period, they were functional. They usually told you the line of code that caused your problem. They tried to tell you why, but they didn't necessarily do a very good job. I think most of us, you've got experience with Rust, you learn not to read the error message and just to jump to the line of code and look around and figure out what was going on. And that all changed around 2016 when we started doing this big push on new error messages led by Jonathan Turner. The idea here was that we want to, well, this was part of, I should say, a bigger push to improve new users' experience with Rust. And one of the key things that new users like to do when they want to learn Rust is run the compiler on some code and they're going to get errors. And so if we can make those errors better and help them understand the problem, they're more likely to stick with Rust. That was the idea. And that's what led us to these awesome error messages we have today. 
It's also kind of what, you know, we were taking some inspiration from Elm and from some other packages in Elm, so I don't know about Elm. Whereas now it's kind of this friendly competition where people are really fighting to have the best error messages and copying ideas from one another. And I think that's awesome, right? I love that. And I'm glad that Rust is in that, is part of that movement. And in these error messages, we've really focused on bringing your code to the front and actually explaining the problem, giving you suggestions on how to fix it. And it was not, I should say, just one person's work. It was a lot of people who participated in this. It was a real community effort. We had to go through every one of the old error messages and there were hundreds and hundreds and convert them to the new format. And as part of that, we brought in a bunch of new people, including Esteban, who has since kind of taken up the lead of the diagnostics effort in the compiler. And I love this tweet where he talks about his goal of, can we make it that you don't even need the Rust book. Just get all the information you need right from the error message. I don't know if we'll ever get there, but I think we're getting closer and closer. Right? It's really cool. Esteban also is talking later in RustConf. Check that out. Now all of this stuff was focused on helping new users learn Rust, improving the experience when people first start with Rust. But is it only new users that benefit from good error messages? No. Everybody benefits. Right? Even people who have been using Rust a long time. And in fact, we're all just kind of accustomed now to actually getting useful information out of our errors. So it's been something that really worked for the whole community, even if it came out of focusing on the needs of new users. Now one last example is the code of conduct. I think most of you know, which would come as no surprise, that the internet can be a hostile place. There are a bunch of jerks. They like to say things. They're not often that constructive. And that kind of gave rise to a movement amongst software packages and open source communities and just communities in general to establish codes of conduct. And Rust is one of those projects. We've had a code of conduct from the very beginning. Our code of conduct says you shall not use harassing language, demeaning language, and so forth. But it also says things like you should appreciate that there are design tradeoffs, that not every design has a single best answer. And you can have this agreement about which one is the best. And you know, the goal of a code of conduct, part of it is to make the language a more inclusive space, to help get more people in. And you might think then that that's really what the audience is for. People who aren't currently part of the community, trying to make their experience more pleasant when they first join. That's true. But interestingly, if you look at what motivated Graydon, he was a founder of Rust, an initial person who started the project and who insisted on the code of conduct at that time, why did he do it? It's not that it wasn't for other people, but it was also for himself, right? Because he had been a part of many different open source projects in the past, and language projects. And he found that, you know, if he was going to work in this space, he wanted to work with people who were respecting these precepts. 
And so I think it's the good case of focusing on the needs of sort of newcomers to the community who might be turned off by that kind of style of discourse that had been happening, actually makes the life for everyone who's participating much, much better. So I do say when we talk about code of conduct, another key part of having a code of conduct is having an enforcement mechanism to implies moderation. And we all owe a big thank you to the moderation team that does a very difficult job. If you'd like to appreciate how difficult it is, I can't recommend enough this comment by Bern Sushi, the URLs there, it's unreaded. I pulled out one particularly interesting quote, but it's a much longer comment and all of it's worth reading about how hard it can be to have to try to sit, stand in a conversation and draw the line of what is acceptable and what is not. So thank you to them. Now I've given three examples, but there's so many more, right? Once you start looking for empowerment as this theme, and especially this idea of kind of newcomers or learning from newcomers to benefit the whole community and learning from different communities to benefit one another and so forth, you start to see it everywhere. It goes from the language design, the things like the ergonomics initiative to the way that we run our project, right? Focusing on efforts like the Rusty Dev Guide or having RFCs and teams, so the tools that we use to run the project like GitHub and so forth. They're all oriented at empowering people to participate, empowering people to use Rust and to learn. So I think it's really this central story of Rust itself, but I'd like to hand it over now to Mark, who's going to give us another look at the history of Rust, this time from a more numerical perspective. The key element in making Rust the language it is is the people who participate in our community. The Rust team also tries to automate what we can to keep our human population happy. But we currently have 305 team members. There are many more people discussing and contributing to Rust on GitHub in our forums. We are also seeing hundreds of people joining conversations for the first time just this year. Not only are people joining conversations on GitHub and the users forum, but we are also seeing much more issue traffic. Over 2,000 issues are being filed each month. We can also see that ever since we've known we have been steadily accumulating issues across the organization. More issues are being opened and closed. This is one of the reasons the language compiler and library teams are looking at restructuring their interface to the wider community. We don't yet know if the ideas we've identified will work out, but experiences are ongoing. GitHub contributions are great, but aren't the only way to participate in the community. We've had more people publish crates in 2020 than commit to our GitHub organization. What we are seeing is not only thousands of publishers, but we are also seeing many new versions being published. Of those versions, over time we are increasingly moving towards stability. Around 20% of the versions published in the last six months are 1.0 or greater. Having taken a look at all of these statistics, it is clear that Rust is growing rapidly. And growth news change. Although things can feel permanent when you join a project, the people that you are working with, the structure that you are working within, or last forever, that's not the case. People will leave, join new projects, new people will join, and this is okay. 
In fact, evolution can be a good thing. So it means the project is adapting to growth and changes. And as an example of this, the teams within Rust didn't exist when the project first started. It was as a response to the growth and the emerging responsibilities. And RFCs are another example of an evolution in Rust. But change can be disruptive. And you may have encountered cynical takes on this, such as talking about the good old days or referencing something called the eternal September, which is an event where a large influx of new users into a community overwhelmed that community's ability to end up those users and introduce them to the culture and norms. And this is important for us because it's a consensus-based project. There's no benevolent dictator for life, as you may have seen in other projects. We have to deal with this by coming to an agreement around what the roots are around roots forward. And the way that we can overcome disruption here is with a set of shared values and alignments around the things that we believe are important for the project. You know that values aren't optional, not having a set of values is in itself a value choice. And in order to choose a set of values, we need to consider what's caught with the culture of Rust. What do we want to preserve about it? And the answer is people. The Rust project focuses its values on people. And I'm going to go through three different concrete manifestations of this, where you can see this hopefully more clearly. So the first is in eliminating things that don't need to be taught. So one of the core promises around Rust, which Nico touched on earlier, was around making it feasible to write applications of a particular kind. And part of this is about reliability, one of the key promises of Rust, that it just eliminates certain classes of bugs at compile time. You don't have to think about it. It doesn't need to be a hard thing to do. Another example of this is the module system. And this is something that the Rust project iterated on. So the language team, as part of a project to identify the things that people found difficult to understand or work with, observed that the module system that was released in Rust 1.0 was confusing. And so actually created a set of RFCs to discuss and design a solution for this, which was eventually landed and is now available in Rust 2018. So if you download Rust today, you're actually benefiting from this redesign and this desire to eliminate confusing or difficult things. The next example is educating on things that are hard. So we recognize that we're not going to be able to get rid of everything that's difficult. But we can try and help people when they encounter them. So one example is Eris, which was covered earlier on by Nico, which has had a lot of effort put into it. And this user has, in years of using Rust, encountered an unhelpful error message for the first time. And as it happens, this was improved shortly afterwards. And it kind of demonstrates the success of Eris that this is the first one in those years. The other example of this education is around the focus that Rust puts on docs. So Rust has a standard for creating documentation. Rust doc that's distributed with the compiler. It has high quality documentation for the standard library. And it has automatically built documentation for every version of every crate that gets uploaded to the package registry. And this is an incredibly impactful thing when you're trying to start on a project, trying to figure out how to use it. 
You know that there will be some documentation available for you to use. The final example is providing access to spaces and power within the project. So the most obvious example of this is in the RFC's process, which is the way in which decisions are made in the Rust project to post a proposal in a public forum, let anyone comment on it, discuss it, and then have the team come to it, one of the Rust teams come to a decision on it. And it's a very open process where anyone can have their say. Another example of this is the code of conduct, which Nico covered earlier as being a way to provide spaces within the project for people to participate. Finally, the team structure. So the creation of teams is something that gives people power within the project to have their say to contribute. And this is something that is we've been talking about for a while. So this is a slide from the 2018 Rust team keynote and talks about making a space for people to step into to help out in the project. And ultimately, the reason we center our values around people is because technologies buy people for people. And so focusing your values on people and caring about people actually makes great technology. Thanks, Aiden. So we've just heard that great technology is built both for and by people. And that is a critical value involved in building great technology. But I do think that the term great technology can be somewhat kind of ambiguous, or at least vague, maybe to the point of meaninglessness. And so I'd really like to bring definition to that term by centering it around this concept of impact. And so I think one can say that great technology is technology that has impact. And I think that begs the question then of what does it mean for a programming language to have impact? And this can feel like a very philosophical question, which maybe coming from me is unsurprising to many. But I actually think this is a question that Rust has been asking itself since very, very early on. And I think it has centered a lot of the conversations and questions and decisions that the project has made over time. And hopefully you've seen a fair amount of that from what my other presenters have talked to you about today. But I think one of the moments where we became most explicit about the type of impact we, as a programming language, Rust wanted to have was during the release of the 2018 edition, which certainly had its bumps. It was a really, I think, emotional, but critical time for the project. But one of the biggest portions of it was, at least to me, releasing a new website for the project and also a new slogan. And so the slogan that after some hard conversations with the community that we eventually came up with was this, a programming language empowering everyone to build reliable and efficient software. And so this is not something that I had originally chosen, though I had originally included the language of empowering everyone. And that's the part that I really think is by far the most interesting, but also the most critical as we think about the type of impact that Rust wants to have. So what do we mean when we say empowering and empowering everyone? So Rust wants to be empowering. I think this is really central to a lot of the reasons why people like working on Rust. I also think that the concept of empowerment is central to a lot of Rust's ability to be incredibly successful. But I think the term empowering hides within it a lot of interesting and really poignant complexity. So it's not a talk, right? 
If we don't look something up in the dictionary, and we're going to do that for a while in this section. But the term empower here is a verb meaning to give someone the authority or power to do something. I don't think that that should blow anyone's mind, but something that I think is really critical in here is that at the center of the word empowering is the word power. And power is a really, really interesting and really important concept. So here what I've done is kind of an inverse lookup, but I've looked up the term politics, which now people might be going, Ashley, what are you doing? This is a talk about programming languages. And now you're bringing politics in it. And that will be that person. But what I want to say is by bringing up our slogan that we seek to empower everyone, I've actually already, we've already brought politics into it. Because what is politics? So the first half of this definition says activity is associated with governance of a country. And so that's probably what most people think of the term politics when they originally think of it. But I think this second portion is the part that is significantly more interesting. So here it says, especially the debate or conflict among individuals or parties having or hoping to achieve power. And if you recall, we have this key word empowering inside of our slogan. So the idea of politics, which I would just define as systems of power, right? Because rust seeks to be empowering, we are already seeking to be political, right? Rust wants to be political. And this may be a controversial thing to say. I could imagine that there are people who are frustrated by this. But at the end of the day, I think it is by far one of the most core and foundational concepts in rust. To the point where it's not just that we want to be political, it's that we always have been political. And that's something that we think is really important. Now this also isn't terribly surprising, right? It's true that rust is trying to be explicitly political, but all technology is political because technology fundamentally gives certain sets of people power and the ability to achieve all sorts of things the way we've seen software completely changing our society and our world, right? So technology is political whether it wants to be or not. And I think it's great that rust really wants to explicitly be political. Howard Zinn and I think many other people have said something to this effect have kind of said, you know, you can't stand still on a moving train, right? Technology is political. So we can choose to say, oh, rust, it's just a programming language. It can't be political. Or we can embrace that and try and do our best to be as deliberate and focused on our political aspects as we possibly can. So what I think, what I think about our slogan of rust wanting to empower everyone, to empower people is that our fundamental relation to power is that we're really interested in redistributing it, which is an incredibly, you know, powerful idea, no pun intended, right? So what does this mean? So we've talked about how the rust project focuses its values on people. And I don't think that many people would think that that is terribly controversial. But I think it starts getting really interesting when we acknowledge that what we want to help everyone and we want to empower everyone. At some point, when we think about things, we need to choose certain people. 
And so while in our previous conversations, we've referenced the idea that there's false dichotomy is that like a C++ developer and a Ruby developer still aren't really all that different. And I genuinely believe that to be true. I think that that can be true. Well, at the same time, acknowledging that as we seek to be a project with impact, we cannot be everything to everyone. And so we have to make choices about the types of people we focus and center on in the project. So in the Rust 2018 keynote, I made this Captain Planet slide where I talked about all the different audiences that we see in the Rust community. We have like the rust, rust, die hards that have always been rust, always forever. We've got folks from academia. We've got folks coming from lower level languages like C++, assembly. We've got some higher level scripting language people, JavaScript, Python, Ruby. We've got people who have never programmed before who come to Rust wanting to be programmers. So brand new folks. And this is part of the like really vibrant diversity that I think makes Rust really, really amazing. But we cannot, all these different audiences have very different needs. They are very, very different types of people. And we want to embrace all of them at the same time, but focusing on all of their needs equally at all times is not something that any project is capable of doing. And it makes it incredibly hard to focus. And so we've heard this slogan, right, like a rising tide lifts all ships. And what's interesting about this, I looked up this Wikipedia page for this slogan, y'all should check it out because it's not nearly maybe what I think the Rust ideology has kind of adopted this slogan to mean. But in here, there's like a hidden judgment, right? I think, and the judgment is something that I think we can all agree on, which is, so we say a rising tide lifts all ships. The judgment here is that a rising tide should lift all ships. And that's something that we'd really want to be the case. Now this is an aspirational slogan, right? And then if we really think about it, we say a rising tide should lift all ships, you know, it often does it. And that's something that we don't think is acceptable, right? Like we think that a rising tide should lift all ships. So as I said before, focusing on the needs of everyone all at once can be incredibly difficult. And I think for a project that's largely built of volunteers, our focus and our attention is by far our greatest resource. And it's certainly the research from which we can derive the absolute most value. And so the real question here is as we focus ourselves on people as a project that values like the humanity of technology, like what should we focus on? Where should our focus be? And so what we've settled on is we really genuinely believe that we should center on people who have the most need. So these are people who, you know, have the least amount of resources, the least amount of ability to help themselves. We think that those are the folks who need the most help. And as a result, are the ones who deserve our focus. And the awesome thing about this that we've heard from Nico is that when we focus and center on the people with the most need, the things that we're able to produce actually end up helping everybody sometimes in really surprising ways. But at the core of this, this is our political choice that we want to center the folks for the most need. 
So as we look at this slogan, a language empowering everyone to build reliable and efficient software, I don't want you to think that we've thrown this everyone out. That part is still true, but there's a core of it, which is that we are a language empowering everyone, but especially folks who didn't think that systems programming was for them. There was a big argument about whether or not we should include the term systems programming in our slogan. And we ended up taking it out largely because it's been such a gatekeeping term. It's kept people away because so many people self identify as someone who could never be a systems programmer. And while it's not in our slogan, because that sentiment around the gatekeeper-ness of the idea of systems programming is 100% true, at the core we still want to take that concept of systems programming and break it free from that sense, because we want it to become something that is accessible to everyone. And in that sense, we're trying to redistribute the power of systems programming. And that redistribution is our impact. And it is fundamentally political whether or not you find that distasteful. And it's something that I think the Rust project is incredibly excited about. And so we've done a lot of really interesting things. There are things where we've been able to see this today. So this is Rebecca. Rebecca is one of the speakers today here at RustConf. And she had tweeted that it kind of owns that more than 50% of the RustConf speakers are trans this year. And I got to say, that is so friggin' cool. I have never been to a conference where that is true. And I am really proud that the conference where I get to experience that is RustConf. And I also don't think it's a coincidence. I do not think it's by mistake. And it's really, really exciting to me. But simply because we have wins like this doesn't mean that we can just fly the mission accomplished flag. We have done some amazing things. But there's so much more that the Rust project wants to do. If we really want to achieve that ambition of redistributing the power of systems programming, we've got a lot of work to do. And if there's anything you hear from me today, it's that Rust has some pretty amazing ambitions. And these ambitions are what motivate me to work on Rust. And that's fucking awesome. But our ambitions right now are currently greater than our capacity. We've got dreams of doing some pretty big things, but we want to do them right. And doing them right means that we need to create more capacity for ourselves to be able to achieve those goals. I was talking with my friend Jen Schiffer recently, and we were talking about these concepts of accountability and aspiration. I think I've covered pretty seriously that we have some pretty huge aspirations here at the Rust project. But what we really need now is accountability. And I think that we know what we want to do, and we now need to build the organization that's going to get us there. And to do that, we're going to need all of your help. So it's time for us to grow. And everybody who's in this audience today, who's watching this at home, we really need you to get involved. And so if these ambitions and these aspirations that I've shared haven't scared you, but have excited you and describe a future that you want, we need you to come help us get that done. And here's Nick, who's going to tell you a little bit more about how to do that. So we've got to grow to achieve the things we want to achieve. 
But we've got to grow sustainably. You've probably all seen that Rust, for the fifth year running, was the most loved programming language according to the Stack Overflow survey. I want to look back in another five years and see us getting that spot for the 10th year running. And I want to look back and see Rust getting better and better. I don't want to look back and think, well, that was a project with amazing potential, and it's such a shame that it burnt itself out. So how are we going to achieve that? There's a well-known proverb in New Zealand: He aha te mea nui o te ao? He tangata, he tangata, he tangata. What is the most important thing in the world? The people, the people, the people. This holds, I think, for every project and every organisation, very much including Rust. We have to look after our people in order to have a sustainable project. So how can we look after each other? Let's start by having a look at some of the concrete things we're going to be doing over the next year or so with this goal in mind. The first thing I want to talk about is the 2021 edition. And for some of you who were part of the 2018 edition, this might seem like the absolute opposite of looking after each other. If you look at RFC 2966 and the way that it lays out how we'd like to run the 2021 edition, the idea is that we're going to have editions on a regular cadence, much like our six-weekly releases that ride their trains. The editions are going to ride their own trains; it's just that their train takes three years to come. So that means that there should be no high-pressure deadlines to get stuff finished for the edition. And hopefully that will make it a much less stressful experience. We are intending to continue to improve our governance and organizational structures. And now's a good time to say thank you to the governance working group. They've been active for the last year roughly, and they've done some really great work looking into the various governance structures that we have and ways that we could make them better. We're also actively exploring starting a Rust Foundation. The primary motivation for that is financial. And that's an important part of looking after each other in the capitalist societies in which we live. We talked about RFCs earlier in this talk, and RFCs are a really important way that we keep the evolution of Rust open, but they can also be hard work. We're looking at a whole bunch of ways that we can improve the RFC process. The compiler and language teams are experimenting with some of those changes already. We are trying to make what the core team does more transparent. Now, some of the work that we do necessarily has to remain private, but a lot of it doesn't. And a large part of our weekly meetings is public. Pietro announced back in July that we're experimenting with a way to open up the public part of the core team agenda so that everyone can follow. I also want to go over some of the concrete ways we hope to continue to empower our users. On the language side of things, we're going to continue to focus on ergonomics and usability. And I think the development of async-await is a great example of that. Async programming is difficult, but async-await certainly makes it much easier. And as we go forward, hopefully we're going to make it easier still and have a more complete solution. Cargo Clippy is a great tool. It's really helpful for beginners to learn what idiomatic Rust looks like. 
And it can be useful even for experienced users to avoid some of the possible foot guns. Cargo Clippy is getting a fixed feature so that it can actually fix your code as well as just tell you what's wrong. IDE support is really important to rust. It's something that is asked for for better support every year in our surveys. And I think, again, as a learning tool and as a tool for improving the effectiveness of programmers, IDE support is a really empowering technology. Rust Analyzer is an alternate IDE backend that is not based on the Rust compiler. And it's because it's purpose-built, it's faster and more responsive than the RLS. And RFC 2.9.12 lays out the path for making Rust Analyzer the default for Rust IDE support. And Rust Analyzer is available right now on Rustup and the VS Code plugin supports both the RLS and Rust Analyzer. We want to empower users to become contributors to the Rust project. And the Rust Forge and the Rust C Dev Guide are two great resources that have been really developed over the last year or so for exactly that purpose. The infra team have also done great work to make our infrastructure easier to contribute to. It's hard to think of anything more important for building a sustainable community than safe, inclusive spaces. So I'd like to say thank you to the mod team at this point because they're a big part of that and they do a really difficult job. Thank you. As far as maintaining our inclusive spaces, I think that the Rust project started really strong. We've had moderation, code of conduct, and an official mod team from very early on. As the community has got bigger, maintaining these spaces has become harder. And a lot of the work that the core team has been doing but we can't really talk about in public has been dealing with a lot of these issues. And I think that the work we've done over the last year should help us much more effectively to maintain the inclusive spaces we have going forward. We believe that having a diverse community is really important. And to be honest, this is somewhere where we haven't done as well as we would have liked. There have been some good bits, some highlights, Rust Bridge, increasing Rust Reach just to name a couple. But overall, this is not somewhere I think that we've excelled. We need to do better, and we will. Over the coming year and into the future, we're going to make building a diverse community one of the highest priorities for the Rust project. So that all reflects our strategy. But culture eats strategy for breakfast, apparently. Let's have a look at how we'd like the culture of the Rust community to look. First of all, please look after yourselves. It's tough times around the world for lots of different ways for different people. Don't let yourself get burnt out, but added stress makes burning out so much easier. If it's helpful for you to do less work, please do do less work. Be kind to yourself. We don't need heroes to be a sustainable project, but we do need people who are going to be around for the long term. And please look after each other. Again when times are tough, we just need to treat each other with a little bit more kindness than usual. A really important part of that is mentoring and community building. It's really essential work to do if we're going to have a sustainable project and a sustainable community. 
And I think Rust really does have one of the best tech communities around, but we can always do better and as it grows we've got to keep doing this work in order to keep it good and to keep it getting better. I just want to shout out to the awesome Rust Mentors website which was put together by Jane Lusby. This is a great website which simply connects mentors with mentees to help facilitate mentoring. We talk about empathy in the Rust project, probably more than most tech projects do, but I think it's really important. Empathy is how you understand the needs of another group and being empathic to our users is how we've developed the language into being a good language frankly. And being empathic towards each other is really important for having productive and enjoyable discussion rather than horrible, stressful arguments online. I think to prioritise empathy is a great ask to finish on. That's all from us. I hope you recognise the community you're part of in this talk and hopefully there's something to think about about how we can keep it awesome as we grow and as the language matures. We'd like to finish by saying some thank yous. We'd like to thank the RustCon for organisers. Maria and Skylight have basically made everything work including the transition to an online conference. Now, her is the chair of the programme committee and has also done a lot of the organisational work. And everyone on the programme committee have done a great job putting together a really good programme that we hope everyone enjoys. And of course our sponsors for coughing up the cold hard cash needed to make something like this happen. That's all. I hope you enjoy the rest of the conference.
|
Opening Keynote
|
10.5446/52195 (DOI)
|
Hey folks, my name is Rebecca Turner. I use she, her pronouns and I'm going to be talking about Rust for non-systems programmers. Don't worry about taking notes. All the code I'm going to be showing you as well as these slides and a rough transcript are available online. And I'm going to wait a little bit so people can copy down these links. So I'm a non-systems programmer. Before learning Rust, I mostly wrote Python and now Rust is pretty much my favorite language. But if you looked at the Rustlang.org website before 2019, that might not make a lot of sense to you. Here's the Rustlang.org website at the end of 2018, right before they rolled out the new site. The headline emphasizes systems programming, speed and memory safety. All things I don't directly care about that much. And here's the new website today in mid 2020. Now Rust is about empowering everyone to build reliable and efficient software and the website focuses on reliability and productivity. But a lot of the documentation is lagged behind and still assumes that new Rust programmers already know C++ or something similar. That made it really hard for me to learn Rust. I've never really understood memory management, so a lot of the documentation was pretty inaccessible for me. I want to talk about how we can use Rust as non-systems programmers without getting too bogged down into the details of optimization and memory management. Before we start writing code, let's take a quick look at some of the things Rust makes strikingly easy. And don't worry if I go through these quickly, we'll come back to these features soon. Rust can do command line argument parsing generated from a type definition with automatic typo correction while generating tab completion scripts and man pages at compile time. Rust can give great error reports for complex errors while automatically deserializing JSON to a custom type. And Rust can output fancy test diffs with a one line import that integrates with the default test framework. Rust can do a whole lot more too, but I don't want to just list random Rust features for 30 minutes. When I was learning Rust, a process that had three or four false starts since about 2016, I kept getting halfway through writing a program before I'd get stuck on a compiler error I couldn't figure out. So we're going to write a non-trivial Rust program together and see how we can solve a lot of common problems in a rusty way without worrying about the finer details that I have a hard time understanding. There's a lot of Rust features and tools that aren't important to me as a Python programmer, and I'm going to pretty much skip over those entirely. We're not going to optimize anything because the totally naive program we're going to write takes a tenth of a second to run, and almost all that time is spent waiting on some network requests. We're not going to talk about macros or a lot of the fancy type system features Rust has or pointers. I'm not even going to say the words heap or stack or allocate. If it wouldn't matter in Python or JavaScript or Ruby, it wouldn't matter here. I have ADHD, and it varies from person to person, but one area I really struggle with is working memory, which is roughly how much information you can hold in your head at once. And as an engineer, that means that I can't hold much of the program concept in my mind while I work. It's really important that I have a powerful compiler, linters, and tests because otherwise I have no way of knowing that the program is correct. 
Type annotations and auto completion aren't optional niceties for me. It's essential that my tools tell me which operations are supported on which variables, because otherwise I have to look them up nearly every time. Rust really shines in all these areas. I work with my compiler to check my work, and it helps me feel a lot more confident that my programs do what I think they do. Before we start looking at code, I want to point out a few of the tools that make writing Rust easy and fun. We have rustdoc, which compiles doc comments written in Markdown to web pages, complete with search, links, and more. We also have mdBook for writing longer form, narrative style documentation. mdBook was created to write the Rust book, the main source of Rust documentation. We have two very good language servers for auto completion, definition jumping, quick fixes, and more. RLS is distributed with Rust itself, and Rust Analyzer is a community project. We also have Cargo, a package manager and build system integrating with the crates.io package repository that handles everything from dependency resolution to building documentation, running tests, and benchmarking. Here's the generated documentation for the rand crate, which you can find at docs.rs. When we open it up, we can see the overview they wrote, and we can even search the crate's items with keyboard shortcuts. If we click on the thread_rng function, we get to this definition. If we click on that return type there, we can check out the documentation for ThreadRng. We see a description, and if we scroll down a bit, we can see the traits that ThreadRng implements, and we'll come back to what a trait is soon. RngCore looks interesting, so let's find out about that one. We see a description at first, and then if we scroll down, we can see the required methods and their documentation. Having a uniform style and interface for documentation is really helpful when exploring a library's API and when jumping between multiple libraries. Here's a pretty simple Rust program just to show off a bit of syntax. The use statement imports names from libraries. Colon colon is used as a path separator and namespacing operator. Next we define a function with the fn keyword. The function named main is the entry point. We call the var function in the env module and assign the value it returns to user. Rust figures out the type for us, and var returns a Result, so we have to unwrap it, which will crash if there's an error. We'll talk about what all that means in a minute. Next we have an if statement, which has braces but no parentheses. Note that we're comparing strings with equals equals, so we have operator overloading, and then we have this println! macro. The exclamation mark at the end of the name means it's a macro, and the string literal there is actually turned into a series of formatting instructions at compile time, and the compiler checks that we have enough arguments and that they're the right type. We can run cargo build to compile the program, and then we can run it, and it does what we expect. Although if the USER environment variable is empty, it might be a bit confusing, and if USER contains invalid UTF-8, it'll crash the whole program. So this type that env::var returns is Result, which is an enum, which means it's a type that can be one of a number of different things. 
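For reference, Result as it's defined in the standard library looks like this, reproduced here just to make the shape of the type concrete:

```rust
// The standard library's Result type: a value is either Ok(T) or Err(E).
enum Result<T, E> {
    Ok(T),
    Err(E),
}
```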
It's also a generic type, so we can pick any two types, T and E, and use a Result type, which can either be an Ok variant containing a T value or an Err variant containing an E value. One way we can deal with that error is by matching on it, which is a bit like an is-instance check. Here we'll just handle an error by printing a simple message. So if we have an Ok value, we take that and run our logic from before, and if we have an Err value, we throw it away using the underscore as a placeholder or wildcard, and just print that little message. So now when we run our program with invalid data, we print an error message instead of crashing. We'll talk about some other ways to handle errors as we go, but for the definitive rundown, check out Jane Lusby's talk, Error Handling Isn't All About Errors. But this talk is about Rust's value as a practical programming language, which means doing a lot more than writing hello world. So let's write a program in Rust and explore some of the ways the language helps us out. I have this receipt printer hooked up to my computer, and it's super fun to play with. There's no ink, so paper is incredibly cheap, and they're designed for restaurants and retail, so they're incredibly durable. I always forget to check the weather in the morning, so I want to write a program I can set to run before I wake up that tells me the weather and how it'll feel compared to the previous day. Weather APIs come and go, but right now, OpenWeather is providing decent data for free, even if the default units are kelvins. Here's a simple call of their API in Python. First we load the API key from a JSON file, then we make a request, and finally we print out the response text. When we run it, we get a minified JSON blob as output. So let's work on recreating this in Rust. Here's a start at a line-by-line conversion of that program. First we're using the include_str! macro, which actually reads a file as UTF-8 at compile time. We'll work on opening files in a bit, but this works well enough for now. Next we're going to use the serde_json crate to parse that string into a JSON value, and then we get the API key out of the object as a string. Each time we assert something about the type of a value in this object, we need to unwrap it, because we might not have a value of the type we want, so we need to deal with that somehow. Note that this isn't entirely unique to Rust, though. Our Python program would also crash if the API key object wasn't a JSON object, or if it didn't have a key named API key, or if the value of that key wasn't a string, but Rust makes us be explicit about all these assumptions that we're making. That's not necessarily a bad thing. It helps us figure out where errors could happen, but it is awfully verbose and painful to write like this, but we do have a better way. Here we're declaring a struct, which is roughly a class in the sense of a blob of data with named fields and methods, and then we're deriving some traits for it. So what's a trait? A trait is a set of method signatures that specify some interface. Here the From trait lets us convert from one type to another. We can implement a trait for a type with an impl block. Note the Self keyword there that indicates the impl block's type. Rust makes refactoring a lot easier and lets us talk about things like a function that returns the same type as the value it's called on. Rust lets us do a lot of funky things with traits, and particularly traits with generic types like these. 
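A sketch of the kind of generic trait implementation being described next. The real From and Into live in the standard library, which already ships this blanket impl, so the traits are redefined locally here just so the snippet stands alone:

```rust
// Local stand-ins for std's From and Into, purely for illustration.
trait From<T> {
    fn from(value: T) -> Self;
}

trait Into<U> {
    fn into(self) -> U;
}

// The interesting part: for any pair of types T and U, if U knows how to be
// built From a T, then T automatically gets an Into<U> implementation.
impl<T, U> Into<U> for T
where
    U: From<T>,
{
    fn into(self) -> U {
        U::from(self)
    }
}
```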
Here's the Into trait, which is From in the other direction. We can implement Into<U> for all types T as long as U implements From<T>. The implementation is pretty simple if you can wrap your head around that: we call the U::from method. And that's pretty magical. We only have to implement one of Into or From, and we get the other trait for free. So if we have this implementation of From<String> for OpenWeatherConfig, we can use the into method on the String type to convert to an OpenWeatherConfig object. But about that to_owned call there, what's the deal? Shouldn't a string literal already be a string without calling another method? Well, in Rust, string literals get baked into the compiled binary directly. Because that data is always sitting at a fixed location in the binary, we can't change it without copying it into memory first, because if we changed it there, it would change it for everyone else using a string literal. So if we want to have a string that belongs to us rather than one referencing some data elsewhere in the program, we have to call the to_owned method to convert it, which creates a new String object and copies the data we need into it. Back to our OpenWeatherConfig struct. We don't have to implement every trait by hand like we were doing with Into and From. The other option is to use a derive macro, which is a function written in Rust that reads the type's definition as a syntax tree and automatically generates an impl block for us. There are usually a few requirements for deriving traits; in particular, for traits like Debug, Clone, and Deserialize, we need all the types the struct is composed of, which here is just String, to implement the same trait. Debug lets us pretty print the struct's data, Clone lets us deeply copy it, and serde's Deserialize trait lets us deserialize it from JSON or, with other serde libraries, XML, YAML, TOML, protobufs, and more. Here's what deserializing to a value looks like. Note that we don't need to explicitly construct our OpenWeatherConfig object. That, along with parsing the JSON, matching up keys to fields, and recursively constructing other deserializable values, is handled by serde and serde_json. Now when we run this, we get some nice pretty printed debug output by default. That's not my actual API key, by the way. Don't worry. The next change I want to make is adding structopt, which generates a command line interface from a struct definition. Instead of declaring all our arguments as strings and pulling them out of an untyped hash map, we just declare them as struct fields, which means we get things like auto-completion for our command line options, along with bonuses like detecting that Option fields aren't mandatory and Vec fields can have multiple values. We get a lot of perks from structopt, including this great generated help message. And we even get help with typos by default. The next thing I want to do is add some error reporting, so we don't have to unwrap everything and cause panics when something fails. The eyre crate by Jane Lusby gives us the beautifully formatted error messages I showed off at the beginning of the talk and has a lot of other functionality we want to explore here. Now, we can handle errors with the question mark operator, which is a pretty simple but important bit of syntactic sugar. The question marks are transformed into roughly these match statements. 
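A sketch of that desugaring, using a generic I/O example rather than the talk's exact code; the real expansion also converts the error type with From::from, but the shape is the same:

```rust
use std::fs::File;
use std::io::{self, Read};

// With the question mark operator:
fn read_config(path: &str) -> Result<String, io::Error> {
    let mut file = File::open(path)?;
    let mut text = String::new();
    file.read_to_string(&mut text)?;
    Ok(text)
}

// Roughly what those question marks expand to:
fn read_config_desugared(path: &str) -> Result<String, io::Error> {
    let mut file = match File::open(path) {
        Ok(f) => f,
        Err(e) => return Err(e),
    };
    let mut text = String::new();
    match file.read_to_string(&mut text) {
        Ok(_) => {}
        Err(e) => return Err(e),
    }
    Ok(text)
}
```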
We just bubble up the error to the caller. It's a little bit like throwing an exception, but we don't quit an arbitrary series of functions. We only go up one layer, and the type system doesn't let us ignore it. Using the question mark operator again, we're going to use the wrap_err methods from eyre's WrapErr trait to more accurately describe what went wrong. We just write a bit about what we were doing that might have caused an error, and then that string will get displayed if the error report is printed. It's a pretty simple step provided you do it from the start, and it makes debugging a lot easier. Here we can try to use a non-existent file or an invalid one as our config, and we can see the error messages we get. These are pretty simple on their own, but they're especially useful when we have a bunch of layers of error context to figure out what we did wrong. Unlike exceptions in a lot of languages, we don't just get an enormous unreadable stack trace by default. Now we're going to use the reqwest library to make a simple call to the OpenWeather API. We create an HTTP client object, call the get method with the endpoint URL, add some query parameters, and send the request off. We can see that when we print the response object, we get all the fields we might expect: headers, a status code, and so on. And we can also print the response text, which is this big minified JSON blob. We're going to deserialize that too, but first let's clean up our interface to the OpenWeather API. So let's unify our config file with the API client. Instead of passing an API key into every function call, we can keep it in the same struct that holds the reqwest client. And because the client has a default value, we can tell serde to use that instead of expecting it in our config file. Now we can just read our config object from the same JSON file we were using before, without even a constructor method. Now, to make our API a bit cleaner, let's start implementing methods. This gives us something that looks a lot like the classes we may have used in other languages. And although Rust doesn't have inheritance or subtyping, generic functions and traits can get us pretty close. An impl block lets us put methods on types. Like Python, Rust doesn't have an implicit this object you can reference. You need to write it explicitly as self. We also have angle brackets after the function name to indicate that the function is generic. Here we have one generic parameter named Response, and the colon indicates a trait bound, which means the Response type needs to be a type with an implementation of DeserializeOwned, which is exactly what deriving Deserialize gives us. Essentially we've copied a type parameter from the serde_json from_reader function so that we can deserialize to any type we define. We can define structs for the API responses. These are pretty much copied from the OpenWeather API docs. And then we can define a helper method to make that request directly. Note that we don't need to annotate the generic types for the self.get call, although we can if we want. The compiler is smart enough to figure out what the type parameter needs to be from the return type of self.get on its own. And then we can use the new method in our main function to get the forecast data as a richly typed struct. One thing I want from my forecast is to tell me if today is going to be warmer or colder than yesterday. So I'll create a TempDifference enum, and then a helper method to get the appropriate TempDifference for two floats. 
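A sketch of what that enum and helper might look like; the variant names and the two-degree threshold are invented here, so the talk's actual code will differ in its details:

```rust
#[derive(Debug)]
enum TempDifference {
    Warmer,
    Cooler,
    Same,
}

impl TempDifference {
    // Compare today's average temperature to yesterday's.
    fn from(today: f64, yesterday: f64) -> Self {
        let delta = today - yesterday;
        match delta {
            d if d > 2.0 => TempDifference::Warmer,
            d if d < -2.0 => TempDifference::Cooler,
            _ => TempDifference::Same,
        }
    }
}

fn main() {
    // 72 degrees today versus 65 yesterday: noticeably warmer.
    println!("{:?}", TempDifference::from(72.0, 65.0));
}
```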
Use that constructor function, which takes two floats, calculates their difference, and matches it to the correct TempDifference variant. We're also adding conditionals to the match patterns, which helps make it a bit clearer that we're determining which range the delta is in. I'm really bad at arithmetic stuff, so I want to write a few tests to make sure I got the subtraction order and everything right. First we have this cfg(test) attribute, which means the entire test module is conditionally compiled, so our tests don't get lumped into our other builds. We have to import the functions and values from the parent module, that is, the rest of the file, explicitly. And then a test is just a function annotated with the test attribute. And finally we can write asserts with the assert! and assert_eq! macros. We can run our tests to make sure that we've written everything correctly. And another little thing I like about Rust: the type system lets me describe and check a lot of my code before it compiles correctly, so I end up writing tests that crash and fail immediately a lot less often than I do in other languages, which is a big boost to my self-esteem. I also want to be able to state various things about a collection of temperatures, like their range and their average. So I'll have a Stats struct that will handle that computation, storing the minimum, maximum, average, and number of values. Let's implement the Default trait for Stats, which gives us a way to construct a default value for a type. It's like Go's concept of a zero value, but Rust doesn't require that every type implement Default, because that's not always meaningful. For example, types like file handles don't have a reasonable default. We're picking infinity for the starting minimum value, because every float is less than infinity, and negative infinity for the max for the same reason. Note that we have associated constants even for our primitive types. In Rust, primitive types are treated just like any other type, as opposed to Java, where we have to treat reference types and primitive types really differently. Now we can construct a Stats object from an iterator of floats. We can start by initializing a mutable return value and a running sum of the iterator's elements. They need to be mutable so that we can change their properties and values as we go through this computation. Next, we take each value in the iterator and update the return value's minimum and maximum values, if applicable, as well as the element count and running total. And finally, we compute the average value and return. Then we can gather the temperatures for yesterday into a Stats object. Note that because we're using lazy iterators, mapping each data point to the temperature it felt like doesn't require writing a whole new array. We just generate the data as we go, and there's no storage overhead. We can do the same thing with our forecast, making sure to limit the forecast to 24 hourly data points. And then we can get a temperature difference between the two days. To finish up, let's print out the data we've gathered. First, I want to print a smiley face for good weather, so I'll check if the average temperature today is between 60 and 80 degrees. Then we'll print the first line, truncating today's average temperature to two decimal places. And then we're going to print the rest of it. There's a bunch to break down here, so let's break it down. 
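A sketch of the kind of output code being broken down next. The strings, numbers, and thresholds here are invented, and the TempDifference enum from the earlier sketch is repeated so the example stands alone:

```rust
#[derive(Debug)]
enum TempDifference {
    Warmer,
    Cooler,
    Same,
}

fn main() {
    // Made-up values standing in for the computed averages.
    let today_avg = 72.34_f64;
    let yesterday_avg = 65.10_f64;
    let difference = TempDifference::Warmer;

    // Only print a smiley face for comfortable weather.
    let nice_weather = today_avg > 60.0 && today_avg < 80.0;

    // Named ("keyword") arguments only exist in the formatting macros,
    // and {today:.2} truncates the float to two decimal places.
    println!("Today will average {today:.2} degrees.", today = today_avg);
    println!(
        "That's {feels} compared to yesterday ({yesterday:.2}){end}",
        // match is an expression, so it can be used inline as an argument.
        feels = match difference {
            TempDifference::Warmer => "warmer",
            TempDifference::Cooler => "colder",
            TempDifference::Same => "about the same",
        },
        yesterday = yesterday_avg,
        end = if nice_weather { " :)" } else { "." },
    );
}
```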
First, because println! is a macro, we can do weird things with the syntax, like this keyword argument syntax that's only used for the printing and formatting macros. Next we have a match statement. Rust's if-else and match statements return a value, so we can use them inline like this for argument values. And then we're going to finish with a smiley face if today's going to be warm and nice weather, or a period otherwise. And after printing all that information out, our program is done. So building and running it, we can see what the final output looks like, after I fixed several issues with missing or superfluous white space. So now all we have left to do is print it out. We pipe our program's output to lp to print it. And here we go. And there's the same receipt a little bit bigger. And before I go, let me leave you with one last piece of advice. If you're writing a Rust program and you're trying to work with references and it's just not working, clone your data. Cloning can fix a lot of annoying problems, and it's rarely a performance issue when writing scripts or command line interfaces, particularly when compared with dynamic languages. But if you end up in those circumstances, you can always ask another Rust programmer for help. Our community is full of kind and helpful people willing to share a few minutes of their time to help fix errors you don't understand. I certainly couldn't have learned Rust on my own. And as long as you're respectful of your peers, we're all glad to help. Everything I talked about is just a tiny portion of what you can do with Rust and what Rust can do for you. There are so many features and tools I wanted to talk about that I didn't have time for. Things like adding methods to foreign types, type safe numbers, unit conversions, and more. Thanks so much for listening, and I hope you do some amazing things with Rust. I'm Rebecca Turner and this has been Rust for Non-Systems Programmers. Have a good one.
|
Rust improves on C and C++ by providing memory safety and better concurrency primitives. But for a lot of tools and programs, dynamic languages like Python, Ruby, and Perl are already memory safe and fast enough. “Rust for Non-Systems Programmers” showcases some of the reasons I like Rust as someone who's not a systems programmer, including mutability tracking, development tooling, great documentation, a welcoming community, type-inference/algebraic datatypes, and error handling.
|
10.5446/52196 (DOI)
|
Hello and welcome to RustConf 2020. My name is Samuel Lim. I'm a developer and computational biologist as I perform research in computational biology and I develop software systems to help automate and optimize it. And we'll be looking at fast and safe rust for biology and computational biology. As a note, computational biology and bioinformatics is very broad and the scope of biology is actually even broader. So to keep focus, we'll be looking at computational biology from the angle of RNA sequencing and RNA sequencing analysis, aka RNA-Seq, and we'll see how Rust can play into that. As a crash course into RNA-Seq, what in the world is RNA sequencing anyways? In one way to phrase it, RNA sequencing is one way to sequence the qualities, the presence, and the quantities of RNA in a sample, in your fragments and in your reads, in comparison to reference data like a transcriptome. Now these are quite a few different terms, so we'll define them incrementally. We'll start from RNA and then we'll see why sequencing is important and we'll talk a little bit about alignment along the way. As a working definition of RNA, you've probably heard of DNA first and you've probably heard of it more often. It's the stuff that makes the replication of your entire genetic sequence possible and you wouldn't be wrong to relate RNA to DNA in this sense. Both RNA and DNA are mediums for genetic information, although they do come in slightly different forms, but their purposes can be different. In biology, we have something called the central dogma, which dictates how RNA comes to be and what it can be used for. Originally we start with the DNA in human cells case in the nucleus and we read through that DNA base by base, ACGT, sequence by sequence, and then we translate it from one character to the next and then we transcribe it, meaning that we take the DNA and we take those individual bases and we find its complement in RNA. This becomes ACG and you and new characters. This RNA then matures into what we call messenger RNA, which gets sent out to all different parts of the cell. This is what we get as the communication source for synthesis of proteins for different factories of the cell. In general, the original source information, the genetic sequence and the sum of all DNA found in the cell and in the nucleus is what we call a genome. From that genome, we can find that there are many different kinds of RNA produced and used whether it be messenger RNA, tRNA, rRNA, and so on. They can exhibit information from the genomes as we've stated and we can compare them and see how similar they are by aligning them. They can serve as communication between genes and protein factories so that we can actually get from the static source of our genetic sequence to the active source of cell interaction. We can describe the expressions of these genes based on those interactions and we can actually define behaviors and subset the processes by understanding how RNA works. If RNA can describe the expressions of genes within a cell, that means the cell can have an identical genome sequence, an identical genome, but as it's changing and as it's producing different proteins and as it's going to different factories and as it's communicating differently, the cell can display different behavior even with the same genetics. The collection of all this dynamic defined and transcribed RNA in these cells is collectively what we call a transcriptome from transcription. 
As it's dynamic and serves as a bridge between our DNA and our proteins, RNA can help us to investigate the differences between individual cells as single cell RNA analysis or in groups of cells or communities as we would see in bulk RNA. We can look at specific expressions of genes or sets of genes and interrogate them by themselves and look at how their sequences can compare. And finally, we can look at how interactions from RNA, we can actually profile different objects and different characters in the microscopic world, whether they be cells in our human body, bacteria, viruses and fungi. And as a note, as many of you have probably been affected by COVID-19, COVID-19 is an RNA virus, which means that the virus's entire genetic sequence is contained in a capsule and its format, its information format is RNA. So we've looked a little bit about how RNA comes to be, why it's important and how sequencing can play into that a little bit. But how does that relate to computing? It comes to computing where we actually need to process the information that we've gathered. RNA-seq processing is how we can quantify, compute and analyze the data that we've taken after we've left the wet lab. And after we've done our isolation of different samples, of different reads and different fragments, everything from our information to more information and inferences we want to gather can go digital. And the rust and the applications and rust are coming soon, I promise. From this basic understanding of the mechanisms of RNA and RNA-seq, there's a simple methodology that we can take. And that's we read the information from the files and the experimental samples that we've taken and turn them into data streams that we can manipulate. We map and align these data streams to reference data where they're applicable so we can take the information that we have, we can position them, and we can compare them, see the similarities and the differences, and categorize them. And we can finally analyze the output, depending on whether we want to quantify the categories and the expressions of different genes or different sequences, and we can send these results we have for further processing in other pipelines or other programs. Now RNA-seq tools are a broad spread. They can be focused on many different analyses or different methods to achieve analysis. And to name a few, some of them may be worried about the quantification, the categorization, and the analysis of expression of different genes and different RNA sequences within our data stream. And each have its own uses and advantages, but most are largely disjoint in terms of their programmatic tooling. We needed something that could actually bring the exposed functionality, so the command arguments, the positional arguments, and the general command line interface of these many different tools with many different functions together into one unified surface. And I needed a language that could do that. And that's where Rust comes in. With no need for further introduction, Rust is usually used for safety in the sense of both memory safety with Rust Barochecker and type safety within its entire type system. We look at performance at the level of systems programming, and we look at concurrency both in its primitives and in the ecosystem surrounding them. And all of these are helpful in this regard, but how would that apply to a biologist or a computational biologist in that sense? 
And the first thing that we can actually look at is building ergonomic abstractions and layers on top of this. Now, due to the fact that we had multiple tools to work with, the initial starting point of our translation was about 3,000 lines of logic, configuration, and command line parsing. And some of it was easier than other areas, and all of it was generally not trivial to translate. And projects grow in size, and as with this one, so did the size of what needed translation. What originally started as about 3,000 lines of CLI parsing with a few tools, growing from about 3 to 4 to 5 to 6 to 10 different tools, began to add more arguments and more configuration. And so what originally started as about 3,000 lines of code at the beginning then became more than 10,000 lines of code to migrate over, translate, and unify in the end. Thankfully though, many of the options that we can translate at the level of Rust have both high level structures and primitives that are generally synonymous. And when they aren't, and when we want to configure further, we have macros, which enable us. This is one example of a direct translation, where we take not only basic configuration values like verbosity, and we have flags for that, but we can also take subcommands and other options (a rough sketch of this kind of definition follows below). And if something is not relevant to the functionality or the abstraction we want to define right now, we can skip it. So thanks to crates like structopt and pico-args, what was collectively about 10,000 lines of mixed logic, configuration, and options ended up condensing by about six times, plus more for documentation. And in the end, what we got was quite a sizable difference. As you can see, in proportion, this is what would have been the size before and what would have been the size after in screenshots. But we don't just want an abstraction layer; we want to be able to interact with the tools that already exist and the tools that have already been made around us. And for that we need interoperability. By the end of our initial abstraction and the command line layer, the resulting project looks a little bit like Rust on top, with bindings to C++ and scripts almost all the way down. And thanks to a few crates like cmake or cxx by David Tolnay, we're able to actually build a very systematic and almost self-contained structure for interacting with our files. We can take in the files, we can parse them, and we can send them off for processing, whether it be C, C++, Python, makefiles, Perl, and other analytic languages. And we can finally destructure or serialize that data and bring it into further analysis for other pipelines. And many times we should leave the abstraction layer as it is. There's no reason to go further unless we have precedent. And sometimes that precedent is very large. To give a sense of scale, there are easily 10 to the 16th to 10 to the 17th bases in some of the more popular public databases for RNA-seq and its associated data. That's more than 100 petabytes, or several hundred thousand terabytes, or several hundred million gigabytes, or several hundred billion megabytes. And it's continuing to grow over time. When data can not only grow over time but can grow orders of magnitude in size just from the process of a single step in the pipeline, performance does matter. 
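Before moving on to the pipeline itself, here is a rough sketch of the kind of structopt translation described above. The tool name, subcommands, and flags are all invented for illustration; the real project's interface is different:

```rust
use std::path::PathBuf;
use structopt::StructOpt;

/// A hypothetical unified front end over several RNA-seq tools.
#[derive(Debug, StructOpt)]
#[structopt(name = "rnaseq")]
struct Opt {
    /// Verbosity; pass -v multiple times for more detail.
    #[structopt(short, long, parse(from_occurrences))]
    verbose: u8,

    #[structopt(subcommand)]
    command: Command,
}

#[derive(Debug, StructOpt)]
enum Command {
    /// Build an index from a reference transcriptome.
    Index {
        /// Path to the reference FASTA file.
        #[structopt(long)]
        reference: PathBuf,
    },
    /// Quantify expression from sequenced reads.
    Quant {
        /// One or more FASTQ files of reads.
        #[structopt(long)]
        reads: Vec<PathBuf>,
        /// Number of threads to use.
        #[structopt(long, default_value = "1")]
        threads: usize,
    },
}

fn main() {
    // Parse the command line (and generate --help) from the type definition.
    let opt = Opt::from_args();
    println!("{:#?}", opt);
}
```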
So, in a way, we can actually think about sequencing and the general process of analysis in three distinct steps, where we read the information, we parse the data, we map and align and parallelize operations, and we analyze and export the data that we need for further analysis. As for parsing, Rust has a very strong track record for parsing, whether it comes to crates like nom, lexers like logos, or pest, and so on. We can see that Rust actually has the ability to handle not only long strings and sequences, but bigger structures as well. And structured data is sometimes the thing that we most need. If sequence data were the simplest we could possibly conceive, we would have a continuous stream of fragments of bases joined together continuously. And realistically, we require more than just a continuous stream. We require more information than that, and we require structure around it. Now, how does that structure look? One example would be the FASTQ format, where we take in not only the sequence information, which is crucial to our analysis, but also the identifier, which is the identification of what sequence we're looking at, and the quality scores, so we know how well, or how erroneously, this was actually sequenced. And we can continue to process it further. And this is not the only format that is viable for computational biology and bioinformatics in RNA sequencing. We actually have quite a few, whether it's FASTA for the genome and transcriptome, FASTQ for our experimental fragments, GTF or BED for our annotations, or files that can contain the alignments that we have calculated or processed. And parsing in Rust is not just a general feat; we actually do need some features specific to biological file formats sometimes. And we can actually measure this information. Thanks to a professor, Heng Li, at Harvard, we have been able to quantify some of these basic benchmarks for common analyses and parsing. Here we actually see the actual times it takes, where, first and foremost, Rust comes in and we can actually count the number of sequences and the quality thereof contained in these FASTQ files. And if we actually take a closer look at how we can use this information, we can see that it's not that different from very simple or normal Rust code by the time it reaches the biologist. What we have is a reader and a record, and once we take in the buffered file, we can continue to loop over it and continue to print different sequence data and quality data. And the same goes for the FASTA benchmark; another version is very similar, where we parse from our file and we continue to take in new record information until we finish all the sequences. 
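A minimal, std-only sketch of that reader-and-record loop. Real pipelines would typically reach for a dedicated crate (such as bio or needletail) and handle compression and malformed input; the file name here is just a placeholder, and the input is assumed to be well-formed four-line FASTQ records:

```rust
use std::fs::File;
use std::io::{self, BufRead, BufReader};

#[derive(Debug)]
struct FastqRecord {
    id: String,
    sequence: String,
    quality: String,
}

fn main() -> io::Result<()> {
    // Placeholder path; a real tool would take this from its CLI.
    let reader = BufReader::new(File::open("reads.fastq")?);
    let mut lines = reader.lines();

    // FASTQ stores each read as four lines: @id, sequence, +, quality.
    while let Some(header) = lines.next() {
        let id = header?.trim_start_matches('@').to_owned();
        let sequence = lines.next().expect("missing sequence line")?;
        let _separator = lines.next().expect("missing separator line")?;
        let quality = lines.next().expect("missing quality line")?;

        let record = FastqRecord { id, sequence, quality };
        println!(
            "{}: {} bases, {} quality scores",
            record.id,
            record.sequence.len(),
            record.quality.len()
        );
    }

    Ok(())
}
```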
And at the end of that test, the tools built around fast heuristics gave us reasonable answers in less than half an hour, some of them even within 10 minutes. But others, which relied purely on accuracy, or which had been made more accurate at the cost of resources, ran out of memory as we tried to get a proper answer out of them, even with those large computing resources. So the commonality between these tools is that parallelism and efficiency with our time and our memory are no longer optional in RNA-seq processing; it's an assumption of the field. Most computers nowadays, even personal computers, let alone compute clusters, have more than one core. So to actually work with these tools and to incrementally translate them into a language like Rust, I had to call in the experts: we defer expertise to the designers of these Rust systems and the community more at large, who can optimize and work with these systems at a very fundamental level. And the response included the standard library, actually. The standard library is cohesive and extensive to the point where we can get atomics, threading, and streams together in a fashion that's actually accessible, both in its documentation and in how it fits with other parts of the Rust ecosystem within crates. And beyond that, we also have actual parallelism libraries, such as rayon, where we can take normally sequential data, place it into iterators and into transformations, and parallelize the operations naturally and easily. And in the end, the data that we process, no matter how fast, no matter how much we parse, needs to go somewhere. It needs to be analyzed further. And sometimes Rust is not the only answer to a problem. There's a diverse ecosystem of languages and tools out there, whether for scripting, for systems, or for pipelines. And in the end, it boils down to the fact that biologists are not software engineers. We certainly don't want to rewrite the world in Rust, and there's actually a lot out there to gain from. What kind of language we want to work with is less a question of what do I want to stick to, and more a question of what can I connect with and interoperate with. Classics of bioinformatics and computational biology, especially for RNA sequencing, include C, C++, and Fortran-powered systems, and other languages such as Java, Perl, and analysis in R. Newer languages are also cropping up, such as Python, Julia, Go, JavaScript, and other scripting languages. And there are some languages you may have never heard of before, such as Futhark or Seq. So while there's certainly overlap between biologists and software engineers, the end goal is different. Biologists write software to best enable biology. And the tools that exist and the tools that we can connect are the tools that we're going to use. So what biologists, and scientists more generally, can take away from good quality software is reusability, composability, and interoperability. 
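The rayon example from the slides isn't captured here. As a hedged sketch of the kind of drop-in parallel iteration being described, here is a toy GC-content computation of my own (not from the talk); the only change from a sequential version is `par_iter` in place of `iter`:

```rust
use rayon::prelude::*;

/// Fraction of G/C bases across many reads, computed in parallel.
fn gc_content(reads: &[Vec<u8>]) -> f64 {
    let (gc, total) = reads
        .par_iter()
        .map(|read| {
            // Count G and C bases in this read.
            let gc = read.iter().filter(|&&b| b == b'G' || b == b'C').count();
            (gc, read.len())
        })
        // Combine the per-read counts from all worker threads.
        .reduce(|| (0usize, 0usize), |a, b| (a.0 + b.0, a.1 + b.1));
    if total == 0 { 0.0 } else { gc as f64 / total as f64 }
}

fn main() {
    let reads = vec![b"ACGTGC".to_vec(), b"GGGCCC".to_vec(), b"ATATAT".to_vec()];
    println!("GC content: {:.2}", gc_content(&reads));
}
```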
And really, those three, reusability, composability, and interoperability, interact in a way where we can get stable software that doesn't need to change, that we can build upon and extend, and that we can interact with at the level of different languages, such as scripting languages or systems languages. The lesson we can take away from Rust and biology, in the face of parsing, parallelization, sequencing, processing, and analysis, is that there is actually a very kind and extensive ecosystem, both in the tools that Rust gives us and in the communities that have grown up around Rust. This includes Cargo, where we have an actual build tool, similar to pip, Snakemake, or CMake, all brought together and cohesive in the sense that you can test, build, run, and compile all these different tools and crates together. And we have a crates ecosystem where, if we know the Rust code compiles, we know that it will compile everywhere that Rust runs. In that sense, we can continue to build upon different crates, tools, and libraries on the assumption that they work across platforms. And when something is not available in this ecosystem, and when something is so domain-specific that we really need a tool from somewhere else, we not only have FFI and communication with other languages at a fundamental level, but we have tools and crates to abstract over this and to get a safe layer of ergonomic code that we can seamlessly transition between. So what becomes the next step for Rust and biology together? Well, the impact of Rust in the biological ecosystem is that we get bridging at the level of languages. As we've seen before in different benchmarks, tools, and toolkits, we have a plethora of languages at our disposal, some of them scripting languages, some of them web languages, some of them systems languages. And Rust really sits at the heart of the ability to take that information not only at the level of simple C FFI and C bindings, but also at the level of safe abstractions to interpreters, to different compile targets, and to different information flows. And we can work with the community at large to continue to build these tools. Not only does Rust enable languages and different language tools, it enables the community to build tools around it to reuse, to extend, and to interact with Rust and the languages around Rust at an equal and bilateral level. And so we can interact with different software in Rust, not only in terms of the software itself, but also at the level of exchange from one software engineer to another, from one scientist to another, and build a community of mentorship for both science and software, and build a larger picture. The biggest asset of the Rust programming language going forward may not just be the language itself, but also its community. And the community mentorship model is what biologists can continue to take and learn from Rust beyond the language, even as they go further. Thank you for joining this talk. I hope you enjoyed it. We'll make further information available. And if you'd like to read more, whether on biology and how Rust plays in, or on computational biology and the different methods and algorithms used to work with it, feel free to contact me. And feel free to look at the slides. Okay. Thank you.
|
Ever wondered what goes on behind the scenes of breakthroughs in understanding proteins, viruses, our own bodies, and more? Take a deep dive as we journey through some of the workings of computational biology at large, along with its advantages and pitfalls. In this talk, we will see how Rust bridges the biological sciences with safe, performant, and scalable systems, and discuss how you can play a role even as a fresh Rustacean.
|
10.5446/52198 (DOI)
|
Oh, hi there. I didn't see you. Well, I'll tell you a story. Twenty years ago, in the year 2000, a game came out for the Nintendo 64 called Hey You, Pikachu. For Christmas that year, I was hoping to get the special Pikachu Edition N64, but my mom told me that in a few years I wouldn't even still be into Pokémon. Here I am, twenty years later, still playing Pokémon, and today I want to share that love with you. And mom, this talk is for you. Hey everybody. My name is Sean Griffin. Text encoding is hard, so sometimes it's spelled like that. My pronouns are they, them. Let's talk about Pokémon. The first Pokémon game was made by a small team for Japanese audiences. The game was made on a tiny budget, and the programming team was only four people. In 1996, Pokémon Red and Green were released, and sales vastly exceeded expectations. Later that year, an updated version was released in Japan with improved graphics and more polish. It was clear that this game was far more popular than anybody expected, and there was a mad rush to localize it for international audiences. Two years later, in 1998, Pokémon Red and Blue were released to the rest of the world and would go on to be the highest grossing media franchise of all time, eclipsing even Mickey Mouse and Hello Kitty. In fact, it became so popular that even if you've never played Pokémon, I'd wager you've seen this one before. This is an actual picture of Ryan Reynolds from 1998. Okay, not really. This is Pikachu, by far the most famous Pokémon, but there was a close second. This is MissingNo. MissingNo. is a glitch Pokémon, and you can only encounter it through a glitch, but the thing is, everybody knew about this. One of the things I find so fascinating about MissingNo. is just how widespread it was. In a survey I ran, 87% of people who owned the game knew about the glitch when it was relevant, and 80% of those people heard about it through word of mouth, not the internet. And there was a good reason. MissingNo. could duplicate items. Now, this glitch went by a lot of names: the MissingNo. glitch or the item dupe glitch. At my school, it was called the Rare Candy glitch, since most people used it to duplicate an item with that name. It made your Pokémon more powerful whenever you used it, so it was a really desirable item to duplicate. Let's take a look at how you performed the glitch first. We're going to start off in Viridian City, one of the earliest areas in the game, and we talk to this old man. He's going to ask if we're in a hurry, and we're going to tell him no. In response to that, he's going to say, oh cool, why don't I show you a tutorial about how to catch Pokémon? And he's going to go into this battle and find a Weedle, and he's going to attempt to catch the Weedle. We actually are in a little bit of a hurry, so we're not going to sit through and watch this. Next, we need to fast travel to an area called Cinnabar Island. Now, this being the Generation 1 Pokémon games, the way fast travel works is we transform into a bird and just fly away. Once we get there, we want to go onto this water, so we're going to do that by opening up the menu, and, this being the Generation 1 Pokémon games, we're going to transform into a giant seal thing. I don't really know. We go up and down this coast, and eventually we'll get a wild encounter. If you played these games, you might notice this pause right here is way longer than it's supposed to be, and we're going to go into why that is a little later on. 
So here we see our friend MissingNo. It's level 168, which is higher than you're supposed to be able to encounter in the game; the maximum level is 100. We're going to immediately run away from MissingNo., open up our inventory, and when we go down to the sixth slot in our inventory, which is where I had the Rare Candies, we'll see that I now have "flower 2". I had four Rare Candies in my inventory before this started, and this clearly was not meant to render numbers larger than 99, so I guess the way it renders 13 is a flower, because 4 plus 128 is 132. If you've never seen this glitch before, this probably seems like an extremely random sequence of events for such a specific outcome. And it is. But let's break down each piece of this. As with most major glitches, there's no single bug that's responsible here. This happens because of a bunch of different bugs, and in most cases you can't even really call them bugs, just properties of the code being used in unexpected ways. Now, I'll state up front, I did not work on this game, nor have I interviewed the programmers who did. I have spent a lot of time looking at disassemblies of the game, and I think we can infer a lot about what was intended from reading the code and knowing about the constraints that they worked under. But I want to make it clear that a lot of this is speculation. With that out of the way, let's start going through each of the pieces of this glitch. I'm going to go through them in the order that I think they were initially discovered. So the first thing you might be wondering is, what's up with those coast tiles? Why do we go to that spot specifically and go up and down the coast? In the Pokémon games, there's sort of a grid system that the player occupies. This is one tile that the player is standing on, and they can move one tile up, down, left, right, et cetera. This is that same tile when we remove the player from it. Now, even though from a gameplay point of view this grid system uses single tiles, the code actually sees the game a little more fine-grained than that. It sees this tile as four sub-tiles. And for the tile the player is standing on, these are the coordinates: the upper left sub-tile of where the player is standing is coordinate 8, 8, and the bottom right is coordinate 9, 9. Now, whenever you're moving along these tiles in the game, it's going to continuously be checking to see if you can encounter a wild Pokémon. There are two main ways that you can encounter Pokémon in the first generation of games: you could be surfing on water, or you could be walking through tall grass. There was also fishing, but it worked completely differently and is unrelated to this glitch, so we're just going to pretend it doesn't exist. This is what the code for that looked like. The first thing we're going to do is load up the tile at 9, 9, so the bottom right sub-tile of where the player is standing. Then we're going to check to see what kind of tile it is. We need to figure out how likely we are to run into a wild Pokémon. If the tile is grass, we're going to use the grass encounter rate for the current area. If the tile is water, we're going to use the water encounter rate for the current area. If it's neither, then the player cannot have an encounter with a wild Pokémon here, so we just exit out. Then we actually need to compare this to a random number generator (which isn't actually random, but we're not going to get into that today) to determine if we actually get an encounter. 
I've left that code commented out here because it's not actually relevant to why this bug occurs. Next they're going to load up the tile the player is standing on again, but this time they're loading the bottom left sub-tile. Then we're going to determine which kind of Pokémon they can encounter. If the tile is water, we're going to select one from the list of water Pokémon for the area; otherwise we're going to select one from the list of grass Pokémon for the area. Because they load up the tile the player is standing on twice, and the second time they use a different sub-tile, whenever we're on a tile that looks like this, where the right side is water and the left side is land, every time it's doing one of these checks, it thinks we can do an encounter because we're on water, but because the bottom left sub-tile is not water, it's going to load up the grass Pokémon. When we look at this bug in Rust, I think this is the easiest bug that we're going to look at today to scoff at and say this should have been caught in code review, or I wouldn't have written this. It really does stand out in Rust. We're assigning the same variable twice, and here it's on the same screen with different numbers. This just sticks out like a sore thumb to me. But they didn't write this program in Rust, they wrote it in assembly. And Rust loses some of the nuance of what happened here. You wouldn't have written this code in Rust in the first place. There's absolutely no reason that you would have assigned tile a second time. But in assembly, you don't just have however many local variables you want. When you write a program in Rust, the compiler is going to determine where to store every variable that you write. It's either going to assign it to a register, sort of like a global variable that your CPU uses, or it's going to put it on the stack. The Pokémon games did have a stack, but it was really tiny, only 207 bytes. So they basically never used it unless it was absolutely necessary. The main place it was used was for audio playback. Now, in this bit of code that I commented out, this is where a lot of the context gets lost. First of all, these two lines just don't appear on the same screen. In the assembly version of this, they're actually about 50 lines apart, so you wouldn't see them both at the same time. That alone, to me, makes it much more reasonable that this would have just slipped through code review. If I can't see both of these at the same time, I'm much more likely to just not spot that. Now, the Game Boy used a variant of what's called Z80 assembly, and we're not going to go too much into the minutiae of what that means. What's important about the differences between various assemblies is that different types of assemblies have a different number of general purpose registers, the sort of global variables that your CPU can use for literally anything. And on the Z80, they only had four registers that were truly general purpose. And in this code that I commented out, they used all of them. So they had to load up the tile again. They could have stuck it on the stack maybe, but you don't want this to be the one place where, oops, we don't have enough stack anymore. So they just reloaded it. And that seems perfectly reasonable to me. Frankly, if you had to work under these constraints, could you write your whole program with only four global mutable variables and nothing else and avoid bugs like this? I sure as hell couldn't. 
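The Rust translation shown on the speaker's slides isn't captured in this transcript. Here is a hedged reconstruction of the double tile load just described; the type and function names are mine, and the species values are arbitrary placeholders rather than real game data:

```rust
#[derive(PartialEq, Clone, Copy)]
enum Tile { Grass, Water, Other }

struct Area {
    grass_rate: u8,
    water_rate: u8,
    grass_table: [u8; 10],
    water_table: [u8; 10],
}

// `subtile(x, y)` looks up a sub-tile of the tile the player stands on:
// (8, 8) is the top left and (9, 9) is the bottom right.
fn try_wild_encounter(subtile: impl Fn(u8, u8) -> Tile, area: &Area) -> Option<u8> {
    // First load: the bottom-RIGHT sub-tile (9, 9) decides whether an
    // encounter can happen at all, and which encounter rate applies.
    let _rate = match subtile(9, 9) {
        Tile::Grass => area.grass_rate,
        Tile::Water => area.water_rate,
        Tile::Other => return None, // no encounter possible here
    };

    // ... RNG check against the rate elided, as in the talk ...

    // Second load: the bottom-LEFT sub-tile (8, 9) decides which table the
    // Pokémon comes from. On a coast tile (water on the right, land on the
    // left) this disagrees with the check above, so a water-triggered
    // encounter rolls on the grass table.
    let table = if subtile(8, 9) == Tile::Water {
        &area.water_table
    } else {
        &area.grass_table
    };
    Some(table[0]) // slot selection elided
}

fn main() {
    let area = Area {
        grass_rate: 20,
        water_rate: 5,
        grass_table: [0x01; 10], // arbitrary species ids for the sketch
        water_table: [0x02; 10],
    };
    // A coast tile: right half water, left half land.
    let coast = |x: u8, _y: u8| if x == 9 { Tile::Water } else { Tile::Other };
    // Triggered by water, but drawn from the grass table.
    println!("{:?}", try_wild_encounter(coast, &area));
}
```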
OK, so the end result of all of this is that on these tiles specifically, the game tries to have us encounter a wild Pokémon because it's water, but we instead encounter grass Pokémon. But that's not by itself particularly useful. So let's move on to the second piece of this. Whenever you enter a new area, the game has to load up the encounter tables for the area that you're currently in. There's one spot in memory that just holds the current area's grass encounter information. So the first thing it's going to do is grab how likely you are to encounter grass Pokémon in the current area, and it's going to grab that from some global section of ROM. I've made that a constant here in the Rust translation. Then it's going to check if that number is greater than zero, and if it is, it's going to copy the encounter table over. And then it does the same thing for water. What's important here though is what happens when the grass encounter rate is zero. When it's zero, we just don't do anything. It doesn't zero out the table, doesn't replace it with some dummy data. It just leaves whatever was there before. So this means that in these areas with these coast tiles, as long as that area doesn't itself have grass Pokémon, we can use this to encounter grass Pokémon from other areas. And that's why we specifically do this glitch on Cinnabar Island as opposed to anywhere else in the game that has these coast tiles with the land on the left side. Because we can fast travel to any town and Cinnabar Island is a town, it's easy for us to fast travel there and get to this coast in particular without ever passing through an area with grass Pokémon. So that means as long as we can use the game's fast travel system, we can use this glitch to encounter the grass Pokémon from any other area. Now, that by itself isn't necessarily the most useful thing in the world. It was used for something very specific. There's a place in the game called the Safari Zone where there's a bunch of really rare Pokémon that you kind of want to get, but it uses its own special encounter mechanic that was really annoying and everybody hated. But with this glitch, you could just go into the Safari Zone, travel to Cinnabar Island, and go along these coasts, and then you would be able to encounter the Pokémon from the Safari Zone with the normal wild Pokémon encounter mechanics. This was really useful, and while it wasn't as widespread as MissingNo., a lot of people did know about it, and this was called the Fight Safari Pokémon glitch. But we don't just want to catch Safari Zone Pokémon. Like, that's cool and all, but we want more. We want 128 Rare Candies. If we're going to do that, we need to do this glitch when the grass encounter table contains information that isn't a real Pokémon encounter table. Now, when this glitch was originally discovered, it actually wasn't done the way I demonstrated it to you. It was done by trading with an NPC in the lab on Cinnabar Island. NPC stands for non-player character. The Pokémon games have a system where you can send a Pokémon to another trainer and get one back in return, and there are a few NPCs who will do this without you having to have any friends in real life. Now, this game was really constrained on memory, and everything had a very specific spot in memory where it was stored, and oftentimes those addresses were reused, and this is one of those cases. Whenever you traded with another player or an NPC, that person's name was stored in the same spot as the grass encounter table. 
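The Rust translation from the slides isn't in the transcript; here is a hedged sketch of the load-on-area-change behavior just described. The struct and constant names are mine, and the table contents are placeholders:

```rust
const NUM_SLOTS: usize = 10;

#[derive(Clone, Copy)]
struct EncounterTable {
    rate: u8,
    slots: [(u8, u8); NUM_SLOTS], // (level, species id) pairs
}

struct Wram {
    grass: EncounterTable, // lives at a fixed spot in RAM
    water: EncounterTable,
}

fn load_area(wram: &mut Wram, area_grass: &EncounterTable, area_water: &EncounterTable) {
    // Key detail: a rate of zero means "this area has no grass Pokémon",
    // and in that case the old table is simply left in place. It is never
    // zeroed out or replaced with dummy data.
    if area_grass.rate > 0 {
        wram.grass = *area_grass;
    }
    if area_water.rate > 0 {
        wram.water = *area_water;
    }
}

fn main() {
    let none = EncounterTable { rate: 0, slots: [(0, 0); NUM_SLOTS] };
    let mt_moon = EncounterTable { rate: 10, slots: [(8, 1); NUM_SLOTS] }; // placeholder slots
    let mut wram = Wram { grass: mt_moon, water: none };
    // Entering a grass-less area: the previous grass table survives untouched.
    load_area(&mut wram, &none, &none);
    assert_eq!(wram.grass.rate, 10);
    println!("stale grass table still loaded, rate {}", wram.grass.rate);
}
```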
So to see why this is useful to us, let's look at how encounter tables are actually stored in memory. Like I said, everything in this game just has a very specific address, and the address for the grass encounter table for the current area the player is in is 0xD887. The 0x means it's hex notation. Now, this is a table with 10 entries. The first byte is the encounter rate, how likely you are to run into a wild Pokémon. The lower this number is, the more likely you are to have an encounter. Then, for each of these 10 slots, we have a pair of two bytes. The first byte is the level of the Pokémon in that slot and the second byte is the ID of the Pokémon in that slot. And each of these slots has a fixed percentage chance of being picked. So the first two are about 20%, the next one's about 15%, and then about 10%, and so on and so forth. So let's take a look at what a real encounter table looked like. At address 0xCFA3, you'll find the encounter table for an early game area called Mount Moon. So we copy over the encounter rate, 10 is really low so we're likely to run into a lot of Pokémon, and then the three most common Pokémon that you'll run into are the Pokémon Zubat, and then after that, that looks like a rocky Zubat to me, I don't know. So then we finish once we've copied over this whole encounter table. Actually, for this one in particular, something might stand out to you. It's all Zubat! I don't know, I guess the game developers were like, hey, should we maybe put some Pokémon here, and they're like, oops, nope, all we've got is Zubat. It's all we've got, sorry. And so kids would go through this area, and every two steps they would see another fucking Zubat, and they would go to sleep, and in their dreams all they could see were these hordes of Zubat, and they would hear... Actually I'm sorry y'all, can you hold on one second? Hey! Hey! Have you not seen the news? There is a pandemic going on outside. Come on! Thank you! Sorry about that, Zubats, I swear. Alright, so as I said, once we've got the encounter table copied over, this is what it looks like for Mount Moon. But of course we don't want to run into a lot of Zubats, because that would just make me sad. So let's look at what happens when we copy over this trainer's name. Now, all the NPCs you can trade with in the game are, for some reason, just named Trainer and nothing else. So we're going to look at what happens when you copy over the string TRAINER. Now, Pokémon Blue used its own custom text encoding, so TRAINER is actually only a single byte: there is a control character that means print the word TRAINER. And then the end-of-name marker control character is 80 in decimal. So we copy this over. We copy over TRAINER, which is the first byte, and that just gets ignored because we're using the water encounter rate anyway. And then 80 happens to be the ID of MissingNo. And so this is a really great way to do the glitch, because you get an encounter table that's just all MissingNo., all level 80 MissingNo. specifically. And it's great, but the problem with doing it this way is that when you trade with an NPC, you can only do it once. And once you've traded with that NPC, you can never do it again. So you can encounter a lot of MissingNo.s this way, but as soon as you go to an area with grass Pokémon, you're not going to be able to do the glitch this way again. So we want something that we can actually repeat, because we don't just want 128 Rare Candies. We want 128 Rare Candies as many times as we want, because Rare Candies are delicious. 
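As a hedged sketch of that 21-byte table layout (the parsing helper is mine, and the example bytes are arbitrary stand-ins rather than real game data), interpreting a buffer that has had a name copied over it might look like this:

```rust
// Split the raw grass-table bytes into the rate and the ten
// (level, species id) slots described above.
fn parse_table(buffer: &[u8; 21]) -> (u8, Vec<(u8, u8)>) {
    let rate = buffer[0];
    let slots = buffer[1..]
        .chunks_exact(2)
        .map(|pair| (pair[0], pair[1]))
        .collect();
    (rate, slots)
}

fn main() {
    // When a name is written over this buffer, its first character lands in
    // the rate byte (which is ignored here), and the following characters
    // land alternately in level and species positions. That is why the
    // characters of a name decide what you encounter, and why the 0x50
    // end-of-name marker (80 decimal, a MissingNo. id per the talk) lands
    // in a species slot exactly when the name has an even number of
    // characters.
    let mut buffer = [0u8; 21];
    buffer[0] = 0xA7; // stand-in for the first character of a name
    buffer[1] = 0xA8; // becomes slot 1's level
    buffer[2] = 0x50; // becomes slot 1's species id: 80, a MissingNo. id
    let (rate, slots) = parse_table(&buffer);
    println!("rate = {:#04x}, slot 1 = level {}, species {}", rate, slots[0].0, slots[0].1);
}
```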
So that's where the old man glitch comes into play. Now, it's called the old man glitch because, well, his name is OLD MAN. And really, what his name specifically is isn't that important. What's important here is that he has a name. Because this game was optimized for code size, they didn't want the code to be too large on the cartridge, or they might have had to double the size of the ROM available to them on the cartridge. And given the budget they were working under, they just couldn't afford to do that. So everything was optimized for code size. So this tutorial could have been implemented as a jump to a completely separate piece of code that goes through the tutorial, or you could do what they did, which is have it go through the normal battle code and just add three or four conditionals to the battle code along the lines of: hey, is this the tutorial? If so, don't accept player input here. Now, the problem with doing that is there's code in there that does things like print the player's name. But since the player is not in control here, we don't want to print the player's name, we want to print OLD MAN. So they need to copy OLD MAN over to where the player's name is stored. Which means they need to store the player's name somewhere. And where do they decide to do it? You guessed it: the grass encounter table. So for this demonstration I set my name to hi Sean, and let's take a look at what happens when we interpret that as an encounter table. So the H is going to get copied over into the rate, and that just doesn't matter, again. And then i is 168, and hey, that's a MissingNo. And then another 168, and hey, another MissingNo., and then we've got level 174, and wouldn't you know it, they're all MissingNo. It's almost like I specifically picked a name that only had MissingNo. characters in it to make it easier to demonstrate the glitch to y'all. What a coincidence. Now of course, each of these MissingNo. came from a different letter in my name. Which is weird, because those are different IDs, and so therefore they should be different Pokémon; they shouldn't just all be MissingNo. Let's talk about what even is a MissingNo. Contrary to what you might think, it's not a single Pokémon. There are actually 39 distinct Pokémon which are called MissingNo. And it's not just reading garbage data, even though its sprite is clearly garbage data. It has a well-defined name: it's printing MissingNo. there, not just random garbage. And it's not like the code could say, hey, is this garbage, and if so print MissingNo. You can't really detect what garbage data is at run time, so this is clearly in a table somewhere that it's expecting to print this word from. And a lot of the other attributes it has are well defined as well. To understand why some of its attributes are garbage but others aren't, we need to see how Pokémon are stored in the code. When most people think of a list of Pokémon, they think of the order that they appear in the Pokédex, the in-game encyclopedia. Every Pokémon has a number associated with it, and they're loosely ordered in the order that you would encounter them in the game. But that's not how they're stored in the game. When most people think of Pokémon number one, they think of Bulbasaur, because that's the first in the Pokédex. But actually, the Pokémon with the ID of one is called Rhydon. In the code, most of the data related to the Pokémon is stored in the order they were originally created. The game was supposed to ship with 190 Pokémon at one point in development. 
40 of those got either cut or saved for another generation, and then one got added at the very last second. And so MissingNo. is what's stored in the slots where the cut Pokémon were supposed to be. For the most part, those entries where MissingNo.'s data lives are just zeroed out. It's all zeros. There are some exceptions, like its name, but for example its Pokédex ID is zero. Its cry, the sound that it makes when you encounter it, is almost always zeroed out. So for anything that's ordered by internal ID, we're going to get well-defined but zeroed data. But for anything that's stored in Pokédex order, we're going to get garbage. Let's look at why that is. One of the things that's stored in Pokédex order is the Pokémon base stats table. This includes things like its attack, defense, and HP, what it can evolve into, what moves it can learn, and, importantly, it also contains the pointer to the sprite. So the first thing the game is going to do is look up the Pokémon's Pokédex number from its internal ID, and for MissingNo. that's zero. Now, because Pokédex numbers start at one, they subtract one from that to turn it into an index into an array. This is an unsigned integer, so this is going to underflow and we're going to get 255. Or you could think about this like we were trying to get the Pokédex entry for a hypothetical 256th Pokémon. Now, the array that we're indexing into only has 151 elements in it, so we read way past the end of the array. And in the case of MissingNo.'s sprite, where it ends up reading is the middle of some data for NPC trainer parties in an area called Route 17. And when you interpret that as a pointer, it points to some code related to the Safari Zone. And so MissingNo.'s glitched-out sprite is what you see when you interpret that code as if it were an image. But most data in the game isn't stored in Pokédex order. That's the exception, not the rule. Ironically, the Pokédex itself is one of the things that isn't stored in Pokédex order. So MissingNo. even has a valid Pokédex entry. Well, almost. You can see it's got a name and a description and a height that looks like a placeholder, but then the weight is just sort of a random number. This entry wasn't localized, and the structure of Pokédex entries in the Japanese version of the game was a little bit different than the English version. If we look at the Japanese version, though, we can see, oh yes, there's clearly valid data here for this version of the game. It's the question mark question mark question mark Pokémon. Its weight is 10. Its height is also 10, because height was in decimeters for some reason. And then that description translates to comment to be written. There are some differences between the different MissingNo.s, though. In fact, a lot of them have unique data. The cry that a Pokémon has, the sound it makes when you first encounter it, stores three bytes: one that's just sort of the base sound, and these are shared between multiple Pokémon, then another byte for a pitch adjustment, and then another byte for speed adjustment. And nine of the MissingNo.s have cries that aren't zeros. And a few of those actually have base sounds that aren't heard anywhere else in the game, and this supports the idea that some of these MissingNo.s are in fact Pokémon that were just cut during development. 
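Here is a hedged sketch of the index arithmetic described earlier in this passage; the table size comes from the talk, the contents are stand-ins, and the helper function is mine:

```rust
fn main() {
    let internal_id_missingno: u8 = 0;
    let pokedex_number: u8 = pokedex_number_for(internal_id_missingno); // 0 for MissingNo.

    // Pokédex numbers start at 1, so the game subtracts 1 to get an array
    // index. On an unsigned byte with no checks, 0 - 1 wraps around to 255.
    let index = pokedex_number.wrapping_sub(1);
    assert_eq!(index, 255);

    // The base stats table only has 151 entries. The Game Boy has no bounds
    // checking, so index 255 just reads whatever bytes sit past the end of
    // the table. Safe Rust refuses instead of handing back garbage.
    let base_stats = [0u8; 151];
    let entry = base_stats.get(index as usize);
    assert!(entry.is_none());
    println!("index {} is out of bounds for a 151-entry table", index);
}

// Stand-in for the game's internal-id to Pokédex-number lookup.
fn pokedex_number_for(internal_id: u8) -> u8 {
    if internal_id == 0 { 0 } else { 1 /* real table lookup elided */ }
}
```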
There are also a few places in the game where they needed to display a sprite as if it were a Pokémon, but the sprite they wanted to display isn't associated with any real Pokémon, and so some of the MissingNo. entries are where they store those sprites. These will only show up if you had a lowercase w, x, or y in your name, though, so most folks never saw these. And this is really important: which version of MissingNo. you saw was based on your name. This is also why, if you did this glitch, in addition to MissingNo. you would see some high-level real Pokémon. Printable characters in Pokémon Blue's text encoding start at 128, sort of the opposite of ASCII. So no matter what your name was, the characters that would end up in the level spots for the encounter table would be higher than you're supposed to be able to reach in the game. They would always be higher than 100. You could also get some glitch trainer battles this way, but those would only appear if you had punctuation in your name, so most people were unaware of that. I certainly had no clue that that was a thing until I started doing research for this talk. Now, you might be asking, if the encounter table was based on your name, why could everybody do this glitch? Surely it would be possible to have a name that didn't map to MissingNo. at all. This is sort of true: it was possible to have a name that didn't include MissingNo. But even if that was the case, you could still get 128 Rare Candies. And it was pretty unlikely that you would have a name that didn't include MissingNo., for a few reasons. The control character used for the end of your name was stored as 80 in decimal, which is one of the IDs of MissingNo. So if your name was an even number of characters, you could always encounter a MissingNo. And a lot of players didn't even pick their own name; they just used one of the preset ones the game offered you. By pure luck, every single one of these names has the right characters for a MissingNo. encounter. Except it wasn't even luck, because the MissingNo. characters were really common. They included uppercase S, H, and M, and most lowercase vowels. So the odds of you having one of these in the right place were really high. But even then there was a catch-all. Every custom name could at least encounter MissingNo.'s sister Pokémon, Tick-M. And we call it Tick-M because those are the only characters in its name that you can actually say. Now, even though they have the same sprite, Tick-M is different. As you can probably tell from the weird characters in its name and its decision not to wear a mask, everything about Tick-M is garbage. The graphics that appear in its name are going to be based on things like your party stats or your position on the map. Tick-M is what you get for internal ID zero. So you're going to get garbage even for data that isn't in Pokédex order. Even when it's looking up by internal ID, it's going to underflow and read past the end of whatever data it's trying to read. Now, Tick-M had some interesting differences from MissingNo. Its cry, being garbage data, would randomly change based on what screen you're on. It could evolve into Kangaskhan. So, you know, I guess this is what a baby Kangaskhan looks like. You could also lock up your game by catching it. But if your goal was just to get 128 Rare Candies, it didn't matter if you saw a MissingNo. or a Tick-M. So now let's talk about why the sixth item in your inventory gets duplicated. This has to do with what happens after you encounter a Pokémon. 
This all comes back to that Pokédex that we mentioned earlier. Its function in the game is to keep track of every Pokémon you've seen or caught. Any Pokémon that appears on this list is one that you've seen before, and the little ball icon next to its name means it's been caught. This is stored in memory as a bitmap, one bit per Pokémon. It's sort of like an array of booleans, but instead of each entry in the array taking up one byte, it takes up one bit. So one byte represents eight entries in the array. As you might have guessed, this array is stored in Pokédex order. So when you encounter a MissingNo., it tries to mark that you've encountered a hypothetical 256th Pokémon. But since there are only 151 Pokémon in the game, this ends up writing way past the space used for this. The bitmap has to be rounded up to a number of bits that's divisible by eight, since it has to fit in whole bytes. So what ends up happening here is there are 152 bits for real Pokémon, and where it ends up trying to write is the high bit of the 13th byte after the end of your Pokédex. Now, the inventory is what's stored immediately after your Pokédex in RAM. The inventory is stored as one byte for the number of items that you have, and then, for each item, there's one byte for its ID and then one byte for its quantity. So that means the byte that it tries to write to corresponds with the quantity of the sixth item in your inventory. And another way of saying it sets the high bit of the quantity of the sixth item in your inventory is that it adds 128 of that item, as long as you had fewer than 128 before. Now, one other side effect of encountering MissingNo. is that if you had beaten the game when you performed the glitch, you'd notice that the place where it stored the team that you used to beat the game was now corrupted. This is caused by MissingNo.'s sprite. Remember when I pointed out that the pause at the start of the fight was abnormally long? This corruption is why that happens. Due to the amount of space they needed to decompress these sprites, they can't do it in the console's RAM, so instead they do it on the cartridge's persistent storage. The space they use for this is large enough for a 7x7 sprite, which is the largest that appears in the game, but the data that represents MissingNo.'s sprite says that it's 13x13. So they write way past the end of that buffer, and the next thing on the cartridge's storage is the Hall of Fame. But because MissingNo.'s sprite was read from ROM, not RAM, that means that the sprite data never changed, and so everybody who did this glitch would see the same corrupted Hall of Fame. Although you wouldn't know it immediately, because some of the names of the Pokémon would include things like the control character for printing your rival's name. So a lot of people would see an Omanyte named Gary, because Gary was a really common rival name, but it would change a little bit. Now, this bug would have been avoided if there was some bounds checking in the sprite decompression code. But like I mentioned before, everything in this game was optimized for code size. If you're only dealing with a known set of trusted inputs, omitting these checks seems perfectly reasonable. When the code received real sprites, it always behaved perfectly. The only reason this code misbehaved was because of a completely unrelated bug that caused it to get garbage data. Now, those are the only two abnormal effects of encountering a MissingNo. compared to encountering any other Pokémon. 
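For the item duplication specifically, the offsets in the explanation above line up. Here is a hedged sketch of the arithmetic, following the talk's description of the layout (the variable names are mine):

```rust
fn main() {
    // MissingNo.'s Pokédex number is 0; subtracting 1 on a u8 wraps to 255,
    // so the game tries to mark a hypothetical 256th Pokémon as seen.
    let bit_index = 0u8.wrapping_sub(1) as usize; // 255

    // One "seen" bit per Pokémon, rounded up to whole bytes: 152 bits = 19 bytes.
    let pokedex_seen_len = 19;

    let byte_offset = bit_index / 8; // 31
    let bit = bit_index % 8;         // 7, the high bit

    // Byte 31 is the 13th byte past the end of the seen flags...
    assert_eq!(byte_offset - pokedex_seen_len, 12);

    // ...and the inventory sits right after the Pokédex in RAM:
    // [count][item 1 id][item 1 qty][item 2 id][item 2 qty]...
    // Offset 12 into that layout is the quantity of the sixth item.
    let sixth_item_qty_offset = 1 + 5 * 2 + 1;
    assert_eq!(byte_offset - pokedex_seen_len, sixth_item_qty_offset);

    // Setting the high bit ORs in 128: four Rare Candies become 132.
    let quantity: u8 = 4;
    assert_eq!(quantity | (1u8 << bit), 132);
    println!("new quantity: {}", quantity | (1u8 << bit));
}
```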
Remember that the main way this glitch spread was through word of mouth. That means that there were a lot of untrue or half-true rumors that spread around, and I'd like to debunk a few of those. The biggest piece of misinformation you might have heard is: don't catch MissingNo. or it'll corrupt your save. And this is just straight up false. There are no ill effects of catching MissingNo., and there's really nothing about it that can't be saved normally. I think the source of this misinformation was a very specific problem that can arise with Tick-M. In the games you can bring up to 6 Pokémon with you in your party. If you catch another one when your party is full, it gets sent to a storage system. And when you open up this storage system later, the game has to recompute the stats for all of those stored Pokémon. And there's a bug in this calculation where, if it tries to compute them for a level 0 Pokémon, it gets into an infinite loop. Now, you're never supposed to be able to encounter a level 0 Pokémon, but if you did this glitch with a custom name, you could always encounter a level 0 Tick-M. It would always occupy the bottom two spots in that encounter table. And since at the point you did this glitch you probably had 6 Pokémon in your party, that means it probably went to storage, so I think this was the source of that rumor. Another thing you might have heard is that catching MissingNo. would cause all sorts of graphical glitches. Nintendo even put out a statement saying to try releasing it to fix the scrambled graphics, and if that doesn't work, you need to restart your game, and all of this is just nonsense. There's a specific mirroring effect you can cause if you view the stats screen for MissingNo. What happens here is that on the stats screen, the sprite for the Pokémon is displayed mirrored, and there's a byte that says: just render this sprite mirrored. And for whatever reason, when you view this screen for MissingNo., it doesn't set that byte back to 0 afterwards. But only front-facing sprites are supposed to be rendered mirrored, so any time it's a sprite that represents something's back, it gets this weird jagged effect. But this would go away if you viewed the stats screen for any other Pokémon, because it would set that byte back to 0 correctly. There were some bigger glitches if you had MissingNo. in the follow-up game, Pokémon Yellow, but in that game they also fixed the bug that let you encounter MissingNo. in the first place, so I don't think that's the source of this. Finally, encountering MissingNo. wouldn't save your game. This is a really weird rumor, and I'm surprised it even got started, because it's so easy to verify as false: you just do a MissingNo. encounter and reset without saving and see, yeah, no, that did not save. I think the source of this one is an N64 game called Pokémon Stadium. It included an emulator and let you play the first two generations of Pokémon games, and whenever the cartridge's storage was written to in that emulator, it would display the word saved on screen. So when that buffer overrun happened that corrupted the Hall of Fame, it would cause the N64 emulator to display saved on screen, and I think that's where this one came from. So now that we've seen every piece of this glitch, we can see that it was just a bunch of small, seemingly benign interactions between unrelated bits of code. No individual piece of this glitch stands out to me as insane or something that obviously would have been stopped in code review. 
When you combine all of this together, you get one of the most famous glitches of all time, but it's not the result of some horrendously bad coding or lack of QA or any of the other things you might hear people say about this game. Every piece of this glitch by itself was relatively benign, or just due to completely unrelated parts of the code interacting in ways that nobody would have expected. And this was handwritten in assembly under massive space constraints. Every instruction mattered. I certainly don't think I would have done any better than they did, and I don't think anybody watching this would have either. A phrase that I've heard from folks making fun of the glitches in this game is completely broken. I think we should just remove that from our vocabulary entirely. In this case, and many others where you would try to use that terminology, it's more likely the software was developed under some constraints that you weren't aware of, and you wouldn't do better in the same circumstances. Sure, these days they're less likely to be technological constraints, but every single one of us has worked on a project where two days before the deadline the requirements changed out from underneath you, or your company suddenly pivoted and now you do medical services and you have to figure out how to make a bunch of code relevant for that. To me, though, a lot of this glitch just boiled down to because assembly. It's really easy for us to take the technologies we have at our disposal today for granted. Today, code size is rarely a hard constraint. You're unlikely to ever work on a project where this binary has to be 27k or smaller or we can't ship at all. When code size matters today, it's usually because of CPU caches, and it's a thing we find while optimizing our code. And we run our code on machines powerful enough to just include all sorts of safety checks and never give it a second thought. But in 1996, just use Rust wasn't an option, and even using C wasn't an option. I'm really glad that we don't live in that world anymore. There's a really high quality disassembly of the game available, which I used to research this talk, called pokered. It doesn't have the comments in it that the real source code would have, but this team took the machine code from the game, disassembled it, went from there, figured out where all of the labels would have been and where they would have used macros, and turned it into something that resembles real source code somebody would have written. It was an amazing project, and it was invaluable for preparing this talk. So a huge shout out to the team who worked on that. I also want to shout out the organizers. This has been a great conference, and doing a virtual conference like this is a lot of work. I want to give a special shout out to Nell Shamrell, who led the program committee and made it a point to do multiple run-throughs of every talk with the speakers before they presented them. That means that you got a much higher quality conference than you would have otherwise. So thank you so much. Finally, this talk was co-authored with my partner Tess, and she also helped me with a ton of the slides. Tess, I love you. Thank you so much for working on this talk with me. It was a blast. If you have any questions, I will be in the Discord immediately after this. If you're watching this live in Europe, go to bed. It's like 3 a.m. What are you doing? If you're watching a recording and you want to ask me a question, here's my contact info. 
Feel free to reach out and I'm happy to answer any questions you might have. Thank you so much for watching and bye.
|
Closing Keynote by Siân Griffin
|
10.5446/52199 (DOI)
|
Hi, I'm Harrison Bachrach. I typically say I'm a software engineer during introductions, but chances are high that you're a software engineer too. So I guess I bake. But I guess everyone is baking right now. So to clarify, I bake challah. Here's some challah before it's edible, and here's some challah while it's edible. If you haven't heard of challah, it's like that scene at the end of Tangled where all the kids get together and braid Rapunzel's hair so it's not dragging everywhere, but unlike hair, it's not gluten-free. I actually had a frame from Tangled in here, but I was worried about the stream being taken down, so that's why you have some generic stock footage. So if you're like me, you've probably gotten really passionate about programming at some point for some reason and wanted to do a side project. Cool. Me too. It happens to me all the time. And things always kind of fall apart, right? First you have an idea. A lot of us are spending a lot of time indoors all of a sudden. So what if you were to start an indoor plant collection? Except if you're like me, you forget to water these things or forget which plant needs what sort of light, and all of a sudden you go from this to this. I don't think anyone needs this kind of vibe in their home right now. So can software help? I mean, let's be real. Probably not. Software can be pretty bad, but hey, maybe. You know, I've been looking to make something in my spare time. I'm primarily a front-end developer, but I've been hearing about Rust web server projects like Actix and Rocket. Maybe I can make a web app or something. Yeah, yeah. You'll be able to upload pictures of your plants over time. You'll be able to give them names. It'll tell you when they need to be watered, which soil type they need, and where to find that soil. Tons of features. Great. So you get started and work on it until 3 a.m., and you get to the point where you can create a plant, and you deploy it to Heroku, and the automated tests run on push, and some time passes. And now the thing that was so much fun for an evening is a stub of a program, and you've lost all interest or enthusiasm. So I feel like this is probably a familiar experience for a lot of you. It's a very familiar experience for me. So what happened? We're going to come back to that at the end of this talk, but first, let's talk a little bit about Rust. I forget how I found out about Rust, but I was intrigued as soon as I got the pitch. I really like powerful static type systems, which is funny because all of the languages I use daily are dynamically typed. Though maybe I've just been burned too many times by my own silly mistakes. Who knows? So I'd read the Rust book a few times before, off and on, but I'd never really made anything with it. Partly because I didn't know how to start and partly because I was intimidated, but eventually I saw how easy it was to make command line applications, and that got me working on my first Rust program, root. Side note: thanks to all the people who made the section of the Rust site about what to make with Rust. I went to the homepage for the first time about a year after initially being introduced to Rust, not knowing what to actually build, and I found this information super useful in getting up and running on my first project. Right, so this talk is about solo projects. So let's talk about the solo project of mine that actually worked. Root, or rud if you want to call it that, is a pretty straightforward program. Think of it like tree, but instead of directories and files, it works on data. 
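The slides aren't captured in this transcript, so as a hedged illustration of the kind of input and output the speaker is about to describe, here is a much-simplified sketch. The field names ("name", "children") and the indented output are my assumptions; root's real schema and its box-drawing diagrams may well differ. It assumes serde (with the derive feature) and serde_json as dependencies:

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct Node {
    name: String,
    #[serde(default)]
    children: Vec<Node>,
}

// Walk the tree recursively, printing each node indented under its parent.
fn print_tree(node: &Node, depth: usize) {
    println!("{}{}", "  ".repeat(depth), node.name);
    for child in &node.children {
        print_tree(child, depth + 1);
    }
}

fn main() {
    let input = r#"{ "name": "cool", "children": [
        { "name": "beans", "children": [ { "name": "man" } ] }
    ] }"#;
    let tree: Node = serde_json::from_str(input).expect("valid JSON");
    print_tree(&tree, 0);
    // Prints:
    // cool
    //   beans
    //     man
}
```

root itself draws proper branch characters; this sketch only shows the shape of the problem: parse a small tree description, then walk it recursively.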
For example, you can input some JSON that you might be using to represent a tree of some sort. Here we have a root node with the name cool, a single child node with the name beans, and a single grandchild node with the name man. But that's kind of hard to see. So root takes that JSON and converts it into a diagram like this. Cool beans man. The reason I'm talking about root is that it took almost two months to get it into its current state. If you're thinking that's a long time to make a program to do that, I think you're right. I'm very much a beginner when it comes to Rust, and it's very possible that many of you could get this done in a single evening. Now, I've done small one-night projects before. I made a widget for making digital flowers to send to people. I made a dumb two-headed version of Snake with terrible controls. And both were fun. But they fell into the same issues that came with the gardening app from earlier: I never came back to them. So after working on root for weeks and weeks, I started asking myself, why did this project turn out differently? What was different? Well, it became a habit. For a lot of us, side projects are something that we want to do in the abstract, but have trouble doing in the moment. If you've ever come up with New Year's resolutions and almost inevitably failed those resolutions, that concept might be familiar to you. And that's probably most of you, assuming you had any. So what goes into a successful habit? Firstly, you have to do it. Of course, that's a lot easier said than done. Now we're about to step into a big vat of psychology research. And I want to give a huge disclaimer that I'm a software engineer, not a psychologist or a mental health professional. I'm also not a doctor nor a lawyer. And it's kind of a bummer, because both of those careers are much easier to explain to your parents. So in prepping for this talk, I read this paper called Promoting Habit Formation. And in it, I found this super academic-sounding quote: intentions to act are significant precursors of initiation of behavior, but intention translation is imperfect. What the heck does that mean? Well, just because you want to do something, and you know that you want that thing to be done, doesn't mean that you'll actually do it when the opportunity arrives. There are a lot of reasons why people abandon projects, but a major one is not feeling satisfied. Now, this may seem silly to say out loud, but how satisfied you feel by working on a project is a big factor in whether or not you'll stick with it. If building your project isn't satisfying, you might want to reflect on that for a bit. Why did you think it would be satisfying? If it was satisfying before, what changed? Not every minute is going to be super fun, of course, but if it feels really dull for a while, it may make sense to step back and consider how to address that or how to move on to something that might be more satisfying for you. One thing to note here is that you can encourage your satisfaction. You may not think about how much you improve on a skill while working on a project, but that reflection can be super uplifting and help you see value in the work that you've already done. Now, it all starts with a good goal. What do you want to do? You'll often come up with a simple goal at first when starting on a project. You might write it down or just have it in your head, but the work between you and your goal may grow. This is often how things get unsatisfying. 
It no longer feels like the work that you're doing is actually progressing you towards that goal, which is the thing that got you into this in the first place. Coming up with unrealistic goals is really easy. For example, aiming to catch all Pokémon is a ludicrous goal to start out with. Now, I feel bad for making fun of a child, but Ash Ketchum is an absolute fool. Do you know how many kinds of Pokémon there are? Almost 900. So, let's say we have some realistic goal. How do we make sure we actually do the work? One common idea in habit formation is something called automaticity, which is basically the act of doing something automatically, without meaning to do it or even being aware of it. For example, for some of you, flossing your teeth is often an automatic action that you don't actively plan out or think about. Now, you probably can't write code as unconsciously as you can floss your teeth, but you can start that way. That is, you can't write a test without thinking about it, but you can sit down in front of your computer and open up your editor pretty automatically. So, when should you work on your project? It's very common to pick a fixed time to start working on something. For example, I will go to the computer at 7pm. But fixed times aren't really how we structure a lot of our day. Instead, our day is broken up by events into periods of activity. You take a shower, you clean up the table, you finish eating dinner. The gaps between these periods are really significant and tend to stick in your mind much better. These are the times to schedule working on your project. But when those events happen, what do you do? Even when you remember to work on a project, starting can be daunting. In order to make a habit, having a plan helps a lot. The research is pretty clear here. There's one type of plan that's super helpful, called implementation intentions. However, that name is a pain to say, so I'm just going to call them if-then plans. They have a form like this: if I encounter x, then I will do y in order to reach goal z. Let's say you want to start a habit of drinking more water. An if-then plan that might help is: if I get out of bed in the morning, then I will fill my water bottle and put it on my desk. The idea is to get simple tasks into your routine so that the harder tasks, like remembering to drink throughout the day, get easier. For software projects, getting your environment set up, like opening up your editor and project tracker, can make starting work a lot less scary. Now, in working on your habit, there are likely going to be some struggles, whether it's difficulty in building the habit in the first place, frustration when writing code, or uncertainty on how to make important architectural decisions. You can create what are called coping plans on how to deal with these issues when they crop up. Research has found that coping plans, when applied to physical exercise, had a dramatic effect on the endurance of those habits. Of those in the study that made specific plans on when, where, and how to exercise, two months later around 44% were still performing at least an hour and a half of exercise per week. However, for those who had made coping plans, 71% were still working out that much. Forming concrete plans on how to deal with these roadblocks is super valuable for success, but also for mental health. There very well may be times when working on your project where you feel really useless or stupid, but these are neither accurate nor helpful thoughts. 
Programming can be a really demoralizing hobby. Imposter syndrome is rife within our industry, and those feelings are often compounded for underrepresented engineers. I wasn't able to find much research on coping plans that might apply to side projects, but in my experience, these were super helpful. First, talk with people. Talking with people, engineers and non-engineers alike, can be a huge relief and put things in perspective. And of course, it's fun. I think we could all use a little bit of fun right now. Secondly, find some nice communities online. In addition to being a great place to ask questions, people working on similar things can often understand your frustrations in a deeply validating way. I've had a bunch of super great interactions on the Rust Discord, for example. That said, there are a lot of not-nice communities online. I remember I asked a question on tree traversal on Stack Overflow, and the first comment was: stop doing this in Rust. Number three, just step away from the computer. Oftentimes, all you need is a bit of distance from what you're working on to help clear your mind and relax you. Not only that, but I've found that most of the time I figure out a tricky problem when I'm washing dishes or about to fall asleep. Now, with all these suggestions, there's one thing that I'll advise against, and that is setting external rewards for yourself. Say, a little extra dessert if you work a little bit longer. These have actually been shown to reduce intrinsic motivation. The risk is that the rewards can often become the goal in and of themselves, or that the rewards mean less and less over time. The rewards also tend to have less of an effect if you have a very high awareness of them, and we're talking about solo projects here, so that seems very likely. Now, there's a very common acronym, SMART, that is referenced a lot in articles on successful habit building and goal setting. It was initially introduced by this guy named George T. Doran, a consultant and director at the Washington Water Power Company, in an article intended for managers on how to set corporate objectives, but since its introduction, the ideas have gone through a few changes that better align with personal goal setting for individuals. While they weren't derived from academic research, they are still a very useful tool that aligns with many of the conclusions from various studies on habit formation. Anyway, what are SMART goals? A SMART goal is specific. It's very clear what the problem is, and if your project is intended to solve a problem, it's very clear what needs to get done or what you need to build. A SMART goal is measurable, meaning it should be clear what's required for it to be workable, or how close it is to becoming workable. Notice I didn't say done; more on that later. A SMART goal is achievable. It's something that isn't too far outside the realm of possibility. I'd love to build a laptop from scratch, for example, but for me, that's not exactly realistic. A SMART goal is relevant to you. It needs to be something that you care about, something that motivates you. It's really hard to put a lot of time into something that is not directly meaningful to you. Lastly, a SMART goal is time-bound. Now, in most invocations of SMART, this refers to a sort of due date, but that often doesn't apply to side projects. It may never be truly done. Additionally, due dates often create unnecessary stress, and this is something that should be enjoyable. 
Instead, I'd like to think of time bound as referring to your engagement with the project on a day-to-day level. Limit your habitual time instead of your total time. Let's see how SMART connects to root. First, the initial goal was just to write a program that could draw those tree diagrams from an easy-to-type input. The project expanded beyond that afterwards, of course, but the initial idea was very focused. As I went through adding features to root, I really only thought about everything one small step at a time. This meant that as I wrote more code, the progress felt really real and was directly measurable. I could literally count the number of things that I had added. This sort of measurement elegantly translates to filing issues, creating tickets, or however you like to do task tracking. This can be as simple as a list on a piece of paper, which is what I used, or it can be as sophisticated as an integration with GitHub issues or a Clubhouse project. Another natural way to measure progress is with semantic versioning, or whatever versioning scheme you might elect to use. Each release is a concrete indication of progress, whether it is a feature addition, a bug fix, or something else entirely. Both ticket tracking and semantic versioning serve as reminders to recognize the achievements that you made along the way, which is very easy to forget, especially when the intermediate product of your project doesn't feel very impressive. When I started planning root, I kept the scope small. I wasn't going to have any command line options. It would do one thing and only one thing. And even that was too hard. Starting in Rust was so difficult that I had to make an initial prototype in Ruby, my primary language. Root as a project felt very useful to me. It came out of the very real issue that communicating the specifics of tree data structures is super hard. I deal with trees a lot at work, and talking about various corner cases with specific tree structures was always super challenging. In addition to the problem domain, root gave me the opportunity to learn about parsers as well as Rust, both of which I had been partially exposed to, but hadn't really been able to sink my teeth into. That prototype from earlier also aided the motivation on continuing with Rust, as it showed that switching to Rust dropped the execution time from 200 milliseconds to 5 milliseconds. The static type system also caught a number of bugs that would have been a real pain later on. So when working on root, I would really only spend a few hours a night and only two to three nights a week. When I got stuck or frustrated on a problem, I would give it a solid attempt, but if that wasn't enough, I would take a break or stop on it for the day. Typically, I would continue thinking about the problem in the back of my mind, and when I came back to it, my frustration was replaced with excitement, and I would end up solving it very quickly, most of the time anyway. In addition to the SMART attributes I just mentioned, I found a few other things really, really helpful. Showing what I was working on to other people was very encouraging, even when they weren't super impressed. It still made what I had made more real to me. Celebrating victories also made the work more fun. For example, I tracked downloads on crates.io, and I still very vividly remember passing 50 downloads and being wowed that anyone else might find what I had made useful. So now we're equipped to look back at what happened earlier with our gardening project.
Firstly, the scope of the project just ballooned. What started as a way to track caring for plants became something much bigger. Learning Rust is also a big challenge, and creating a web application is no small feat. In addition to learning Rust, we also needed to create an authentication system, we needed to set up how to handle assets, we needed to deploy to some server, and all of that takes time before we can get up and running. Working for all those hours sure was fun, but it's not sustainable. You end up exhausted the next day and associating all of that exhaustion with the work that kept you up. Also, one thing missing from the story is anyone else. No talking with friends to bounce ideas off of, no chatting online when running into roadblocks. Working alone is certainly possible, but it's a tough path to follow. All of this is well and good, but I do need to point out a bunch of things because, well, I don't want people to yell at me. I fear being yelled at so much I even named my now abandoned ultra depressing comic series "please stop yelling online." Really. Anyway, asterisks. Firstly, don't feel after watching this talk that you ought to do a programming side project. It's not for everyone, it's not even for me a lot of the time. That's okay, you can have other hobbies. I imagine a good deal of you write software for your job. If you don't want to bring that home, then don't. Another thing, right now we're living through a historic global pandemic that has dramatically impacted most of our lives. Not only that, but people are becoming even more aware of the very present problem of police brutality against Black people in America. As a result, you might feel demotivated to work on a side project right now. Even if you have tons and tons of free time, that's okay, side projects are optional. Also, you might follow this advice, start a project, and abandon it. That's okay, this stuff isn't a silver bullet. No study about any of this stuff promises 100% success. That doesn't say anything about you. Stuff happens. Lastly, again, please don't yell at me, I am very weak and I will be super sad. So in conclusion, make a plan, not a schedule. Build something that you care about. Think small and be kind to yourself. I want to thank everyone that was involved in setting up RustConf. I also want to thank my parents as well as my partner, Anne-Marie. Everyone has been super helpful in getting this talk to this stage. If you want to follow me online, my website is HollaScript.com, that's with a C-H. And my Twitter is HarryBee. So thank you.
|
Have you ever started a solo project that never really came together? Been able to write something up in a weekend but dropped it soon after? Me too! In this talk I go over how I was able to break from that and build my first CLI tool, written in Rust. Drawing from an understanding of habit formation, we’ll examine how to plan projects in a way that keeps you fulfilled and stops you from veering off track.
|
10.5446/52200 (DOI)
|
Hey everyone, my name is Mika and my pronouns are she and her. Today I'm going to talk about my beginnings with game development using the Amethyst game engine. When I first decided to do this talk, I had intended to share a full 2D roguelike game, but I found that my focus was in other areas of game development that I found more interesting to explore. Learning game development and Rust has always been an on and off learning endeavor of mine and I'm excited to share some of those learnings with you today. I'm just a developer looking to talk about one of her many in progress projects. So when I decided to learn a new programming language, I tried to identify criteria outlining some of the things I'd like to get out of it. And since my development experience was primarily in front end web development, I wanted to try picking up another language that was considered low level or whose use cases were designed to work closer to the hardware. Another was documentation. I think most people will appreciate that in any area of software development. And being someone who likes to do independent reading and research on a topic at my own pace, it really helps to have accessible documentation to refer to. And finally, having a welcome community helps lower the barrier to learning. This was actually one of the biggest reasons that got me interested in Rust since the community surrounding it is dedicated to making it a fun and safe place to be. So in general, I was looking to learn a new language that presented not only a challenge, but also a history that emphasized open collaboration. And that's how I arrived at Rust. And it also helps that the language's mascot is pretty cute. So I think like most people who are getting it started with Rust, one of the first things I did to jumpstart my learning was a combination of reading the Rust programming language and trying out some of the concepts in the Rust playground. And this was a good way for me to get a bit of a foundation before jumping into actual Rust projects. And once I felt like I had some of the basics of Rust, I decided to contribute to a few open source projects. One of the first things I did was update the standard library documentation for the string slice type. And later on, I made some contributions to the Servo web browser engine project where I implemented a few of the missing attributes on the mouse event interface. And unfortunately, I haven't been able to contribute to other open source projects this year, but hopefully I can find some time in the future. But of course, there's just so much to know about Rust and so much to learn about it. And it's hard to keep track of what I've learned. And while I was writing this talk, it was actually already difficult to get a grasp of where I wanted to begin and eventually lead into what I've learned about game development. The amount of things to do can get pretty overwhelming. So to help with this, I had to choose an area I was interested in. And so I began my search. And to no one's surprise, I arrived at game development. Video games have always been an important part of my life, and it felt natural to choose this area of software development to sink my teeth into. I wasn't looking to create the next big indie title, but rather just create a few projects that I could lightly work on. And some experimentation had me interested in developing a roguelike. 
There's a couple good tutorials on how to create your own, and while I was working on my first game, I found that I was actually more interested in creating a 2D rendered game. And I wanted to do this in Rust. And so one of the first places I went to get some information was checking out the Are We Game Yet? website. I thought this was a good place to get some ideas of what I might need to build a 2D game using Rust. But of course, the game development space can be pretty intimidating in itself. And as someone with basic knowledge of computer graphics, amongst other things, it was difficult knowing where to start. I was also interested in drawing some 2D sprites to the screen more so, and so I needed a library that would do most of the heavy lifting for me. And luckily, that is what game engines are for. So when looking for a game engine to use, I was more concerned about finding one that provided accessible documentation and hands-on learning and some examples to help demonstrate certain concepts. And from the Are We Game Yet? website, I was able to find the Amethyst game engine. This is a screenshot of the project's landing page at amethyst.rs. As I researched the project, I found that it checked off everything I was looking for in a game development written in Rust. And the documentation for Amethyst was easily available through the project's website. I never really felt the resources they provided were scattered, which is a huge plus for me. And some of the things I found really nice is that Amethyst provides a book that entails not only how to use the engine, but also talks a little bit about ECS, which is a topic I will talk about later. It also provides links to its communities, online API reference, and also to their examples. And the book also has a section where you can build a Pong clone using Amethyst. This was a really nice hands-on project to help understand the basics of using the engine. And I also felt that it was enough to start experimenting with my own projects once I was finished with it. And finally, there were many implementation examples showing what Amethyst can do. And these examples were readily compilable and were easy to experiment with. Another nice thing about this project was that there are a few games available on its blog that showcase the game engine's features. And after doing some learning and experimentation with Rust, I decided to take what I learned and apply it to what I wanted to make in the first place, which was simply drawing 2D sprites to the screen. I wanted to break down this section into three parts, explaining what ECS is, showing how I went about implementing a 2D sprite animation using this framework, and finally, I'll extend on this by explaining how I went about implementing a camera follow system for the player. So what exactly is ECS? ECS stands for Entity Component System, where an entity in your game represents a single object. An entity often represented with a single ID can be composed of a number of components where a component acts like a container for data that can describe an aspect of an object. And finally, a system is a piece of logic that can operate on one or more entities in the game. ECS is a common pattern used in game development, and it makes it easy to compose objects since components can be arbitrarily added to entities. And so there's that piece of theory. When I first read about ECS, I needed some time to build a simple mental model of what it could look like in practice. 
And so let's shift into something that's more fun to talk about while also trying to build on this mental model. Let's talk about Animal Crossing New Horizons on the Switch. Animal Crossing is a cute laid back game where you get to build an island society of cute anthropomorphic animals. And in this image, and the game in general, we can probably apply some of the ideas of ECS to help build our understanding of it. There's a lot of things on the island that we can identify as objects, which are represented by entities in the world of Animal Crossing New Horizons. For example, the player character is an entity, Isabelle is an entity, Timmy and Tommy are entities, and even this tree is an entity. You get the point. But it feels wrong to describe our island residents as just entities. After all, an entity is usually represented by a single ID that has a number of components associated with it. We need to be able to associate some data with them. And to keep things simple, I wanted to focus on describing data for the villagers in Animal Crossing. For some context, villagers are non-playable characters that can move to your island either by randomly moving there or by the player finding them on another island they're exploring. And they become residents of the little town you've created. They can also do activities similar to the player, such as fishing, gardening and exploring. And the player can interact with these villagers and even develop friendships with them. This being my first Animal Crossing game, I thought what made a villager special is that they added more characters to the island by, well, being cute. Though I've learned recently that each villager has attributes that are unique to them and that actually influence how they interact with other villagers on the island. And so here's one of the first villagers that moved to my island. Her name is Flurry and she's a hamster. And the moment I found her while exploring another island, I just had to take her home with me. I just love how she has those cute little blue eyebrows and paws and red t-shirt and she's just the cutest thing ever. Anyway, there are specific attributes villagers have. And I didn't list all of them here since I wanted to keep this simple. The attributes that specifically describe a villager will be its species, personality and hobby. These attributes will be contained by what I will call a villager component. And you'll notice that the name and birthday attributes weren't included, because I think these attributes should be included in a separate component, which we will talk about later. So with Amethyst, declaring components requires us to define the data being described. And in this example, we declare a Villager struct, which contains that component's species, personality and hobby data. And once the underlying data for the component is defined, we then have to implement the Component trait for the Villager. The storage type determines how the component will be stored. And in this example, the dense vector storage type, where elements are stored in a contiguous vector, allows for lower memory usage and also works well with larger components. And this diagram is to help visualize how the villager component is added to an entity in the game. In Amethyst, whenever a new component is created, it's added to a storage responsible for storing components of a specific type.
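To make that concrete, here is a minimal sketch of what a component like this might look like with Amethyst's ECS. The enum variants and field names are illustrative assumptions, not the speaker's actual code, and exact module paths can differ between Amethyst versions:

use amethyst::ecs::{Component, DenseVecStorage};

// Hypothetical attribute types; the real game data would be richer.
#[derive(Debug)]
pub enum Species { Hamster, Cat, Duck }
#[derive(Debug)]
pub enum Personality { Normal, Peppy, Lazy }
#[derive(Debug)]
pub enum Hobby { Nature, Fitness, Music }

// The Villager component is just a plain struct holding that data.
pub struct Villager {
    pub species: Species,
    pub personality: Personality,
    pub hobby: Hobby,
}

// Implementing Component tells the ECS how to store it; DenseVecStorage
// keeps the component instances packed in a contiguous vector.
impl Component for Villager {
    type Storage = DenseVecStorage<Self>;
}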
And in this diagram, we can create a number of villager components and add them to specific entities. Here we can see that entity zero is associated with the villager component that describes Flurry, and this is similarly done for the other entities representing villagers. And so to do this with Amethyst, we have to get a reference to the world, which acts as a container for resources in your game. To create a new entity in the game, we first need to import the builder trait, which allows us to create the entity builder using create_entity. Using the entity builder, we can add components to that entity. And in this example, we add the villager component we defined earlier. Finally, we can finish building and get the actual entity by calling build. Now we can extend this by using the name and birthday attributes in another component. I decided to call it the resident component since it generically describes any character on the island. And in this example, I've added another entity, entity three, meant to represent the player character. And since villagers also have names and birthdays, the resident component can also be reused and attached to entities with the villager component. So now let's move into the section on getting a simple sprite sheet animation working using Amethyst. But before we get started, I wanted to explain that a sprite sheet animation is simply taking a sprite sheet and changing which sprite image, or frame, is drawn to the screen in rapid succession. This gives the illusion of movement, much like how one would see with a flip book. So the first thing we should do is describe the relevant components for the sprite sheet animation. The first is the animation component, which has the attributes frames, frame duration and index. Frames describes the number of sprite images for one animation cycle. Frame duration describes how long each image should be shown for. And index indicates where in the sprite sheet the first image of the animation is. The second component, the sprite render component, is provided by Amethyst. It's responsible for containing data about the sprite sheet and which image from the sheet to draw to the screen. The data used to describe this is a sprite sheet, which is a reference to the sprite sheet asset. In Amethyst, these references to texture assets are known as handles, and sprite number is the location of the sprite image in the sprite sheet. Now associating a component with an entity doesn't do much on its own. Our animation needs a way to manipulate what sprite image is drawn to the screen during each game cycle. And this is where the implementation of an animation system comes in. In this diagram, we implement an animation system with Amethyst by having that system read or write data for different component storages during each iteration of the game loop. This example shows that the animation system being implemented will read from the time resource. Resources in Amethyst are containers of data that are not associated with an entity. Then the animation system reads from the animation component storage. And finally, the animation system will write to an entity's sprite render component; in particular, the sprite render's sprite number value will be modified to tell Amethyst to draw the next image in the animation sequence to the screen. And this is what the animation system would look like using Amethyst. Implementing a system involves implementing the System trait on a struct. And this system is then executed during each iteration of the game loop.
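As a rough sketch of the entity building and the two custom components described here (the Resident fields, the Animation fields, and Flurry's data are assumptions for illustration; the Villager type is the one from the earlier sketch, and the builder imports can vary between Amethyst versions):

use amethyst::ecs::{Component, DenseVecStorage};
use amethyst::prelude::*; // brings Builder and WorldExt in for create_entity()

// Shared data for any character on the island.
pub struct Resident {
    pub name: String,
    pub birthday: String,
}
impl Component for Resident {
    type Storage = DenseVecStorage<Self>;
}

// Custom animation data: where the animation starts in the sprite sheet,
// how many frames it has, and how long each frame is shown.
pub struct Animation {
    pub index: usize,
    pub frames: usize,
    pub frame_duration: f32,
}
impl Component for Animation {
    type Storage = DenseVecStorage<Self>;
}

fn create_flurry(world: &mut World) {
    world
        .create_entity()
        // .with(Villager { .. }) from the earlier sketch could be chained here too.
        .with(Resident { name: "Flurry".into(), birthday: "February 2".into() })
        .build();
}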
And when we define the system, amethyst requires us to define a type called system data, which tells the system what data from the engine it should expect to get and how it should be interacting with it. Some of the system data types amethyst provides are read storage, which gives the system an immutable reference to the entire storage containing the animation components. On the flip side, there is also the write storage data type, which gives us a mutable reference to the entire storage containing the sprite render components. And then there's the read data type. And this gives us an immutable reference to the time resource. And now the next step is to implement the systems run method. In our animation systems run method, we get all entities with an associated animation and sprite render component. This is done using join. And this allows for the joining of components for iteration over entities with specific components. And now that we have that entity, we need to find which frame to use for the sprite render's number value, depending on the game's elapsed time. And finally, we can modify the entity sprite render to draw the new sprite image. And here's the final result of an idle animation for the wizard sprite. However, it would be much more interesting if the player could move around some sort of environment. To do this, we can design another system that is responsible for moving the player around. And for this system, our game will read from the input handler resource provided by Amethyst and will also be writing to an entity's transform component storage. The transform component storage is available through Amethyst, and it's a common component to use. This is because transform can describe an entity's position, rotation, scale, and much more. And for this game, we're really only concerned about modifying an entity's position and rotation in response to user input. And now that a system is in place for the player to move around, it might be nice to have another system that consists of a camera that tracks the player's movement within the environment. And at the time of writing this, I found that simply attaching subject tags to particular entities made implementing the system straightforward. In particular, I created a player subject component that I associated with an entity designated as a player. The camera subject component is also associated with an entity designed as the camera view. And the camera follow system would then modify the associated transform component for both the player and camera entities. And the result would then look something like this. Here I have the same wizard sprite moving through the game's environment with a camera following their movements. And there's also a few things I wanted to say about this demo. Earlier in this talk, I mentioned how I dabbled a bit developing a simple roguelike. And one thing that I became particularly interested in is being able to generate a random tile map that would act as the environment the player could move around in. And so as another mini side project that was separate from my project that used Amethyst, I played around with developing a tile map generator that I could use potentially in my game. And the image I have here is a generated background image used in the demo. And for this small project, I used a create called image to extract pixel data from a source sprite sheet and had it put into an image buffer. And this image buffer would then be outputted as a PNG image local to the project directory. 
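A hedged sketch of what such a system can look like; it reuses the hypothetical Animation component from the previous sketch, and the Time method names and module paths may differ slightly between Amethyst versions:

use amethyst::core::timing::Time;
use amethyst::ecs::{Join, Read, ReadStorage, System, WriteStorage};
use amethyst::renderer::SpriteRender;

// `Animation` is the custom component sketched earlier.
pub struct AnimationSystem;

impl<'s> System<'s> for AnimationSystem {
    // What this system asks the engine for on every iteration of the game loop.
    type SystemData = (
        ReadStorage<'s, Animation>,     // immutable access to all Animation components
        WriteStorage<'s, SpriteRender>, // mutable access to all SpriteRender components
        Read<'s, Time>,                 // the global Time resource
    );

    fn run(&mut self, (animations, mut sprite_renders, time): Self::SystemData) {
        // join() iterates over every entity that has both components.
        for (animation, sprite_render) in (&animations, &mut sprite_renders).join() {
            let elapsed = time.absolute_time_seconds() as f32;
            let frame = (elapsed / animation.frame_duration) as usize % animation.frames;
            sprite_render.sprite_number = animation.index + frame;
        }
    }
}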
The generated map images aren't perfect, but it was a fun way to learn and see different layouts for the game environment. And since these are only images, I'd have to still extend my game and develop some sort of collision system so that the player cannot move through tiles that are considered walls. So as you can see, I wasn't able to complete a full game yet, but being able to complete a small project like this has made the process much more enjoyable. And the biggest takeaway from my learnings was that it's okay to iterate on project ideas. Do it at your own pace and also have fun. And besides coding, another way to keep myself motivated was to document some of what I learned. I did this by writing a blog post as soon as I finished one section of my project. And not only did it help retain my knowledge over time, but it also challenged me to think about how I can communicate what I learned in a way that was helpful not only for myself, but also others. Prior to this talk, I actually wrote a blog post documenting how I implemented 2D sprite animations using Amethyst. And one of the things I really appreciated my past self for doing was taking the time to establish context and also outline her thought process throughout. And this was the most difficult part and also the most time consuming part of writing this other way short blog post because I really wanted to find a perfect balance between providing enough context while also being forthright in my content. And if anyone's interested, the link to this specific blog post is available at mtiggly.dev. And I wanted to touch on writing with the intent to teach someone. Like it's very useful, even if it's just for yourself, because it will help really ease doubts you have about your knowledge. And one way to do this is to keep a series of blog posts and over time you'll have created your own repository of knowledge. And who knows, maybe someone will find what you wrote about really useful. And so that's the end of my talk. I wanted to thank everyone who came to listen and thank you everyone who made this talk possible. And hopefully anyone who is interested in either Rust or game development will be inspired to build something.
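Since the talk doesn't show the generator itself, here is a small, purely illustrative sketch of the output side of such a tool using the image crate. The wall/floor colors and random threshold are made up, and the real project read tiles out of a source sprite sheet rather than using flat colors:

use image::{Rgb, RgbImage};
use rand::Rng;

// Build a random tile map image: dark pixels stand in for wall tiles,
// light pixels for floor tiles.
fn generate_map(width: u32, height: u32) -> RgbImage {
    let mut rng = rand::thread_rng();
    let mut img = RgbImage::new(width, height);
    for y in 0..height {
        for x in 0..width {
            let is_wall = rng.gen_bool(0.3);
            let pixel = if is_wall { Rgb([40, 40, 60]) } else { Rgb([180, 200, 160]) };
            img.put_pixel(x, y, pixel);
        }
    }
    img
}

fn main() {
    let map = generate_map(64, 64);
    // Written out as a PNG local to the project directory, as described above.
    map.save("generated_map.png").expect("failed to write PNG");
}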
|
My First Rust Project: Creating a Roguelike with Amethyst by Micah Tigley One of the biggest challenges to learning Rust is finding a project to continuously practice that newfound knowledge on. As someone with a background in front-end web development, the world of Rust was new and exciting to someone with limited systems programming experience. There were a number of open-source projects to choose from and so many areas to explore, it was a bit intimidating. I finally settled on Rust's game development community. This talk looks at my journey diving into game development with Rust by building my first roguelike game with the Amethyst game engine.
|
10.5446/52201 (DOI)
|
Hello everyone, this is the controlling telescope hardware with Rust talk. We're going to be talking about telescope hardware and controlling that hardware with Rust from a desktop, interacting with our hardware, and how to do that. So who am I? I'm Ashley, pronouns are she/her. All the code in this talk is going to be on GitHub at the project name scopy. It's a project name that I've been using for this code. And on the right here is me in Seattle last winter taking an image of the Elephant's Trunk Nebula. And the result of that session was this image. So on the left, you have a very bright star that is blowing lots of stellar radiation and whatnot at the gas cloud on the right. More dense parts of that cloud are more resistant to being pushed, so you get these globs of dust remaining behind. And if you really squint, then those globs of dust sort of look like an elephant's trunk, hence the name Elephant's Trunk Nebula. Cool. So we're going to be talking a little bit about how astronomy and astrophotography works, in particular my setup, so we have some context for some technical words and an understanding of what's going on before we jump into the code. This is a Newtonian telescope, which is what I have. Light comes in left, bounces off a parabolic mirror on the right. Then in the middle, or on the left here, is a secondary mirror that redirects the light up top. Then normally, if you're looking through this telescope with your own eyes, you would have a lens here. But instead we have an image sensor that sits directly on the focal point and collects the data directly, and then we can pull the data off this camera. What that actually looks like in real life is like this. So light goes in on the left, the mirror is on the right. Then buried in the middle here is that secondary mirror, which redirects the light up top to that camera. Below that is the mount system. So we have two motorized axes. The top one is called the declination axis. Then below that is the right ascension axis. And these two motors combined let us look anywhere in the sky. So what actually are right ascension and declination? The sky needs a coordinate system to be able to uniquely identify any point in the sky. We start off with using the Earth's latitude and longitude as a base. Latitude is easy. We just say that anything directly over the North Pole is 90 degrees north, anything over the equator is 0 degrees, and anything over the South Pole is negative 90 degrees. But the equivalent of longitude is a little bit more complicated, since the Earth obviously spins and we can't have our coordinate system spinning throughout the sky. On Earth, 0 degrees longitude is defined as being at Greenwich in the UK. In the sky, we define 0 degrees of right ascension, as it's called, as being the direction of the Sun at the spring equinox. It's kind of a weird definition, but that's what it is. So cool. Now we're able to uniquely identify any point in the sky and have that coordinate system. Importantly, right ascension and declination are aligned with the Earth's rotation axis. So our own motors need to be aligned with the Earth's rotation axis, which means that we need to tilt our motors according to the latitude that we're currently at. Seattle is 47 degrees north, and so I've configured my telescope here, to be used at Seattle, to tilt those axes to 47 degrees. Cool. So now we're able to point our telescope anywhere in the sky and image whatever we want to. Now let's actually start getting into code.
We're going to look into the motors first to see how we want to control those motors. So we do a Google search, figure out the hardware that we have, and we dig up the specification manual that our hardware provider provided. So it's a serial communication protocol. That means it's a tube that you send bytes down and you get bytes back, and it's all serial and it's very, very simple and it's pretty great. So in this document are things like 9600 bits a second, no parity, etc. There's various serial communication parameters. If we scroll down in this document, we have various commands that we can send to the motors. So "goto" is the term in astronomy to say, hey telescope, point to this point in the sky. And the way we do this is to send down the serial pipe the letter R and then a right ascension encoded in hexadecimal, followed by a comma, and then the declination encoded in hexadecimal. Then in response the hand controller sends back a hash symbol to say, hey, everything's okay, things succeeded. If we scroll down some more, retrieving data from the mount is a very similar system. We send the letter E and we get back a right ascension and declination in hexadecimal again that says where the telescope is currently pointing in the sky. So that's really useful if things are drifting or whatever. Cool. So we have a serial API and we want to write this in Rust. And so we look online and we find the serialport crate, which is a fantastic crate. It's super fun to use. It lets us use serial ports from Rust. So what that looks like is, if we want to open our connection to the mount, we give it a path and we get back our serial port. Then we're allowed to do some configuration. For example, set that bit rate, set the parity, and here we're setting the timeout. So if the serial port doesn't respond for three seconds, then we kill the connection. Cool. How do we actually send bytes down this serial port? It's pretty simple. There's this write_all method and we just dump the bytes in this buffer down the pipe. Reading is a little more complicated. The trouble is that if we want to read something, we can't just read up until we get a hash, because the data that we're getting might accidentally include a hash symbol, or rather the ASCII value of a hash, in the data that we're receiving. And that's no good. And so instead we need to get the length of the data that we expect to receive up front. And the way we do that is that we pass in a buffer and then we use that buffer's length as the amount of data that we're expecting to read. So first we read into the buffer here, so that reads n number of bytes. And after that we read the hash that is at the end of every single command. The mount always responds with a hash after doing whatever, to make sure that everything's okay. Cool. So connecting those all together, we go back to our PDF document and we look up the goto command. So that is sending an R followed by the right ascension and declination in hexadecimal. And here we're doing that. So we format a string with the letter R, followed by the right ascension, a comma, and the declination. Then we write that, send that to the mount, and then we read back that single hash. So our read method from before then reads zero payload bytes and then reads that hash for us. Cool. Getting data from the mount is a very similar system. We send an E and then the buffer size that we expect is 17 bytes. So that's the 8 bytes of the right ascension in hexadecimal, followed by a comma, followed by the 8 bytes of the declination.
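A hedged sketch of what this can look like with the serialport crate. The exact builder API depends on the crate version, the command format just follows the protocol as described above, and the error handling is simplified:

use std::io::{Read, Write};
use std::time::Duration;

use serialport::SerialPort;

type Result<T> = std::result::Result<T, Box<dyn std::error::Error>>;

fn open_mount(path: &str) -> Result<Box<dyn SerialPort>> {
    // 9600 bits per second, three second timeout, as in the spec sheet.
    let port = serialport::new(path, 9_600)
        .timeout(Duration::from_secs(3))
        .open()?;
    Ok(port)
}

// Send a command, then read `expected` payload bytes plus the trailing '#'.
fn send(port: &mut dyn SerialPort, cmd: &[u8], expected: usize) -> Result<Vec<u8>> {
    port.write_all(cmd)?;
    let mut payload = vec![0u8; expected];
    port.read_exact(&mut payload)?;
    let mut terminator = [0u8; 1];
    port.read_exact(&mut terminator)?;
    if terminator[0] != b'#' {
        return Err("mount did not acknowledge with '#'".into());
    }
    Ok(payload)
}

fn goto(port: &mut dyn SerialPort, ra: u32, dec: u32) -> Result<()> {
    // "R<ra hex>,<dec hex>" per the hand controller protocol described above.
    let cmd = format!("R{:08X},{:08X}", ra, dec);
    send(port, cmd.as_bytes(), 0)?; // zero payload bytes, then the '#'
    Ok(())
}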
Cool. Then the rest of this method is parsing to grab those two hexadecimal numbers out of that blob of bytes. Awesome. So now we're able to point the telescope wherever we want it to go. That's awesome. So the next step is to look at the camera. And so again we look up our camera's manufacturer, look up some specification PDFs, and we stumble upon this PDF and we start reading through it. And it turns out the way that we interact with this camera is a C API. So the vendor of this camera provides this PDF as well as a C library that we link to with our Rust program. This function opens up the camera, so it initializes the USB ports and everything, and then it returns a handle. And what that handle is, is that we later give this handle back to the API and it says, oh, you meant that camera that you previously opened up. So we're able to interact with a single camera based on this handle. Then how we actually get data off of this camera is this function. So it gives us the width, height, bit depth, et cetera. And then the final parameter is the actual bytes of data that we want to get. And so the C API fills in those bytes and then we have our image. So fantastic. Now we want to write this in Rust. Thankfully, the vendors also provide a C header. And so what we could do is use something like bindgen to convert the C header into Rust. But instead, I've opted to just handwrite all these functions, because there's only a dozen or so of them and it was easier to handwrite these functions. So for example, you can see that open function that we mentioned earlier. At the top here, we're linking the library that was provided to us, and so that pulls in that code that was provided. So how do we actually use this? C obviously expects null terminated strings, so we use the CString type to be able to pass in the string to this function. And then we get back our handle. And so then we do some error checking to see if that handle is null, and then we have our camera open. So that's great. So now we want to get some data off of that camera, so it gives us the width, height, et cetera, and then we have a buffer for that C API to fill in with that data. Something that I found really useful when working with C libraries is I have this check function, because C libraries usually return integers as error codes. And so I have this check function that takes that integer, compares it against the error codes, sees if it's a success or not, and then returns a Rust Result. And that allows us to use that question mark operator. And that's really, really nice and ergonomic to be able to interact with these C libraries. Cool. So now we're able to point the telescope wherever we want to, we're able to get image data off of our camera, and that means we have all that we need to create beautiful images. So this is an image that I took of the Eagle Nebula, M16. At the center you can see three little nubs; those are the Pillars of Creation, the very famous Hubble image. And I was really, really excited to say that I actually captured this with my own amateur telescope, and I saw a thing that Hubble saw. And it was super exciting for me. So yay. So cool. The trouble is that just interacting with the hardware isn't enough. We also have to slap a UI on top of this. So this is a pretty garbage UI. I just slapped it together. But whatever. It has all the data that I need. Unfortunately, this isn't actually live data. This is just a simulation that I was running with my program.
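Here is a rough, purely illustrative sketch of what such hand-written bindings and the check helper can look like. The function names, signatures, and library name below are placeholders, not the actual vendor SDK:

use std::ffi::CString;
use std::os::raw::{c_char, c_int, c_void};

// Placeholder declarations standing in for the vendor's real C API.
#[link(name = "vendor_camera_sdk")] // hypothetical library name
extern "C" {
    fn CameraOpen(id: *const c_char) -> *mut c_void;
    fn CameraGetImage(handle: *mut c_void, data: *mut u8, data_len: c_int) -> c_int;
}

#[derive(Debug)]
struct CameraError(i32);

// Convert C-style integer error codes into a Rust Result,
// so calling code can use the `?` operator.
fn check(code: c_int) -> Result<(), CameraError> {
    if code == 0 {
        Ok(())
    } else {
        Err(CameraError(code))
    }
}

fn open_camera(id: &str) -> Result<*mut c_void, CameraError> {
    let c_id = CString::new(id).expect("id contained a NUL byte");
    // Safety: the C function only reads the NUL-terminated string.
    let handle = unsafe { CameraOpen(c_id.as_ptr()) };
    if handle.is_null() {
        Err(CameraError(-1))
    } else {
        Ok(handle)
    }
}

fn get_image(handle: *mut c_void, buffer: &mut [u8]) -> Result<(), CameraError> {
    // Safety: the buffer outlives the call and its length is passed along.
    check(unsafe { CameraGetImage(handle, buffer.as_mut_ptr(), buffer.len() as c_int) })
}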
And that's because at midnight, Stockholm looks like this because the sun is so, like, doesn't get below the horizon at all. So we can't actually do any astronomy right now to get any live data screenshots. But whatever. So the fact that we're putting a UI on top of these hardware functions is kind of problematic. Because for example, downloading the data off of the camera can take a couple seconds. And we don't want to lock up our UI thread for a couple seconds because that would be a really bad user experience, even if it's just me using the program. So we want to have a thread dedicated to doing the camera work or doing the mount interaction work. And then we have our UI thread. And so I want to share some experience that I had working with hardware and what patterns worked really well when working with, like, blocking hardware and stuff like that. So this is a pattern that I really like. So we have our dedicated mount thread here that's just spinning in a loop. And then we have a MPSC channel that instead of sending data over, we send lambdas or delegates. And then we call that delegate with our local mount. And so that means that this mount can only be accessed by this thread. But the fact that we're sending over lambdas means that we can write really nice shim layers like this, that if we want to slew the or go to the mount to where we want to, we just need to write the shim. Then the UI thread can call this method. It just sends this lambda over the MPSC channel, which doesn't block at all, and returns immediately, freeing up the UI thread to do whatever it wants. Then when the mount thread can get around to it, then it actually does that slew operation, calls this lambda with the mount that is a thread local. And Rust guarantees that this mount will never escape that thread. It's really, really nice to be able to have those guarantees in this Rust world. Unfortunately, the camera is a little bit more complicated, and I do have an enum instead of sending over lambdas. This is because the camera is a little bit more complicated. So here we're setting a control, and the control in this sense means something like the camera's gain or the exposure or stuff like that. And it turns out that the camera hardware really doesn't like it when you set the gain in the middle of an exposure, like things crash and stuff. And I just figured this out by using the camera, being in the field, doing some testing. And I figured out that you want to cancel the camera before setting any control. So cancel the current exposure. And so now we have this method. We have to make it a little more complicated. And so now we don't use a lambda anymore. We use an enum. So, cool. So now we have all this flexibility to do whatever we want. So that was some threading talk, and now I wanted to talk about some of Rust's performance, because it's really important. So if you see in this image, you can barely, barely see the Elfin's trunk nebula that I was imaging here. It's a little faint, but hopefully you can see that. But this image isn't actually what's coming off the camera. What's coming off the camera looks like this. No idea if you can actually see what's going on in this image, but it's basically a black screen with a few tiny bright white dots. And the reason for this is the camera has a 16-bit depth. 12 bits actually, but it returns it in the data of 16 bits. So that means it has a very high dynamic range. 
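A small self-contained sketch of that pattern, with a stand-in Mount type instead of the real hardware code:

use std::sync::mpsc::{channel, Sender};
use std::thread;

// Stand-in for the real mount driver.
struct Mount;

impl Mount {
    fn slew(&mut self, ra: f64, dec: f64) {
        println!("slewing to RA {ra}, Dec {dec}");
    }
}

// Commands are boxed closures that get exclusive access to the mount.
type MountCommand = Box<dyn FnOnce(&mut Mount) + Send>;

struct MountHandle {
    sender: Sender<MountCommand>,
}

impl MountHandle {
    fn spawn() -> Self {
        let (sender, receiver) = channel::<MountCommand>();
        thread::spawn(move || {
            // The mount lives on this thread only; nothing else can touch it.
            let mut mount = Mount;
            for command in receiver {
                command(&mut mount);
            }
        });
        MountHandle { sender }
    }

    // Non-blocking shim the UI thread can call; it just queues the closure.
    fn slew(&self, ra: f64, dec: f64) {
        let _ = self.sender.send(Box::new(move |mount: &mut Mount| mount.slew(ra, dec)));
    }
}

fn main() {
    let mount = MountHandle::spawn();
    mount.slew(83.8, -5.4); // returns immediately; the mount thread does the work
    thread::sleep(std::time::Duration::from_millis(100));
}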
And if we want to remap that to the 8 bits that our display can actually display, then we're only going to see the very, very brightest things, which in this case are tiny little dots of stars that are showing up, and we can't see the nebula at all. So that means we have no idea if the nebula is in frame or not. We have no idea how good the quality is. So what we want to do is: we save all of the raw data itself, but when we're previewing this image in the app, we want to remap this data to be able to see all the interesting parts of the image and not just have a black screen with a few white dots. We do that by computing some statistics about the image first. And so here we compute the mean and standard deviation of the image. These look like really, really simple functions. You just loop through the data, add them all up, and then computing the standard deviation is really, really simple. The trouble is that these images are not small. They're 32 megabytes big. And when I implemented this in C#, it took like a couple seconds to churn through all this data. And that's a problem when you're doing things like planetary astronomy, because in planetary imaging an exposure is like 10 milliseconds, and you want to collect as many exposures as you can. So you want to have these previews come in immediately. You want to process as quickly as you can. And Rust absolutely blazes through these functions, and it's really a joy to be able to have these previews just snap in as soon as they're available. So cool. So what that processing actually is doing is saying, OK, we're assuming that the brightness distribution of our image is Gaussian, which it isn't, but whatever, close enough. And so we want to say, OK, let's make negative one sigma on this image be black, and positive one sigma be white, and clip everything outside of that, and just see the interesting data that's in the middle of this bell curve. But we also want to be able to shift it a little bit, because maybe the interesting data is on the slightly high side of the bell curve or whatever. So we have these two parameters: our sigma level, for how big we want our range to be, and then an offset for saying, like, hey, we want the mean to be 20% bright instead of 50% bright or something like that. So we have this hunk of math, because I want these shifts to be a straight remap that's a linear equation, just y equals mx plus b, and that's really, really fast to do. So on the top, we convert our sigma and our mean offset into an mx plus b equation, so we have a scale and an offset. Then at the bottom, we just do a multiply and an add, which is really, really fast. So that's really cool. So then, when we want to adjust our sigma or mean location parameters to adjust our view, we don't actually have to modify the image at all. It's still in GPU memory, and we just adjust this uniform to be able to display it however we like. So that means we have real-time adjustments of saying, like, oh, I want to see the bright parts of this image, or I want to see all the dark parts of this image, and it's super snappy and great. And the reason that we don't have to modify it is that, apparently, OpenGL supports 16-bit grayscale textures, which is awesome, so we can just upload the raw data to the GPU and have that all work. So that's super fantastic. So now we have a great application that can preview all of our astronomy images, save them all, and image whatever we would like to in the sky.
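As a rough illustration of the statistics and the y = mx + b remap described here (a simplified sketch; the real project computes this over the full 16-bit frame and applies the scale and offset in a shader):

// Compute mean and standard deviation over 16-bit image data.
fn mean_and_stddev(data: &[u16]) -> (f64, f64) {
    let n = data.len() as f64;
    let mean = data.iter().map(|&v| f64::from(v)).sum::<f64>() / n;
    let variance = data
        .iter()
        .map(|&v| {
            let d = f64::from(v) - mean;
            d * d
        })
        .sum::<f64>()
        / n;
    (mean, variance.sqrt())
}

// Turn "show mean plus/minus sigma standard deviations, with the mean landing
// at `mean_location` brightness" into a scale (m) and offset (b).
fn stretch_params(mean: f64, stddev: f64, sigma: f64, mean_location: f64) -> (f64, f64) {
    let scale = 1.0 / (2.0 * sigma * stddev);
    let offset = mean_location - mean * scale;
    (scale, offset)
}

// Per-pixel work is just a multiply and an add, clamped to the display range.
fn apply(value: u16, scale: f64, offset: f64) -> f64 {
    (f64::from(value) * scale + offset).clamp(0.0, 1.0)
}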
Great. So yeah, that's my presentation. All the code that I talked about is all on Scopey on GitHub. Here's my Twitter. My email is just wildcard.capurey.com, whatever creative stuff you want to put in there. And then if you want to see more space picks that I've taken, capurey.com has all of my stuff. Thank you very much. All these links will be in Discord or YouTube hopefully wherever you're seeing this. So yeah, thank you.
|
Dive into hardware APIs as we explore how the hardware of an 8-inch aperture telescope can be controlled with Rust. In this talk, we'll explore using serial ports to drive complex motors and machinery, and using a native C API to download images from the camera and control the dozens of settings it has. As a bonus, learn some basics of astrophotography and see some pretty pictures I've taken with my telescope! All code is shared on github.
|
10.5446/51968 (DOI)
|
Well, hello and thank you. First of all, Alex has not told you the full story. This is now a completely valid Rust program and you actually don't need to program anything anymore. You can just have everything generated. We just lost our jobs. So who am I? I'm Florian; my Twitter and company links are on this slide. I'm here for Mozilla, actually. Mozilla sent me here as a part of the TechSpeaker program. And this is actually my second time on this stage. I was here before at RubyConf. I was in the Ruby community for 10 years; now I'm five years in the Rust community, something like that. And I met a lot of familiar faces. I'm happy to be here again. I started Rust in 2015, mostly out of personal curiosity. And because I'm not a good solo learner, I immediately started the Berlin user group, which is one of the biggest and most active around the globe. I'm organizing two Rust conferences, RustFest and OxidizeConf. I can't put enough emphasis on how much of a great job this is. And I'm a project member since 2015, mostly in the community team, and now also moving to other spaces. And I can definitely agree with Niko that the Rust project is very, very easy to join in all positions. If you have something that is specifically of your interest, it's very, very easy to get in touch with people and get involved. And this is also my first talk at a Rust conference. I've organized them; I have never given a talk at a Rust conference. And what I want to do is I want to make you competent at reading and writing where clauses, because they are important but have some subtleties to understand. So just as a bit of background, what is a where clause? Let's say we have a pretty simple data type. We have a point. Point has an x and a y value. And we have a function that's called print_point that just takes a reference to that point and prints it out. Nothing important. The most important thing here is this Point implements Debug, because our println! macro requires things that we print out using the colon-question-mark syntax to be Debug. But this version of this function just takes a bare point. So what if I have squares? So I have a square. It has an x and y position and a width. That's enough to describe a square. And I can write a print_square function that takes a square and just does exactly the same thing. That's very repetitive. And computers are very good at repetitive things; we are not as good as computers here. And the unifying thing between these two is not that they're shapes. For the printing function, what we need is actually that both of them implement the debugging representation, by using the derive Debug statement up there. So we can rewrite that function as fn print, which takes a borrow of a shape S and prints it out. But the important thing is we need to indicate to the compiler in some way that we only accept types that we can actually turn into this debug representation, and that we do using the where clause. So we put there: where S is Debug. The thing under number three is called the trait bound, and the thing under number one is called a type variable. So now we can create both a point and a square and print both of them out using print. The important part here is the first print statement internally calls a function that's called something like print point, and the second something like print square. The important thing here is those are actually different functions. The compiler just generates them for us.
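A small sketch reconstructing the example described here (field types and values are assumptions, since the slides aren't visible in the transcript):

use std::fmt::Debug;

#[derive(Debug)]
struct Point { x: i32, y: i32 }

#[derive(Debug)]
struct Square { x: i32, y: i32, width: i32 }

// One generic function instead of print_point and print_square.
// The where clause says: any S is fine, as long as it is Debug.
fn print<S>(s: &S)
where
    S: Debug,
{
    println!("{:?}", s);
}

fn main() {
    print(&Point { x: 1, y: 2 });            // compiles a Point version of print
    print(&Square { x: 0, y: 0, width: 3 }); // ... and a Square version
}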
Logically speaking, there's an infinite number of those functions, one for every type that is Debug. But in the case of this program, we actually only compile two. So from Rust's point of view, those functions will be compiled on need. We are actually using print for points, we are actually using print for squares; these two are needed, and these two are going to be generated. There are other places where the where clause can be stated. For example, we can create a struct that is generic, that inside has two somethings, preferably numbers if we are representing a point. And we can also say P needs to be Debug. Most of the stuff that's important you can totally exercise on functions. A bit more detail here. So we have a function that has two generics, T and E, and I can express what I want from them. First of all, what the where clause gives me over the other shorthand syntax is that I can split it up, so I can state bounds multiple times. I can say I want T to implement a trait called Trait and a trait called OtherTrait, or I could have another type E that implements Trait plus OtherTrait. So I can do both of these. It's functionally the same, but for ordering your code, it just helps a lot. The important thing in the where clause is that the things on the left are types. What this says is: I have a function, and for every pair of types T and E that fulfill the bounds to the right, I can compile this function for those two exact types. And because the left-hand side is an actual type, this, for example, works: I can say I have a function into_string that takes a T, where String, which is the standard string of the Rust library, implements From of T. So I can say I take any kind of type that can be turned into a string in that way. Where clauses are important for a couple of reasons. For example, they are the way we can constrain the type that an iterator returns to us. So again, using just the bare Debug trait and printing stuff out to the console, I can have a function called debug_iter that takes any kind of iterator. The Iterator trait is again the standard iterator, but it has an associated type, which is the item that it's going to return. I don't know what the item is, but using the where clause, I can at least say the item must be in the set of types that do implement Debug and that can be printed to the console. And to my knowledge, that was also one of the arguments why the where clause was actually introduced: to be able to express exactly this. So there's a couple of patterns on how we can work with that. When I do generic programming, I'm always talking about this idea of constraints. For example, if we take some of the standard library types, for example Result: Result is either "this worked", which gives me a result back, that's the Ok variant of the Result enum, or it gives me an error back. On its own, that doesn't give me a lot. And if you have a look at the standard library implementation of Result, there's a couple of functions defined on this. There's actually a lot of functions defined on this type. And the most basic ones are, for example, here: in the implementation for any kind of Result, for any value or error, I have two functions, is_ok and is_err, that just tell me which of the variants it was. That needs no knowledge about what T and E actually are. But if I want to call unwrap, for example, if I have result.unwrap(), I'm going to have a panic message that includes the debug representation of the error that I had.
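A sketch of the two shapes of where clause mentioned here: constraining an iterator's associated Item type, and putting a concrete type on the left-hand side of a bound:

use std::fmt::Debug;

// Constrain the iterator's associated Item type through the where clause.
fn debug_iter<I>(iter: I)
where
    I: Iterator,
    I::Item: Debug,
{
    for item in iter {
        println!("{:?}", item);
    }
}

// The left-hand side of a bound can be a concrete type: accept any T that
// the standard String knows how to be built from.
fn into_string<T>(t: T) -> String
where
    String: From<T>,
{
    String::from(t)
}

fn main() {
    debug_iter(vec![1, 2, 3].into_iter());
    println!("{}", into_string("hello"));
}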
And there, I have an implementation that says: for T and E, Result of T and E, where E is Debug, and then there's a couple of functions that rely on the error actually being debuggable. The other way around, there's an implementation saying that if T, the value when everything worked, implements Default, so it has a default value, then Result gains a function that's called unwrap_or_default, which doesn't panic, but instead in case of an error gives me the default value back. So we are gradually constraining the Result type more, and the more we know about it, the more functions get unlocked on it. And gradually unlocking features based on these kinds of bounds is a common API strategy in Rust, and you can see that all through the standard library. Let's talk about another piece of the standard library, which is the threading API. I have an example here that does, again, something rather useless. I have a vector, and I spawn a thread which takes that vector, counts the elements of it, and gives me the result back. The threading API is generic because I can push anything inside and get anything back. So coming up with a first attempt at writing that on my own, you could come up with something like this. You have the thread spawn function. It takes an F and a T type: it takes in the F and returns me the T. Why is it F? It takes a closure, which includes all the data that my closure closes over, in this case the vector, and returns me whatever that closure has actually returned via the join handle. The join handle will give me information on whether that thread actually ran to completion, whether there was an error or whatever, and it gives me the result back. The problem is: what fits these slots? What can I put into that F and that T? The problem there is, if I'm spawning a thread, a classic problem is that all the data that is put on that thread should preferably not reference anything out of the context that the thread was spawned in. Why? Because both are going to run in parallel, and the data in the first part might be removed, changed, whatever, and is independent of the second. So I want to have this idea that Niko introduced this morning of actually giving complete packages over and also getting complete packages back, and forcing the programmer to actually move everything over and not half of it. The problem is, if I would write the API like this, that would actually be possible, because I have no other constraints on F and T than that they are generic types. They might end up being references; they might be any kind of borrowed type. So what I can do is use the where clause here to express additional things, a way of expressing what I just said: you can actually send that stuff over to a thread. Rust has a marker for that. Rust has a marker for everything that is actually allowed to do that traveling; that's called Send. And the other thing is I can bound this value with a special lifetime that is called tick static. And the bound, any type plus 'static, essentially means it must own all its data, so you can give up complete ownership: the party that spawns the thread must give up complete ownership of all the data, of all the payload it puts on the thread. And after the thread is done, we want to remove it and throw it away. We also need to have the ability to bring everything back that you want to bring. So this Send plus 'static bound expresses this quite neatly.
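A self-contained sketch of such a signature and the vector-counting example, in the spirit of std::thread::spawn (the wrapper function name is made up for illustration):

use std::thread;

// The closure and its result must own their data ('static) and be safe to
// move across threads (Send).
fn spawn_counting<F, T>(f: F) -> thread::JoinHandle<T>
where
    F: FnOnce() -> T + Send + 'static,
    T: Send + 'static,
{
    thread::spawn(f)
}

fn main() {
    let data = vec![1, 2, 3, 4];
    // `move` hands the whole vector over to the thread; nothing is borrowed.
    let handle = spawn_counting(move || data.len());
    let count = handle.join().expect("thread panicked");
    println!("counted {count} elements");
}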
There was an issue in my teaching, like two or three years ago, where people felt like expressing plus 'static is cheating, because you don't take part in the references game. It's actually a meaningful statement. If you don't want to deal with references, if you don't want to deal with borrows, just express 'static; that's probably a very valid solution to your problem, and just deal with ownership. Another problem that might arise when you start working with where clauses and start trying out bounds is that we often have wrapper types. And wrapper types sometimes express some kind of expectation about what you put inside. Here again, I have a wrapper, and again, just using the Debug trait as an example trait. Now I have another function that takes that wrapper and just unwraps it and takes the inner part. But because I have expressed that the wrapper can only have types with a certain bound inside, if I want to write take_inner, then to fulfill that I also always have to constrain the generic type, the generic variable, of take_inner as Debug. The function itself doesn't actually use it, though. So that's a problem. I want to write this function; it does nothing more than taking that structure and taking out whatever is inside. But I do have to express an additional bound just to basically reiterate that the wrapper already expects the inner part to be Debug and nothing else is allowed. So what can I do against that? There's a pattern in which you can write this wrapper in a way that it actually itself can contain any type, but you can effectively only create that wrapper in a fashion where the inner type is Debug. Just to zoom in here: the way it works is you don't allow users to directly construct the type, and you're only giving them a constructor, and that constructor carries the bound that I want to express. What you can then do is, later, if you actually want to use this bound, for example by putting an inspect method on that wrapper, you just need to restate it. But because the compiler actually follows all those variables through the whole program, it will still know the thing you put into the wrapper was a type that is Debug, and no one can effectively create any of those wrappers that don't fulfill this bound at all. Or I can refactor this by actually putting that bound on the implementation block instead of on all the functions themselves; that's for you to decide how you want to do this. But this allows me to write the take_inner function in a way where I don't have to express this bound, because at that point I don't care about it. It's absolutely not needed.
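A sketch of the pattern as I understand it from the description (the names are assumptions):

use std::fmt::Debug;

// The field is private, so the only way to build a Wrapper is through `new`,
// which carries the Debug bound. The struct itself stays unbounded.
pub struct Wrapper<T> {
    inner: T,
}

impl<T> Wrapper<T> {
    pub fn new(inner: T) -> Wrapper<T>
    where
        T: Debug,
    {
        Wrapper { inner }
    }

    // No bound needed here: taking the value out doesn't use Debug at all.
    pub fn take_inner(self) -> T {
        self.inner
    }

    // Restate the bound only where it is actually used.
    pub fn inspect(&self)
    where
        T: Debug,
    {
        println!("{:?}", self.inner);
    }
}

fn main() {
    let wrapped = Wrapper::new(vec![1, 2, 3]);
    wrapped.inspect();
    let inner = wrapped.take_inner();
    println!("{} elements", inner.len());
}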
But for library authors, for giving flexibility and for communicating intent to the outside, this is very useful. So always be aware where you are and whether that's actually needed. And in the end, you might end up writing terrible clauses like this. For one of my current projects, I'm going to refactor that tomorrow. It's literally work in progress. Some advanced examples. Traits and bounds can be used to express relationships between types. This becomes very useful. This is one of my favorite Twitter accounts. It's called Happy Automata or vaguely reassuring state machines. It generates state machines like this. And I would like to write one of those state machines myself. And state machines usually have states. Some of those states are terminals. I skipped having state in this example just because that would be wrote and would just make the example bigger. But what I can have is I can write, for example, two traits. The first one being transition to S, another state. Express in the work laws, S needs to be a state. Self also needs to be a state. So I can make a statement about the type that this trait is going to be implemented on and give that a function called transition. And transition will take the current state, actually owning and thereby destroying it, and return the next state. And another trait, terminate, that can only be called in terminal states that just removes the state machine and cause it done. And I can create myself three states. The state machine that I'm creating here is basically there's a start, there's a mid state that I can actually loop into again and then there's an end. So I have start, loop, and stop. I implement state for them. This is actually an empty trait. It's just a marker to make the compiler know these types are states. There's no, again, no functionality from that. And stop is actually the terminal state. That's where I'm ending. And then I can implement transition to loop from start. I can go from loop to start, transition to loop for loop. So I can go back into it again, transition to end for loop. Sorry, there's an error on the slide. And implement terminate for end so I can actually call. So I can actually stop. That means I cannot terminate that state machine if I have not ended up in the end state. So I need to make sure that people actually, that the users of the state machine actually follow through and take this process to the end. The code here is simple. This is one pattern how to write this. There's a whole blog post on this by a community member called Hoverbear. And the setup and the programming of this is a little involved, but the usage is rather straightforward. The reason why I have to type the left side, so I need to actually express what the next state is going to be is exactly because I have this loop state where I can either loop again or go to the end state. And this is something where I actually have to tell the type checker I intend to be this, this to be the next state. The two comments in the middle, those wouldn't compile. So if I would try to terminate while I'm still in the loop state, that doesn't work because loop doesn't implement terminate. And if I would try to transition from the loop state again to start, that doesn't work. That's not defined. Second example that comes also out of my work is how about like talking about what's stored in databases. Let's say I have a storage trade. My storage can be queried for example for a model. That takes the storage, both the storage, but I also give it an ID. 
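A sketch of the typestate machine just described (state and trait names are mine, not the slides'): transitions consume the current state and return the next one, and only the terminal state implements the terminate trait.

```rust
// Marker trait: these types are states. No functionality attached.
trait State {}

trait TransitionTo<S: State>: State {
    fn transition(self) -> S;
}

trait Terminate: State {
    fn terminate(self);
}

struct Start;
struct Looping;
struct End;
impl State for Start {}
impl State for Looping {}
impl State for End {}

impl TransitionTo<Looping> for Start {
    fn transition(self) -> Looping { Looping }
}
impl TransitionTo<Looping> for Looping {
    fn transition(self) -> Looping { Looping }
}
impl TransitionTo<End> for Looping {
    fn transition(self) -> End { End }
}
impl Terminate for End {
    fn terminate(self) {}
}

fn run() {
    // The left-hand type annotations pick which transition we mean.
    let looping: Looping = Start.transition();
    let looping: Looping = looping.transition(); // loop once more
    let end: End = looping.transition();
    end.terminate();
    // Start.terminate();                    // would not compile: Start is not terminal
    // let _: Start = Looping.transition();  // would not compile: no such transition
}
```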
So it takes the storage and reads out the model under this ID. The problem with this definition is I could try to get anything out of that. I could try to read strings, vectors, mutixes, whatever, because every type is valid to fill that variable. And this is what constraining gives me, constrained to the things that are meaningful. And I can define a code trade here that says stores model. And I can also constrain the stores model trade to it can only be implemented on storages. Then I can extend my function with where self actually stores that model. And now I've defined a function that communicates. You can try to query models out of this, but only if it actually stores them. And you force the implementer to actually declare what's stored here, to declare that to the compiler. So for example I can have a users database, this user's database implements storage by what do I know, an SQLite, back database, source, grass or whatever. I have a user model, and for example another avatar model, so they can have users and the avatars in the same database. And then I implement stores user and implements stores avatar for users database. And this becomes pretty natural. So having, you can only query things where the storage actually stores. And in the end I have things, I can write the program that, well down looks like this. I can have, I can connect to my database and I can try to query it. I actually have to state at this point what model I actually want to query out of it. So here the type checker won't help me because I've actually said I want to have multiple options and I need to decide. So I'm saying query a user out of that database. But if I would try to query a string out of it or any kind of other type it would tell me, no I actually don't store this. The error message in this case would be that the storage actually doesn't implement, the function exists but it doesn't implement the right bounds. So the conclusion out of all of this. Getting comfortable with all the stuff that ware clauses give you is important. Take it slow though. So don't start writing big ones just right out of the door. Exactly picking which constraints to need where is key. And spending some time actually figuring out what you need. Potentially over constraining first later maybe removing some of the constraints may help. There's also an API concern around this. If you further constrain a ware clause you are breaking your previously committed API. If you're widening it, if you're allowing more people to call it or this to be called with more types you're not breaking your external API. And there are creative patterns of interplay with which you can start declaring to the compiler how your systems work to be found in all that. Yeah. Thank you. That's it. Thank you.
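A sketch of the storage pattern described just above; the concrete models and the in-memory "database" are invented for illustration, not taken from the speaker's project.

```rust
trait Storage {
    // You can only query models that this storage declares it stores.
    fn query<M>(&self, id: u32) -> Option<M>
    where
        Self: StoresModel<M>,
    {
        self.load(id)
    }
}

trait StoresModel<M>: Storage {
    fn load(&self, id: u32) -> Option<M>;
}

struct User { id: u32, name: String }
struct Avatar { id: u32, url: String }

struct UsersDatabase;
impl Storage for UsersDatabase {}
impl StoresModel<User> for UsersDatabase {
    fn load(&self, id: u32) -> Option<User> {
        Some(User { id, name: String::from("demo") })
    }
}
impl StoresModel<Avatar> for UsersDatabase {
    fn load(&self, id: u32) -> Option<Avatar> {
        Some(Avatar { id, url: String::from("https://example.com/a.png") })
    }
}

fn demo(db: &UsersDatabase) {
    let user: Option<User> = db.query(1);   // fine: UsersDatabase stores User
    // let s: Option<String> = db.query(1); // error: StoresModel<String> not implemented
    let _ = user;
}
```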
|
Rust expresses trait bounds using the where clause. It is an essential tool for all generic Rust programming. Yet, many Rust programmers don't know about the full expressiveness of it! This talk guides us through most of the features and shows us how to creatively use where clauses to both keep our code clean and flexible, but also sound and hard to use in a wrong way. This talk will teach us the basic building blocks to not get lost in a forest of constraints. Follow us on Twitter: https://twitter.com/rustlatamconf
|
10.5446/52174 (DOI)
|
So, hi, people. I'm here to talk about interop with Android, iOS, and WebAssembly in the same project. So, basically, a Rust library that you can compile to all of those three platforms, and on each of them, you basically have an app that consumes those libraries. That's the idea. Oh, yeah. So, interoperability. When people talk about this, usually they're talking about FFI. So, does anybody know what FFI is? Like, raise your hands? That's a lot of people. So, FFI stands for foreign function interface. So, basically, when, like, a programming language has, like, it's standard library, it's syntax, a lot of things, and one of the things that the most popular ones have is a way to talk to the external world. So, basically, let's say you have created those awesome Rust functions. I think, actually, it's not very easy to see. So, let's say you have created those awesome Rust functions, and you want to use them, and you need to create, I don't know, Node.js app or whatever, and you want to use them. So, you can basically go to use, create your program and consume them using the FFI. So, Rust has a way to externalize things and also consume things from the external world. Also, other languages have that feature. So, that's the idea. But why use this? You can use for various reasons. One of them is performance. So, let's say you're doing a Python program. Like, people who do data science, they usually use some common libraries to do some things. And basically, they use some Python libraries, like, to do common things to this domain. And some of those libraries are actually written in C. And basically, they do Python wrappers around the C code and get some C performance out of that. So, that's pretty cool. You can also use existing code or libraries. So, let's say there's this new language and you can't create a GUI application for it. So, there's no libraries for it. So, you can basically get create a wrapper around, I don't know, some CGTK library using FFI. So, basically, you talk to a library externally and you can do that on your cool new language. You can also do FFI to actually concentrate common logic in one library. So, basically, at my company, Pagarmie, it's a payments company in Brazil. We're actually, we have an Android app, an iOS app, a web app and a.NET app that all of them need to talk to that payment terminal. That thing you put your credit card and lose money, you know? So, basically, we have a C library that actually builds the commands for that machine, for that payment terminal. But it's actually written in C. We're rewriting it to use Rust for various reasons. I won't get into that here. But you can basically concentrate this common logic in one library and consume it in various places. But how to do this in Rust? So, Rust has various features to do this or tools, whatever. One of them is the external function declaration. So, when you put this external thing before the function declaration, it basically means you want to use it externally. So, this C string basically means you are using the C ABI. You can use other ABI's. But by using this, because most languages have a way to talk with C, when you externalize using the C way, they can read your Rust library. So, it's pretty easy to use. Also, there's the, you can create an external block that actually does the opposite. So, you consume external functions on Rust. So, let's say you are operating on iOS and you need to use a random function. There are a lot of options there. 
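A minimal sketch of the two directions just described: exposing a Rust function over the C ABI, and declaring a foreign function (here libc's `rand`, the "random function" example) that Rust calls through FFI.

```rust
// Exported: other languages that speak the C ABI can call this.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

// Imported: only the signature lives here; the implementation comes from the C
// library we link against.
extern "C" {
    fn rand() -> i32;
}

fn call_into_c() -> i32 {
    // Calling a foreign function is unsafe: the compiler can't check the other side.
    unsafe { rand() }
}
```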
And you just create the signature because the implementation is elsewhere. That's the idea. And there's the no-mango attribute that basically does not mango the function name. So, that helps other compilers that are not Rust compiler to actually understand the function name. And that's the idea. There's also RAPC, which basically layouts in memory your structs or enums in a C way. That helps other languages to actually understand it. But this is not necessary. I find it very useful for enums. But you can use opaque pointers. I will get into that later. But this is actually not necessary for a lot of cases. So, okay. So, how is the workflow when you're building a library for an external, like, app or something? You have your Rust library with those external declarations. So, you have a lot of functions there that you want to use externally. You build them to the target you want. So, when you do only cargo build, it actually uses your OS and architecture and everything to build it. But you can use this target flag that helps you to build for other platforms. Then you will get a.so,.a. It depends on the target. This file basically will contain your functions that you want to externalize. And then on your program in another language, you consume those functions via FFI. So, that's the idea. So, for Android, for example, you have your Rust library. You compile for those three targets. That's what people use for Android apps. And then you get on your target folder, three folders, each one for each one of those targets. But you get those.so files in all of them. And these.so files, you link them on your Android app. And you need to create this JNI libs folder to actually link them. But this JNI is basically a part of Java, like in simplified terms, a part of Java that deals with the external role. So, it's Java's FFI. And you can maybe create a Java or Kotlin wrapper around those functions, those external functions. So, let's say you have this construct. It doesn't actually have any data in it, but it could have. It doesn't really matter, for example. But then you need to... So, to consume on Java, you actually need to create a function on JNI's way of understanding things. So, there's the no-mango. So, here's the no-mango attribute, the external function declaration that I've said before. And the name has this awesome name. Basically, that's the way Java can understand your function. So, you start with Java, then the package name. So, cool, com, cool, cool Android project, and then your class, and then a method. So, that basically acts as a class on this package. And then this function receives an environment and a disk. And the environment, basically, it's this object or struct or whatever that you can do things on Java. So, you can basically create objects, you can call methods on classes, you can do all kinds of stuff there. And here, this function basically instantiates this struct, this cool struct, and it creates this cool struct, allocates it in the heap using box, and it returns a pointer. But instead of returning a simple pointer, I convert it to a long type, which is basically just an integer, a big integer. And that way, you can have the memory address on Java, and when you're going to call methods on this struct, you can basically pass the long to other JNI functions that you've created and you convert them back to pointers so you can use them in Rust normally. That's the idea. Okay. So, on Java, you need to create just the signature of that external method. 
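A sketch of the JNI-facing constructor described above, assuming the `jni` crate; the `Java_com_example_CoolClass` path is a placeholder package/class name, and the destroy function is added to show the matching cleanup.

```rust
use jni::objects::JClass;
use jni::sys::jlong;
use jni::JNIEnv;

pub struct CoolStruct;

#[no_mangle]
pub extern "system" fn Java_com_example_CoolClass_create(
    _env: JNIEnv,
    _class: JClass,
) -> jlong {
    // Allocate the struct on the heap and hand its address to Java as a plain long.
    Box::into_raw(Box::new(CoolStruct)) as jlong
}

#[no_mangle]
pub extern "system" fn Java_com_example_CoolClass_destroy(
    _env: JNIEnv,
    _class: JClass,
    ptr: jlong,
) {
    if ptr != 0 {
        // Convert the long back into a Box so Rust frees the memory.
        unsafe { drop(Box::from_raw(ptr as *mut CoolStruct)) };
    }
}
```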
So that's the cool class, that's the package name, the method, the return type, it's all like that one. And you can also, like, I don't know, create a private value that contains that cool struct, a memory address. This is Kotlin, by the way, I did not say that. But, yeah. For iOS, basically, it's almost the same thing, but you get.a files, and there's a tool I recommend using, it's called CargoLipo. You can do the CargoBuild target for everything too, but as from I heard, it's pretty complex to actually make it work for iOS. So just use CargoLipo, it would work perfectly. But instead of creating a class, for example, here, we created this class to actually have the signature on Java's side. For iOS, you need to create a C bridging header. So basically, it's a C header file with a bunch of declarations of your external functions, of your Rust functions, whatever. Okay. So, same struct here, no mangle and external syntax. And instead of returning a long type, because we are not on Java, we return a pointer here. And, like, we just use this box, which just gets the pointer and returns to the other side. However, like, this works because we are using the C, ABI, and like, Swift and Objective C interact well with C. So that's basically just like a normal regular C pointer. And you need to have this bridging header, which has the struct, and the method that returns that struct. So this struct on Rust, it actually didn't have any data, but here I'm doing, like, even if it had, you can just create an Opaq pointer definition. Basically, an Opaq pointer is just void pointer. I don't know if you have programming in C, but basically, it's a pointer that can point to anything. So, yeah. And for WebAssembly, it's honestly the easiest of them, like, to do the interop. There are two libraries that help a lot with that, which are Wazm.BindGen and Wazm.Pack. Thank you, Alex. But basically, Wazm.Pack, it's more like a command line tool that you can build your project. And Wazm.BindGen is more like a library. It actually has a CLI, but I honestly don't use it. So you can build your library, and it will create, instead of the regular target folder, you get a PKG folder that basically will contain your.wazm and.js file. So it actually already creates a wrapper around those external functions on JavaScript. And the cool thing about this PKG folder, it actually, you can create a web app that actually consumes it like it was any PM package. So it's very good for using, like, it's very good. Okay. So the same function, the same type here. But here you can see that we can just return the type itself, like we can just call myStrokeNil just by putting this, importing the Wazm.BindGen attribute. And we need to put on our types and on our functions. So you can basically write regular Rust code, and this attribute will read your code and convert to that dirty box thing. So it will deal with a lot of stuff for you. So I recommend using this library. Yeah. So, okay. But I've shown all of this, but how to do all of those three at the same time on a Rust library? There's one more tool. It's needed to do this, which is the conditional compilation attribute. So there's this thing in Rust called CFT. It's an attribute that you can pass a basically something to evaluate. And if it's true, it will actually compile the line below. Like, for example, here I have a Wazm module that will only be compiled if I'm actually targeting for Wazm32. So if I build for Android, I won't get WebAssembly code. That's like what I want. 
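A sketch of the C-compatible interface that the Swift bridging header would declare (function names are placeholders): Swift only ever sees an opaque pointer, and these two functions own the allocation.

```rust
pub struct CoolStruct;

#[no_mangle]
pub extern "C" fn cool_struct_new() -> *mut CoolStruct {
    // Heap-allocate and hand the raw pointer across the FFI boundary.
    Box::into_raw(Box::new(CoolStruct))
}

#[no_mangle]
pub extern "C" fn cool_struct_free(ptr: *mut CoolStruct) {
    if !ptr.is_null() {
        // Re-box the pointer so dropping it frees the memory on the Rust side.
        unsafe { drop(Box::from_raw(ptr)) };
    }
}

// The matching bridging-header declarations would look roughly like:
//   typedef struct CoolStruct CoolStruct;   // opaque type
//   CoolStruct *cool_struct_new(void);
//   void cool_struct_free(CoolStruct *ptr);
```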
So one way to, like, because Wazm.BindGen, it's kind of intrusive because you have to put on your types and on your external functions, you need to put conditional compilation on your imports. And also there's this CFG ATTR that evaluates an expression like the other one, but actually puts an attribute to the thing below. So that's very useful and makes you not compile Wazm things to iOS or other platforms. And also to actually not import Wazm.BindGen and other platform specific things, you need to put this on your cargo file. It's the same thing, but you use it for your dependencies. So here there is Wazm.BindGen and GSCIS. GSCIS is a library inside of Wazm.BindGen that has a lot of JavaScript types. So like functions, you have U8 int, U8 array, you have a lot of things in there. Okay. So how to structure the REST library? So at my company, we started with this idea that actually it's almost this, what we are doing. On the LibREST file, we put our common modules, like, of course we don't have a common module, but we have a lot of modules that are core REST library. And then we can put, like, create other modules that use that common module and create a public interface for a specific platform. So on this module, it will basically import this and create functions that WebAssembly can understand and things like that. Just the same for iOS and Android. So that's the idea we had. But, and this is the, like, on each of them, you create a public interface for the specific platform you were building for. And then, like, for today, I didn't, like, our projects unfortunately has to be closed at source because of recommendation issues. However, I did a project just for today, which is about the Doom Fire. I don't know if you have played Doom. It's a game. And basically, there's this fire on the menu screen that you can, like, I basically made iOS, Android, and Web app that rendered this on screen. And the logic of doing this is on REST. So the render part is on the platform specific app. So the idea is that basically you will have a vector of pixels, of bytes or numbers, whatever. And each of them represents a number from 0 to 36. 36 is very hot. Zero, it's not that hot or cold. And you basically have a vector like this. Of course, I don't, I don't exploit it as just a pixels variable. I have a struct that has that. But that's the idea. So basically the platform only reads this pixels array or vector, whatever, and reads all of those numbers. And that's the intensity of the fire that it needs to render on screen. That's the idea. I won't get into many details of how it works, but, okay. So basically this is the library. Those are the common modules. And here are the three specific modules. Here's the Web assembly one, like I've shown. And the Android one. But here you can see there's no iOS. Basically for, like, we discovered when we were working on our project at Pagami that basically exporting functions for iOS, for dot net, for some specific platforms, it's very similar. It's basically the same thing. So here I created just a standard FI module that probably can be consumed by other platforms that are not iOS. So that's the idea. This is the iOS one, but probably works for other platforms too. And here the Android one actually uses the standard FI one. 
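A sketch of the conditional-compilation pieces just described; module and type names are placeholders for a Doom-fire-style project.

```rust
// Only pulled in when targeting WebAssembly.
#[cfg(target_arch = "wasm32")]
use wasm_bindgen::prelude::wasm_bindgen;

#[cfg(target_arch = "wasm32")]
mod wasm_interface {
    // wasm-bindgen exports would live here
}

#[cfg(target_os = "android")]
mod android_interface {
    // JNI exports would live here
}

// cfg_attr applies the wasm_bindgen attribute only on wasm32, so every other target
// sees a plain Rust struct and never needs the dependency at all.
#[cfg_attr(target_arch = "wasm32", wasm_bindgen)]
pub struct FireState {
    pixels: Vec<u8>,
}

// In Cargo.toml, the wasm-only dependencies live under a target table, e.g.
//   [target.'cfg(target_arch = "wasm32")'.dependencies]
//   wasm-bindgen = "0.2"
//   js-sys = "0.3"
```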
I don't know if you remember, but I've shown that on the Android one you basically receive long types and convert them to pointers and or return long types that will be the memory address of your structs so that you can call methods and create structs. And these Android modules just convert the things that are pointers to logs or vice versa to using the standard FI module. So here's the library score logic. It's basically a struct with a lot of methods. And that's, like, the idea. Here's the main logic. And here's the thing I've shown about only compile wasm things when it's needed. Okay. And then the WebAssembly interface is very simple. It just, like, creates the struct, call methods on it. Like, you can just receive mutable references, immutable references. It works perfectly. It's very good. And, but here there's one interesting thing. Basically we needed, like, I used callbacks in this project. And there's this function type which is from JSX. That means this is a JavaScript function. And I created a function that basically converts that function, that extern type to a Rust type. Like, something that's not platform specific. Because you don't want to keep on your common modules on your core library things that are platform specific. So this function basically is something like this. It receives the JavaScript function. It returns a box with anything that implements the function trait. And it creates this box and puts a closure in it that calls that JavaScript function. So that way the core Rust library doesn't know what is a JavaScript function. It only knows things that are Rust types. And here's basically how you use it. Of course we don't use this in Rust. We are using on the other side. But here it's basically you call this function and it will give you the array of bytes that you can render on screen. That's the idea. Okay. So for iOS there's the standard FFI module which creates the board and which that structure has shown. Returns the pointer and the function basically receives pointers and call methods on them. But of course do no checks. Like, check that the pointer is not no. Then you can maybe convert to something you can call methods. And then you need to free things. So basically there's this box into RAL from RAL function that receives a pointer and then converts into a box type. So a box type when it's out of scope it will free that memory. So that's how you free things. And the Android interface is just what I've shown before. But it receives like, for example, here this function returns a J long. And basically it calls the standard FFI function that creates a pointer and then returns it as a long type. And then the function is like, every time you want to call method on it you basically receive the integer which represents the memory address and then convert it as a pointer. So that the standard FFI module will do the no checks and call methods on it. Okay. This is the project. Like you can check it later. You can like see, oh, this is weird. This is wrong. This is good. And yeah, that's the project. And that's basically it. So some limitations of doing interop like with Rust and also interop in general. So for example, for WebAssembly at least using the Wazmabind Gen project which is very big and the most used project, you can't use generics or type parameters yet. So you can use them but you can't export any type that uses it. Also, there's no lifetime parameters. So goodbye references. 
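A sketch of the callback shim described a moment ago: the core library only ever sees a boxed Rust closure, never the JavaScript-specific `js_sys::Function` type. This only compiles for the wasm32 target with `wasm-bindgen` and `js-sys` as dependencies; the function name is a placeholder.

```rust
use js_sys::Function;
use wasm_bindgen::JsValue;

pub fn into_rust_callback(js_fn: Function) -> Box<dyn Fn()> {
    Box::new(move || {
        // Invoke the JS function with no arguments; errors are ignored in this sketch.
        let _ = js_fn.call0(&JsValue::NULL);
    })
}
```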
And also for like for Android, one thing about Java that at least like there's no function pointers. So a way to call a callback is basically either you receive an object and call a specific method on it or you can maybe on the Kotlin or Java interface receive a closure, store it and you have a method that calls it but you call that on Rust. That's kind of confusing. But yeah, there's no function pointer so you have to do some things to actually call callbacks. On iOS, you can do callbacks but every time you create a, you pass a callback to C because on iOS it will think you're just doing interop of C. You basically can't use data that's on an object. For example, you can't use, if you're on a class, you can't use anything that's related to self or this. You only can reference things that are either received on the closure or are static data. So that's kind of annoying too. And for all of them, at least not much for WebAssembly but for the others ones, there are very few examples online. Like most of the examples are either you pass a string to the other side and come it back or you add two numbers or you print something. So it's very annoying to find something useful online. There are some references though. I will show them later. But it's very annoying and was hard to do. At least for me and my colleague. And the biggest challenge in my opinion was to actually match types between the host and guest language. You can, like, I literally spend hours trying to, like, okay, I have a function that needs to receive an array of bytes. And then I try a type, I try another type, I try another type. And you can keep hours trying to actually find what types actually match because you don't have examples online. So at least I had this problem. It was very annoying. And some references that help a lot, at least helped me. For WebAssembly, there's this book that basically creates the Conway's game of life. I really recommend it. It actually gets kind of deep on WebAssembly. I recommend it. There's one Android and one iOS tutorial by Mozilla that actually do help you to do all of the linking process and how to build your library. It helped me a lot. But it is a hello world. But you need to do this, like, to start actually doing FFI with Android and iOS. And for JNI, I didn't find many examples using REST. So I found this book that was very helpful. That's for C. But some of the interfaces that JNI has on C, they actually are similar to the REST one. So I recommend this book. And then there are two talks that talk about doing FFI with REST. I recommend them. They are on YouTube. One of them gives a lot of tips and shows, like, do no checks and do things like care, take care because you're dealing with pointers. And yeah. And an overview of REST, of REST to FFI basically shows what I've shown, like, some parts. I recommend watching them. And I would like thanks to some people. Most of them, actually, all of them in this slide are from my company, which is Pagarmi. She, like, Marcella, basically, I don't have a Mac. So to compile to iOS, I basically borrowed a Mac from her, a 2011 Mac book. So thank you. Also to Filipe, basically, he's the guy I'm working with in Pagarmi at this project. Not the Doom Fire one, like our payment terminal project. Those people are actually from a company, too. They, I've presented to them a lot of times, like, trying to, like, practice. And Alan, who is here, he offered to actually, so I could use his computer, but I ended up bringing mine. So thanks. 
And also Pagarmi, which is a company I work for, a lot of people helped me there, like Camila, Marcella, a lot of people there, Susanna. So, and also thanks to those three people. So Cassiano, I don't know him, but he actually has a project of Doom Fire, doing the Doom Fire on Android. Because I don't know Android and iOS. I don't know how to render things on screen. So basically, his project helped me doing the rendering part. Would you look the same for iOS? And Filipe, he has a project that collects a lot of Doom Fire algorithm implementations on various languages, and you can check it out later. On my project, there's a link to it. And that's it. Thank you. Thank you.
|
The talk will show and explain how pace and his colleagues were able to create a library in Rust which had to be compiled to Android, iOS and WASM at the same time.
|
10.5446/52175 (DOI)
|
Hey, everyone. Thanks for having me. Yeah, I'm, my name is Without Boats. As in, yeah, seen Boatess. You can, most people call me Boats, which is actually even more confusing. I am a researcher at Mozilla. I work on Rust. It's my full-time job. It's pretty cool. I think, so this talk is going to be about sort of, it's a feature that I've been working on for about the last year and a half. And before me, other people were working on for, I mean, honestly, since before, the very beginning of Rust, before 1.0. But before that, I just wanted to thank the organizers for having me and for all of their work in putting this conference on. I think, I, maybe someone's already said pointed this out, but I believe this is the first Rust conference outside of either the United States or Europe. And so, as someone who works on Rust, I'm really, like, excited and glad to see our global community, like, thriving and growing. And it's really cool to see all the conferences that are happening this year. Come on to the technical stuff. So, the feature that I've been working on is this thing called Async Await. It's sort of going to be probably the biggest thing that we do in the language this year. We're planning to ship it sometime in the next few months. And it's the solution to this problem that we've been struggling with for a really long time, which is how can we have a zero cost abstraction for asynchronous I.O. in Rust? So, I'm going to explain what zero cost abstraction means in a moment, but first, just to kind of give an overview of the feature. So, Async Await, it's just these two new keywords that we're adding to the language, Async and Await. And so, Async is this modifier that can be applied to, like, functions where now the function, instead of when you call it, it runs all the way through and returns. Instead, it returns immediately and returns this feature that will eventually result in whatever the function would return. And inside of an Async function, you can take the Await operator and apply it to other features, which will pause the function until those features are ready. And so, it's this way of handling asynchronous concurrent operations using these annotations that makes them much easier to write. So, here's a little code sample just to sort of highlight and explain the feature. This is in, like, basically just an adapter on, like, a kind of ORM type of thing, it's handwritten, where you have this getUser method, which takes a string for a username and then returns this, like, user domain object by querying the database for the record for that user. And it does that using Async.io, which means that it's an Async function instead of just a normal function. And so, when you call it, you can Await it. And then, just to walk through the body of this method, you just, the first thing is it creates the SQL query, interpolating the username into, you know, select from user's table. And then, we query the database. And this is where we're actually performing some I.O. So, query also returns a future because it's doing this Async.io. And so, when you query the database, you just add this Await in order to Await for the response. And then, once you get the response, you can parse a user out of it. This user domain object, which is, you know, part of your application. And so, this method is just sort of like a toy example for the talk. 
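A sketch of the `get_user` example just walked through; `Db`, `Response` and `User` are hypothetical stand-ins rather than a real database crate, so the only point is where `async` and `.await` appear.

```rust
struct Db;
struct Response(String);
struct User { name: String }

impl Db {
    async fn query(&self, sql: &str) -> Response {
        Response(sql.to_string()) // pretend the network round-trip happens here
    }
}

impl User {
    fn from_response(resp: &Response) -> User {
        User { name: resp.0.clone() }
    }
}

async fn get_user(db: &Db, username: &str) -> User {
    let sql = format!("SELECT * FROM users WHERE username = '{}'", username);
    // .await pauses this function until the query future resolves.
    let response = db.query(&sql).await;
    User::from_response(&response)
}
```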
But what I wanted to highlight in it is that the only difference really between this and using Blocking.io are these little annotations where you just mark the functions as being Async. And when you call them, you add this Await. And so, it's, you know, relatively little overhead for getting, using Non-Blocking.io instead of Blocking.io. And in particular, in Rust, the really great thing about our implementation that makes me really excited about it is that our Async Await and futures are this zero-cost abstraction. And so, zero-cost abstractions are sort of a really defining feature of Rust. It's one of the things that differentiates us from a lot of other languages is that we really care about when we add new features that they are zero cost. We didn't actually come up with the idea. It's a big thing in C++ also. And so, I think the best explanation is this quote from Bjarne Streestrup, which is that a zero-cost abstraction means that when you don't use it, you don't pay for it. And further, when you do use it, you couldn't hand-code it any better. And so, there's these two aspects to a zero-cost abstraction, which is that the first is that the feature can't affect, can't add cost to people who aren't using the feature. So, we can't add this global cost that will slow down every program in order to enable this feature. And the second is that when you do use the feature, it can't be slower than if you didn't use it and then you feel like, oh, well, I would like to use this really nice feature that makes it easier, but it will make my program slow. And so, I'll just write this thing by hand. It will be a much bigger pain. And so, I'm going to kind of walk through the history of how we've tried to solve async.io and Rust. And some of the steps along the way we had features that failed the zero-cost test on both principles. So, the specific problem that we're trying to solve is async.io. And so, normally, I.O. is blocking. So, when you use I.O., you'll block thread and that will stop your program and then have to be rescheduled by the OS. And the problem with blocking I.O. is it just doesn't really scale when you have been trying to serve a lot of connections from the same program. And so, for these kinds of really high-scale network services, you really need some form of non-blocking or asynchronous I.O. And Rust in particular is supposed to be designed for languages that have these really high performance requirements. You know, it's a systems programming language for people who really care about the computing resources they're using. And so, for Rust to really be successful in the network space, we really need some sort of solution to this asynchronous I.O. problem. But the big problem with async.io is that the way it works is that when you call the I.O. system call, it just returns immediately and then eventually you can continue doing other work. But it's your program's responsibility for figuring out how to, like, schedule, calling back to the task that you had to pause on when you're doing the asynchronous I.O. And so, this makes, you know, writing an async I.O. program much more complex than writing one of these is blocking I.O. And so, a lot of languages that are trying to target these like scalable network services have been trying to come up with solutions for this problem that like take it, make it not the end-users problem, but a part of the language or part of the libraries. 
And so, the first solution that Rust started with was the study of green threads, which have been successful in a lot of languages. And so, green threads basically look like blocking I.O. They look like spawning threads and then just blocking on I.O. And everything looks exactly the same as if you were using the native OS primitives, but they've been designed as part of the language runtime to be optimized for this use case of having these network services which are trying to spawning, you know, thousands, tens of thousands, maybe millions of green threads at the same time. A language I think right now that is like having a lot of success with this model is Go, where they're called Go routines. And so, it's very normal for a Go program to have tens of thousands of these running at the same time because they're very cheap to spawn unlike OS threads. And so, just the advantage of green threads is that the memory overhead when you spawn an OS thread is much higher because you create this huge stack for each OS thread, whereas green threads, the way they normally work is that you will spawn a thread that starts with a very small stack that can grow over time. And so, spawning a bunch of new threads that aren't using a lot of memory yet is much cheaper. And also, a problem with using like the operating system primitives is that you depend on the operating system for scheduling, which means that you have to switch from your program's memory space into the kernel space and the context switching as a lot of overhead if you want to start having, you know, tens of thousands of threads that are all being like switched between really quickly. And so, by keeping that scheduling in the same program, you avoid these context switches and that really reduces the overhead. And so, green threading is a pretty good model that works for a lot of languages, both Go and Java, I believe, use this model. And Rust had it for a long time, but removed it shortly before 1.0. And the reason that we removed it is that it ultimately was not a zero cost abstraction, specifically because of the first issue that I talked about where it was imposing costs on people who didn't need it. So, if you just wanted to write a Rust program that didn't need to use green threads, it wasn't a network service. You still had to have this language runtime that was responsible for scheduling all of your green threads. And so, this was especially a problem for people who were trying to embed Rust inside of like a larger C application. It's one of the ways that we've seen a lot of success in people adopting Rust is that they have some big C program and they want to start using Rust. And so, they start integrating a bit of Rust into their program. It's just writing one section of the code in Rust. But the problem is that if you have to set up this runtime in order to call the Rust, then it's too expensive to just have a small part of your program in Rust because you have to set up the runtime before you can call the Rust functions. And so, shortly before 1.0, we removed green threads from the language. We removed this language runtime and we now have a runtime that is essentially the same as C. And so, it makes it very easy to call between Rust and C and it's very cheap, which is one of the key things that makes Rust really successful. And having removed green threads, we still needed some sort of solution to async.io. But what we realized was that it needed to be a library-based solution. 
We needed some sort of with providing a good abstraction for async.io that was not a part of the language, it was not a part of this runtime that came with every program, but was just this library that you could opt into and use when you needed it. The most successful library-based solution in general is this concept called futures. It's also called promises and JavaScript. So, the idea of a future is that it represents a value that may not have evaluated yet. And so, you can manipulate it before you actually have, before the future is actually resolved. Eventually, we'll resolve to something, but you can start running things with it before it's actually resolved. And there's a lot of work done on futures in a lot of different languages and they are a great way for supporting a lot of combinators and especially this async await syntax that makes it much more ergonomic to build on top of this concurrency primitive. And so, futures can represent a lot of different things. So, async.io is kind of the biggest, the most prominent one where you maybe make a network request and you immediately get a future back, which once the network request is finished, will resolve into whatever that network request is returning. But you can also represent things like timeouts where a timeout is just a future that will resolve once that amount of time has passed. And even things that aren't doing in the I.O. or anything like that, where just CPU intensive work, you can run that on like a thread pool and then just get a future that you hold on to that once the thread pool is finished doing that work, the future will resolve. The problem with futures was that the way that they've been represented in most languages is this callback based approach where you have this future and you can schedule a callback to run once the future resolves. And so the future is responsible for figuring out when it resolves and then when it resolves it runs whatever your callback was. And all these distractions are built on top of this model. And this just really didn't work for us because there's a lot of people experiment with it a lot and found that it just was forcing way too many allocations. Essentially every callback that you tried to schedule had to get its own separate like create object heap allocation. And so there were these allocations everywhere, these dynamic dispatches, and this like approach failed the zero cost abstraction on the second principle where now you were, it was not affecting people who weren't using it, but if you did use it, it would just be way slower than if you just had it written something yourself. And so why would you use it? Because if you wrote the thing yourself would be much faster. But I hope it's not lost. Sorry, I'll set up a bit of a cold. This really great alternative abstraction that was pole based. Yes, we've arrived at this model. I really want to give credit to Alex who's here and Aaron Turan who came up with this idea where instead of futures scheduling a callback, there instead you pull them. And so there's this other part of the program called an executor, which is responsible for actually running the futures. And so what the executor does is it pulls the future. And the future either returns pending that it's not ready yet, or once it is ready, it returns ready. And this model has a lot of advantages. One advantage is that you can just cancel futures very simply because all you do to cancel them is you stop polling them. 
Whereas with this callback based approach, it was really difficult once you've scheduled some work to cancel that work and haven't stopped. It also really enabled us to have this really clean abstraction boundary between different parts of the program. So most futures libraries come with an event loop. And your futures are scheduled across this event loop, but this way of doing I.O. and you really don't have any control over it. But in Rust, we have this really clean boundary between the executor, which schedules your futures, the reactor, which handles all of the I.O. and then your actual code. And so the end user can decide which executor they want to use, which reactor they want to use, giving them the kind of control that is really important in a systems language. But the real advantage, the most important thing about this model is that it enabled us to have this really perfect zero cost way of implementing futures, where each representative is this kind of state machine. And so the way this works is that when the futures, the code that you write gets compiled down to, it gets actually compiled into native code, where it is this kind of like state machine where it has one variant for each pause point, for each I.O. event, essentially. And each variant then has the state that it needs to resume from that I.O. point. And that is represented as this essentially like an enum, where it's this one structure where it's the variant, the discriminant, and then a union of all of the states that it could possibly need. And so this is an attempt to visually represent that abstractly, where this is a, this state machine has, you know, you perform two I.O. events, and so it has these different states. And each state it has this, the amount of like, space it needs to store everything you would need to restore to that state. And the entire future is just a single heap allocation that's that size, where you just allocate that state machine into one place and a heap. And it's just no additional overhead. So you don't have these like, all these boxed callbacks and things like that. You just have this like perfect, really truly zero cost model. So I feel like that is usually a bit confusing to people. So I tried to, this is my best keynote, visually represent what's going on, which is that the, so you spawn a future and that puts the future in the heap in this one location. And then if there's a handle to it, that's you start with the executor, the executor pulls the future until eventually the future needs to perform some sort of I.O. In which case, the future gets handed off to the reactor. And the reactor, which is handling the I.O., registers that the future is waiting on this particular I.O. event. And then eventually when that I.O. event happens, the reactor will wake up the future using the waker argument that you passed in when you pulled it. And so waking the future up, passes it back to the executor. And then the executor will pull it again. And it will just go back and forth like this until eventually the future resolves. And so then when the future finally resolves and evaluates its final result, the executor knows that it's done. And then it drops the handle and drops the future and the whole thing is finished. And so it forms this sort of cycle where you pull the future, wait for I.O., wake it up again, pull it again, on and on until eventually the whole thing is finished. And this model ended up being quite fast. 
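An illustrative sketch (not compiler output) of the kind of enum a future with two pause points compiles down to: one variant per pause, each holding only the state needed to resume from that point, all in a single allocation.

```rust
enum GetUserFuture {
    // Not yet polled: everything needed to start.
    Start { username: String },
    // Paused on the database query; keeps only what the rest of the body still needs.
    WaitingOnQuery { sql: String },
    // Finished; polling again after this point is an error.
    Done,
}
```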
This is the sort of the benchmark that was posted in the first post about futures, where benchmarked features against a lot of different implementations from other languages. Hire is better and futures is the one on the far left. So we had this really great zero cost abstraction that was competitive with the fastest implementations of async I.O. in a lot of other languages. But of course the problem is that you don't want to write these state machines by hand. You have your whole entire application status of state machine is not very pleasant to write. But that's where the future abstraction is really helpful is that we can build these other APIs on top of it. And so the first solution that we had was this idea of futures combinators where you can build up these state machines by applying all of these methods of the future, sort of similar to the way that iterator adapters like filter and map work. And so this function, it just, what it does is it requests rustling.org and then converts that the response to a string. So instead of returning just a string, it returns a future of a string because it's going to be an async function. And it has these futures in the body that it's going to be calling and those are going to actually be the parts that do some I.O. And then they're all sort of combined together using these combinators like and then in map. And we build all these combinators like and then map, filter, map error, like all kinds of different things. And this works. It has some downsides, especially these like nested callbacks, which can be really difficult to read sometimes. And so because you know it has these downsides, we also started at it, tried to implement an async await implementation. And so the first version of async await was not part of the language. Instead, it was this library that provided this through like a syntax plugin. And this is doing the same thing that the previous function did. It just fetches rustling and turns it to a string. But it does so using async await. And so it's much more like straight line, looks much more like the way normal blocking I.O. works, where just like in the example that I showed originally, the only real difference is these annotations. And so the async annotation turns this function into a future instead of just returning immediately. And then the await annotations await on these on the futures that you actually construct inside of the function. And await under this poll model, desugars to this sort of loop where what you do is you just pull in a loop. And every time you get pending back, you yield all the way back up to the executor that you're pending. And so then it waits until it gets woken up again. And then when finally the future that you're awaiting finishes, it finishes with the value and you break out of the loop with the value. And that's what these await expressions evaluate to. So this seemed like a really good solution. You know, you have this async await notation, which is compiling down to these really awesome zero cost futures. And so it was sort of released futures into the wild and got feedback. And that's where we ran into problems. That essentially anyone who tried to use futures quickly ran into very confusing error messages, where it would just kind of complain about how your future isn't static or doesn't implement this trait. And it would be sort of this baffling thing you didn't really understand. And the compiler would like make helpful suggestions, which you would just follow until eventually compiled. 
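A small comparison of the two styles just described, assuming the `futures` crate; no real I/O happens here (the "request" is just a ready value), the point is only how the combinator chain and the async/await version read.

```rust
use futures::future::{self, FutureExt};

// Combinator style: the logic is a chain of adapters.
fn fetch_combinators() -> impl std::future::Future<Output = usize> {
    future::ready("https://www.rust-lang.org")
        .map(|url| url.len())
        .map(|len| len * 2)
}

// async/await style: the same logic reads as straight-line code.
async fn fetch_await() -> usize {
    let url = future::ready("https://www.rust-lang.org").await;
    url.len() * 2
}
```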
And so you would be like annotating closures with move, and you would put things into reference counted pointers and clone things and this and that. And it all felt like you were adding all this overhead to the thing that you didn't seem necessary. You didn't understand why you had to do it. And also when you were done, your code ended up looking like soup. And so tons of people were bouncing off of futures. And it didn't help that the combinators produced these really huge types where your entire terminal would be filled up with the type of one of your combinator chains. Or it would just be like, you know, an end then of an end then of a map error of a TCP stream and so on. And you know, you have to dig through this to try to figure out what the actual error that you encountered was. And I found this quote on Reddit, which I think really beautifully sums up all of the complaints about futures, which is that, you know, when using futures, the error messages are inscrutable. Having to use ref cell or clone everything for each feature leads to over company code. And it makes me wish that Russ just had garbage collection, which is, yeah, not great feedback. So looking at the situation maybe a year, a year and a half ago, it was clear that there were sort of two problems that needed to be solved in order to make futures more usable for people. And the first was we needed better error messages. And so the easiest way to do that is to build the syntax into the language and then they can hook into all of our diagnostics and error message, you know, support so that you can have really good error messages for async await. But the second was that most of these errors people are running into were actually them bouncing off a sort of obscure problem, which is called the borrowing problem, where there was this fundamental limitations in the way that futures were designed that made it something really common pattern was not possible to express. And that problem was that you can't in the original line of futures, you could not borrow across an await point. So if you were to await something, you couldn't have any references that were alive at that time. And so when people were having these problems that they were bouncing off of what they were actually ultimately what they were doing was they were trying to borrow while they were awaiting and they couldn't do that. And so if we could just make it so that was allowed, then most of these errors would go away and everything would be much easier to use. And you could just kind of write normal REST code with async and await and it would all work. And these kinds of borrows during await are extremely common because just the natural API service of Rust is to have references in the API. And so, but the problem with futures is that when you actually compile the future, which has to restore all that state, when you have some references to something else that's in the same stack frame, what you end up getting is the sort of self-referential future. And so here's the some code from the original like get user method where we have this SQL string and then when we call query with it, we use pass a reference to the SQL string. And so the problem here is that this reference to the SQL string is a reference to something else that's being stored in the same future state. 
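A small sketch of the borrowing pattern in question, with a hypothetical client type standing in for a real database or HTTP client: `&sql` is a reference into the future's own state held across an `.await`, which is exactly what the original futures design could not express.

```rust
struct HypotheticalClient;

impl HypotheticalClient {
    async fn query(&self, sql: &str) -> String {
        sql.to_uppercase() // pretend I/O happens here
    }
}

async fn fetch_len(client: &HypotheticalClient) -> usize {
    let sql = String::from("SELECT * FROM users");
    // `&sql` borrows from this future's own state across the await point.
    let response = client.query(&sql).await;
    response.len()
}
```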
And so it becomes the sort of self-referential struct where you have these are the fields of the future in theory if it were a real struct, you would have the database handle that you were in the self aspect of it. Then we'd also have the SQL string and the reference to the SQL string, which is ultimately a reference pointing back to a field of the same struct. And self-referential structs are sort of a really hard problem that we don't have a general solution for. And the reason that we can't allow you to have references to the same struct is that when you move that struct, then what happens is that we make a new copy of the struct in the location that you're moving it to and the old copy becomes invalidated. But when you make that copy, the reference that was self-referential is still pointing to the old copy. And that's becomes a dangling pointer and it's the kind of memory issues that Rust has to prevent. So we can't have self-referential structs because if you move them around, then they become invalidated. But what made this really frustrating in the futures case was that we actually don't really need to move these futures around. So if you remember the model where the future is in the heap and a handle to it is getting passed back and forth between the reactor and the executor, the future itself never actually moves. And so it's totally fine for the future to contain self-references as long as you never move it and you don't need to move it. So we really needed to solve this problem with some way to express in the API of futures that while you're pulling it, you're not allowed to move it around. And then if we just could express that somehow, then we could allow these kinds of self-references in the body of the future and then we could just have these references in your async functions and everything would work. And so we were working on this problem and ultimately we came out with this new API called PIN. And so PIN is a sort of adapter around other pointer types where they become this append reference instead of just a normal reference. And append reference, in addition to whatever other guarantees it has, it guarantees that the reference will never, the value of the reference is pointing to will never be moved again. And so it guarantees that it's going to stay in the same place until eventually it gets deallocated. And so if you have something in your API which says that it has to be taken by PIN, then you know that it will never be moved again and you can have these kinds of self-referential structs. And so we changed the way that futures work so that now there's just being a boxed future, it's a boxed future behind a PIN. So we know that wherever we boxed it up, put it in the heap, it's guaranteed now by part of the PIN API that it will never move again. And then, you know, when you pull the future instead of just passing a normal reference to it, we pass the PIN reference to it. And so the future knows that it can't be moved. And the trick here that makes this all work is that you can only get an un-PIN reference out of a PIN reference in unsafe code. It's an unsafe function to be able to do that. And so the API looks roughly like this, where you have PIN which is just, you know, a wrapper around a pointer type. It doesn't have any runtime overhead or anything. It just like demarcates it as being PIN. And then a PIN to box can be converted into a PIN reference. But the only way to convert a PIN reference into an un-PIN reference is to use an unsafe function. 
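A small sketch of the Pin surface just described, with `String` standing in for a future: pinning promises the value will not move again, and only unsafe code can hand back a plain mutable reference for types that are not `Unpin`.

```rust
use std::pin::Pin;

fn pin_demo() {
    // Box::pin puts the value on the heap and promises it will never be moved again.
    let mut pinned: Pin<Box<String>> = Box::pin(String::from("pretend I'm a future"));

    // A pinned box can hand out Pin<&mut T>, which is what poll receives.
    let as_pin_mut: Pin<&mut String> = pinned.as_mut();

    // For !Unpin types (like compiler-generated futures), turning that into a plain
    // &mut T — which would allow moving the value — requires unsafe code.
    let raw: &mut String = unsafe { as_pin_mut.get_unchecked_mut() };
    raw.push_str(" (mutated in place)");
}
```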
And so what we did was then we just changed the futures API so that instead of taking a normal reference, it takes a PIN reference. And otherwise, the API is the same as it was before. And this is essentially the API that we're going to be stabilizing. And so with that change, this code from the first example just works the way that it's written. And so you can just write code exactly the way you would write it with blocking I.O. Add these async and await annotations. And then what you get is this, you know, async I.O. with this really awesome zero cost abstraction where it's basically as cheap as if you hand wrote the state machine yourself by hand. So the situation today, PINning was stabilized in the last release about a month ago. We're in the process of stabilizing the future API. So probably in 135, maybe it will slip in B136, which is, you know, in about two or three months basically. And then we're hoping sometime this year we'll have async await stabilized, hopefully by the end of summer even, we're going to have this stabilized and so that people will be able to write non-blocking I.O. network services using this syntax that makes it very similar to writing with blocking I.O. Looking beyond that stabilization, we're also already starting to work on these sort of more long-term features like streams, I think is probably the next big one where it's an async future. So a future is just, you know, one value, but a stream is many values being yielded asynchronously. It's essentially like an asynchronous iterator. And so you'll be able to like loop asynchronously over a stream and things like that. And it's very important for like a lot of use cases where you have like, you know, streaming, HTTP, WebSockets, HTTP to push requests, that kind of thing where instead of having a networking, like RPC model where you, you know, make a network request and get a single response, you have streams of responses and requests going back and forth. There's also today a limitation that async fn can't be used in traits. There's a lot of work being done in the compiler to make it so that it would be able to support that. And looking out even beyond these features, someday we want to have generators where, sort of similar to generators in like Python or JavaScript, where instead of just having functions that return, you can also have functions that yield. And so then you can resume them again after the yield and you can use these to write functions as a way of writing iterators and streams, similar to how an async function lets you write a function that's actually a future. But I guess just sort of recapping the real critical insights that led to this sort of zero cost async Ion model where that first was this pull based version of futures, where we were able to compile these futures into these really tight state machines. And then secondly, this way of doing async wait syntax where we're able to have references across the wait points because of pinning. Supposed to say thank you, I don't know how it became an X. Thank you.
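For reference, the shape of the trait the talk says was headed for stabilization; this mirrors `std::future::Future` with the pinned receiver, simplified to its signature.

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

pub trait Future {
    type Output;

    // The executor polls through a pinned reference, so the future knows it will
    // never be moved, and uses the Context's waker to get scheduled again.
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}
```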
|
Boats is a member of the language design, library, and cargo teams of the Rust Project. They are Senior Research Engineer working on Rust for Mozilla Research. Contributing to Rust since 2014, they've done design work for significant language extensions like generic associated types and constant generics. For the last year, Boats has been focused on adding async/await syntax for non-blocking I/O to Rust. Follow us on Twitter: https://twitter.com/rustlatamconf
|